Background

The NCLBA requires states to set challenging academic content and achievement standards in reading or language arts and mathematics and to determine whether school districts and schools make adequate yearly progress (AYP) toward meeting these standards. To make AYP, schools generally must: show that the percentage of students scoring at the proficient level or higher meets the state proficiency target for the school as a whole and for designated student groups, test 95 percent of all students and those in designated groups, and meet goals for an additional academic indicator, such as the state's graduation rate. The purpose of Title I, Part A is to improve academic achievement for disadvantaged students. Schools receiving Title I federal funds that do not make AYP for 2 or more years in a row must take action to assist students, such as offering students the opportunity to transfer to other schools or providing additional educational services like tutoring. States measure AYP using a status model, which determines whether schools and students in designated groups meet proficiency targets on state tests 1 year at a time. States generally used data from the 2001-2002 school year to set the initial percentage of students that needed to be proficient for a school to make AYP, known as a starting point. From this point, they set annual proficiency targets that increase to 100 percent by 2014. For example, for schools in a state with a starting point of 28 percent to reach 100 percent by 2014, the percentage of students who score at or above proficient on the state test would have to increase by 6 percentage points each year, as shown in figure 1. Schools that do not reach the state target generally will not make AYP. The law indicates that states are expected to close achievement gaps but does not specify annual targets for measuring progress toward doing so. States thus have flexibility in the rate at which they close these gaps. 
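The starting-point arithmetic above can be sketched in a few lines. This is an illustration only: the 28 percent starting point and the 2002-2014 window are the figures from the example, not any particular state's actual plan.

```python
# Illustrative calculation of annual AYP proficiency targets, assuming a
# starting point of 28 percent set from 2001-2002 data and the statutory
# goal of 100 percent proficiency by 2014 (12 equal annual increments).
start_year, end_year = 2002, 2014
start_pct, goal_pct = 28.0, 100.0

step = (goal_pct - start_pct) / (end_year - start_year)  # points per year
targets = {year: start_pct + step * (year - start_year)
           for year in range(start_year, end_year + 1)}
```

With these inputs the required increase works out to 6 percentage points per year, matching the figure cited in the text.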
To determine the extent to which achievement gaps are closing, states measure the difference between the percentage of students in designated student groups and the percentage of their peers that reach proficiency. For example, an achievement gap exists if 40 percent of a school's non-economically disadvantaged students were proficient compared with only 16 percent of economically disadvantaged students, a gap of 24 percentage points. To close the gap, the percentage of students in the economically disadvantaged group that reaches proficiency would have to increase at a faster rate than that of their peers. For schools that miss their status model targets in a single year, the law includes a "safe harbor" provision that provides a way for schools showing significant increases in the proficiency rates of student groups to make AYP. Safe harbor measures academic performance in a way similar to how certain growth models do: it allows a school to make AYP by reducing the percentage of students in designated student groups that were not proficient by 10 percent, so long as the groups also show progress on another academic indicator. For example, in a state with a status model target of 40 percent proficient, a school could make AYP under safe harbor if 63 percent of a student group was not proficient compared with 70 percent in the previous year.

Nearly All States Reported Using or Considering Growth Models to Measure Academic Performance

Twenty-six states reported using growth models in addition to their status models to track the performance of schools, designated student groups, or individual students, as reported in our March 2006 survey. Additionally, nearly all states are considering the use of growth models (see fig. 2). Of the 26 states using growth models, 19 states reported measuring changes for schools and student groups, while 7 states reported measuring changes for schools, student groups, and individuals, as shown in table 1. 
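The safe harbor test described earlier reduces to a simple relative comparison: the share of a group's students who are not proficient must fall by at least 10 percent from the prior year. A minimal sketch of that check (the function name and sample percentages are illustrative, not part of the law's text):

```python
# Sketch of the NCLBA "safe harbor" calculation: a student group can
# satisfy AYP if its non-proficient share drops by at least 10 percent
# (relative) from the previous year, alongside progress on another
# academic indicator (not modeled here).
def meets_safe_harbor(prior_pct_proficient, current_pct_proficient):
    prior_not_proficient = 100.0 - prior_pct_proficient
    current_not_proficient = 100.0 - current_pct_proficient
    return current_not_proficient <= 0.90 * prior_not_proficient

# The example from the text: 70 percent not proficient last year,
# 63 percent not proficient this year (a 10 percent relative drop).
meets_safe_harbor(30.0, 37.0)  # satisfied: 63 <= 0.9 * 70
```

Note that the 10 percent reduction is relative to the prior year's non-proficient share, not a flat 10 percentage points.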
For example, Massachusetts used a model that measures growth for the school as a whole and for designated student groups. The state awards points to schools in 25-point increments for each student, depending on how students scored on the state test. Schools earn 100 points for each student who reaches proficiency but fewer points for students below proficiency. The state averages the points to award a final score to schools. Growth in Massachusetts is calculated as the difference between the annual scores that a school earns in 2 consecutive years. Figure 3 illustrates the growth a school can make from one year to the next as measured by the Massachusetts model. Tennessee reported using a growth model that sets different goals for each individual student based on the student's previous test scores. The goal is the score that a student would be expected to receive, and any difference between a student's expected and actual score is considered that student's amount of yearly growth, as shown in figure 4. In addition, Tennessee's model, known as the Tennessee Value-Added Assessment System, estimates the unique contribution—the value added—that the teacher and school make to each individual student's growth in test scores over time. The state then uses that amount of growth, the unique contribution of the school, and other information to determine whether schools are below, at, or above their expected level of performance. The model also grades schools with an A, B, C, D, or F, which is considered a reflection of the extent to which the school is meeting its requirements for student learning. Seventeen of the 26 states using growth models reported that their models were in place before the passage of the NCLBA during the 2001-2002 school year, and the remaining 9 states implemented them after the law was passed. States used the models for purposes such as rewarding effective teachers and designing intervention plans for struggling schools. 
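The Massachusetts point-averaging approach described earlier can be sketched as follows. The specific point bands below are assumptions for illustration; the testimony states only that points are awarded in 25-point increments, with 100 points for each student reaching proficiency.

```python
# Simplified sketch of the Massachusetts growth calculation: each
# student earns points in 25-point increments (100 for proficiency,
# less below it), the school's score is the average, and growth is the
# year-over-year difference in that average. The performance-level
# names and their point values here are illustrative assumptions.
POINTS = {"proficient_or_above": 100, "needs_improvement_high": 75,
          "needs_improvement_low": 50, "warning_high": 25, "warning_low": 0}

def school_score(student_levels):
    return sum(POINTS[level] for level in student_levels) / len(student_levels)

prior_year = ["proficient_or_above", "needs_improvement_high", "warning_high"]
current_year = ["proficient_or_above", "proficient_or_above", "needs_improvement_low"]
growth = school_score(current_year) - school_score(prior_year)
```

A design point worth noting: because below-proficient students still earn partial points, a school can show growth even before any additional student crosses the proficiency bar.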
For example, North Carolina used its model as a basis to decide whether teachers receive bonus money. Tennessee used its value-added model to provide information about which teachers are most effective with which student groups. In addition to predicting students' expected scores on state tests, Tennessee's model was used to predict scores on college admissions tests, which is helpful for students who want to pursue higher education. In addition, California used its model to identify schools eligible for a voluntary improvement program.

Certain Growth Models Can Measure Progress toward Key NCLBA Goals

Certain growth models can measure progress in achieving the key NCLBA goals of reaching universal proficiency by 2014 and closing achievement gaps. While states developed growth models for purposes other than NCLBA, states such as Massachusetts and Tennessee have adjusted their models to meet NCLBA goals. The Massachusetts model has been used to make AYP determinations as part of the state's accountability plan in place since 2003. Tennessee submitted a new model to Education for the growth model pilot that differs from the value-added model described earlier. This new model gives schools credit for students projected to reach proficiency within 3 years in order to meet key NCLBA goals. Our analysis of how the models in Massachusetts and Tennessee can measure progress toward the law's two key goals is shown in table 2. Massachusetts designed a model that can measure progress toward the key goals of NCLBA by setting targets for the improvement of schools and their student groups that increase over time until all students are proficient in 2014. Schools can get credit for improving student proficiency even if, in the short term, the requisite number of students has yet to reach the state's status model proficiency targets. For example, figure 5 illustrates a school that is on track to make AYP annually through 2014 by reaching its growth targets. 
Although these growth targets increase at a faster pace than the state's annual proficiency targets until 2014, they provide the school with an additional measure by which it can make AYP. The model also measures whether achievement gaps are closing by setting targets for designated student groups, similar to how it sets targets for schools as a whole. Schools that increase proficiency too slowly—that is, that meet neither status nor growth targets—will not make AYP. For example, one selected school in Massachusetts showed significant gains for several designated student groups that were measured against their own targets. However, the school did not make AYP because gains for one student group were not sufficient. This group—students with disabilities—fell short of its growth target, as shown in figure 6. Tennessee developed a different model that can also measure progress toward the NCLBA goals of universal proficiency and closing achievement gaps. Tennessee created a new version of the model it had been using for state purposes to better align with NCLBA. Referred to as a projection model, this approach projects individual students' test scores into the future to determine when they may reach the state's status model proficiency targets. To make AYP under this proposal, a school could reach the state's status model targets by counting as proficient in the current year those students who are projected to be proficient in the future. The state projects scores for elementary and middle school students 3 years into the future to determine if they are on track to reach proficiency, as follows: fourth-grade students are projected to reach proficiency by seventh grade, fifth-grade students by eighth grade, and sixth-, seventh-, and eighth-grade students on the state's high school proficiency test. 
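The projection-model idea can be sketched as a counting exercise: a school meets the status target if enough students are projected to be proficient 3 years out. The linear trend used below is a deliberately naive stand-in; Tennessee's actual projections come from a far more sophisticated statistical model, and the cutoff and target values here are hypothetical.

```python
# Hedged sketch of a projection model: count students whose projected
# score 3 years ahead clears the proficiency cutoff, and compare that
# share against the state's status model target. The linear projection
# is an illustrative simplification, not Tennessee's actual method.
def projected_score(score_history, years_ahead=3):
    trend = score_history[-1] - score_history[-2]  # naive annual trend
    return score_history[-1] + trend * years_ahead

def makes_target(score_histories, proficient_cutoff, target_pct):
    on_track = sum(projected_score(h) >= proficient_cutoff
                   for h in score_histories)
    return 100.0 * on_track / len(score_histories) >= target_pct

# Three students with two years of (hypothetical) scale scores each.
histories = [[38, 44], [50, 55], [40, 39]]
makes_target(histories, proficient_cutoff=60, target_pct=50)
```

The key policy consequence, discussed later in this statement, is that students counted as proficient on projection alone may still be below the cutoff today.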
These projections are based on prior test data and assume that the student will attend middle or high schools with average performance (an assumption known as average schooling experience). At our request, Tennessee provided analyses for students in several schools that would make AYP under the proposed model. To demonstrate how the model works, we selected students from a school and compared their actual results in fourth grade (panel A) with their projected results for seventh grade (panel B) (see fig. 7). Tennessee's proposed model can also measure achievement gaps. Under NCLBA, a school makes AYP if all student groups meet the state proficiency target. In Tennessee's model, whether the achievement gap is potentially closing would be determined through projections of students' performance in meeting the state proficiency target.

States Face Challenges in Implementing Growth Models

States generally face challenges in collecting and analyzing the data required to implement growth models, including models that would meet the law's goals. In addition, using growth models can present risks for states if schools are designated as making AYP while still needing assistance to progress. Education has initiatives that may help states address these challenges. States must meet certain data system requirements to implement growth models, including models that would meet NCLBA requirements. First, a state's ability to collect comparable data over at least 2 years is a minimum requirement for any growth model. States must ensure that test results are comparable from one year to the next and possibly from one grade to the next, both of which are especially challenging when test questions and formats change. Second, the capacity to collect data across time and schools is also required to implement growth models that use student-level data. This capacity often requires a statewide system that assigns unique numbers to identify individual students. 
Developing and implementing these systems is a complicated process that includes assigning numbers, setting up the system in all schools and districts, and correctly matching individual student data over time, among other steps. Third, states need to ensure that the data used in their performance calculations are free from errors. While ensuring data accuracy is important for status models, doing so is particularly important for growth models because errors in multiple years can accumulate, leading to unreliable results. States also need greater research and analysis expertise to use growth models, as well as staff who can manage and communicate the models' results. For example, Tennessee officials told us that they have contracted with a software company for several years because of the complexity of the model and its underlying data system. Florida has a contract with a local university to assist it with assessing data accuracy, including the unique student identifiers required for its model. In addition, states will incur training costs as they inform teachers, administrators, the media, legislators, and the general public about the additional complexities that arise when using growth models. For example, administrators in one district in North Carolina told us that their district lacks enough specialists who can explain the state's growth model to all principals and teachers in need of guidance and additional training. Using growth models can present risks for states if schools are designated as making AYP while still needing assistance to progress. On the basis of growth model results, some schools would make AYP even though they may have relatively low-achieving students. As a result, some students in Title I schools may be disadvantaged by not receiving federally required services. 
In two Massachusetts districts that we analyzed, 23 of the 59 schools that made AYP did so based on the state’s growth model, even though they did not reach the state’s status model proficiency rate targets in 2003-2004. Consequently, these schools may not be eligible to receive services required under NCLBA for schools in need of improvement, such as tutoring and school choice. Because these schools would need to sustain high growth rates in order to achieve universal proficiency by 2014, it is likely that their students would benefit from additional support. In Tennessee, 47 of the 353 schools that had not made AYP in the 2004-2005 school year would do so under the state’s proposed projection model. One school that would be allowed to make AYP under the proposed model was located in a high-poverty, inner-city neighborhood. That school receives Title I funding, as two-thirds of its students are classified as economically disadvantaged. The school was already receiving services required under NCLBA to help its students. If the school continues to make AYP under the growth model, these services may no longer be provided. Education’s initiatives, such as the growth model pilot project, may facilitate growth model implementation. In November 2005, Education announced a pilot project for states to submit proposals for using a growth model—one that meets criteria established by the department—along with their status model, to determine AYP. While NCLBA does not specify the use of growth models for making AYP determinations, the department started the pilot to evaluate how growth models might help schools meet NCLBA proficiency goals and close achievement gaps. For the growth model pilot project, each state had to demonstrate how its growth model proposal met Education’s criteria, many of which are consistent with the legal requirements of status models. 
In addition to those requirements, Education included criteria that the proposed models track student progress over time and have an assessment system with tests that are comparable over time. Of the 20 proposals, Education approved 2 states—North Carolina and Tennessee—to use growth models to make AYP determinations in the 2005-2006 school year. States may submit proposals for the pilot again this fall. In addition to meeting all of the criteria, Education and peer reviewers noted that Tennessee and North Carolina had many years of experience with data systems that support growth models. These states must report to Education the number of schools that made AYP on the basis of their status and growth models. Education expects to share the results with other states, Congress, and the public after it assesses the effects of the pilot. In addition to the growth model pilot project, Education awarded nearly $53 million in grants to 14 states for the design and implementation of statewide longitudinal data systems—systems that are essential for the development of student-level growth models. While independent of the pilot project, states with a longitudinal data system—one that gathers data such as test scores on the same student from year to year—will be better positioned to implement a growth model than they would have been without it. Education intended the grants to help states generate and use accurate and timely data to meet reporting requirements, support decision making, and aid education research, among other purposes. Education plans to disseminate lessons learned and solutions developed by the states that received grants.

Conclusion

While status models provide a snapshot of academic performance, growth models can provide states with more detailed information on how schools' and students' performance has changed from year to year. Growth models can recognize schools whose students are making significant gains on state tests but are still not proficient. 
Educators can use information about the academic growth of individual students to tailor interventions to the needs of particular students or groups. In this respect, models that measure individual students' growth provide the most in-depth and useful information, yet the majority of the models currently in use are not designed to do this. Through its approval of Massachusetts' model and the growth model pilot program, Education is proceeding prudently in its effort to allow states to use growth models to meet NCLBA requirements. Education is allowing only states with the most advanced models that can measure progress toward NCLBA goals to use the models to determine AYP. Under the pilot project, which has clear goals and criteria that require states to compare results from their growth models with status model results, Education is poised to gain valuable information on whether growth models are overstating progress or appropriately giving credit to fast-improving schools. While growth models may be defined as tracking the same students over time, GAO used a definition that also includes tracking the performance of schools and groups of students. In comments on our report, Education expressed concern that this definition may confuse readers because it is very broad and includes models that compare changes in scores or proficiency levels of schools or groups of students. GAO used this definition of growth to reflect the variety of approaches states are taking to measure academic progress. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or members of the committee may have.

GAO Contact and Staff Acknowledgments

For more information on this testimony, please call Marnie S. Shaul at (202) 512-7215. Individuals making key contributions to this testimony include Blake Ainsworth, Karen Febey, Harriet Ganson, Shannon Groff, Andrew Huddleston, Jason Palmer, and Rachael Valliere. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The No Child Left Behind Act (NCLBA) requires that states improve academic performance so that all students reach proficiency in reading and mathematics by 2014 and that achievement gaps close among student groups. States set annual proficiency targets using an approach known as a status model, which calculates test scores 1 year at a time. Some states are interested in using growth models, which measure changes in test scores over time, to determine if schools are meeting proficiency targets. The Chairman of the Committee on Education and the Workforce asked GAO to testify on its recent report on measuring academic growth. Specifically, this testimony discusses (1) how many states are using growth models and for what purposes, (2) how growth models can measure progress toward achieving key NCLBA goals, and (3) what challenges states face in using growth models, especially to meet the law's key goals. While growth models may be defined as tracking the same students over time, GAO used a definition that also included tracking the performance of schools and groups of students. In comments on the report, Education said that this definition could be confusing. GAO used this definition of growth to reflect the variety of approaches states were taking. Nearly all states were using or considering growth models for a variety of purposes in addition to their status models as of March 2006. Twenty-six states were using growth models, and another 22 were either considering or in the process of implementing them. Most states using growth models measured progress for schools and for student groups, and 7 also measured growth for individual students. States used growth models to target resources for students that need extra help or to award teachers bonuses based on their school's performance. Certain growth models are capable of tracking progress toward the goals of universal proficiency by 2014 and closing achievement gaps. 
For example, Massachusetts uses its model to set targets based on the growth that it expects from schools and their student groups. Schools can make adequate yearly progress (AYP) if they reach these targets, even if they fall short of reaching the statewide proficiency targets set with the state's status model. Tennessee designed a model that projects students' test scores and whether they will be proficient in the future. In this model, if 79 percent of a school's students are predicted to be proficient in 3 years, the school would reach the state's 79 percent proficiency target for the current school year (2005-2006). States face challenges measuring academic growth, such as creating data and assessment systems to support growth models and having personnel to analyze and communicate results. The use of growth models to determine AYP may also challenge states to make sure that students in low-performing schools receive needed assistance. U.S. Department of Education (Education) initiatives may help states address these challenges. Education started a pilot project for states to use growth models that meet the department's specific criteria, including models that track progress of individual students, to determine AYP. Education also provided grants to states to track individual test scores over time.
High-Risk Designation Removed

When legislative, administration, and agency actions, including those in response to our recommendations, result in significant progress toward resolving a high-risk problem, we remove the high-risk designation. The five criteria for determining whether the high-risk designation can be removed are (1) a demonstrated strong commitment to, and top leadership support for, addressing problems; (2) the capacity to address problems; (3) a corrective action plan; (4) a program to monitor corrective measures; and (5) demonstrated progress in implementing corrective measures. For our 2011 high-risk update, we determined that two areas warranted removal from the High-Risk List: the Department of Defense (DOD) Personnel Security Clearance Program and the 2010 Census. As we have with areas previously removed from the High-Risk List, we will continue to monitor these areas, as appropriate, to ensure that the improvements we have noted are sustained. If significant problems again arise, we will consider reapplying the high-risk designation.

Department of Defense Personnel Security Clearance Program

We are removing DOD's personnel security clearance program from the High-Risk List because of the agency's progress in timeliness and the development of tools and metrics to assess quality, as well as its commitment to sustaining progress. Importantly, continued congressional oversight and the committed leadership of the Suitability and Security Clearance Performance Accountability Council (Council)—which is responsible for overseeing security clearance reform efforts—have greatly contributed to the progress of DOD and governmentwide security clearance reform. DOD officials, in coordination with the Council, have demonstrated a strong commitment to, and a capacity for, addressing security clearance reform efforts in line with the Intelligence Reform and Terrorism Prevention Act (IRTPA) of 2004. 
Specifically, DOD (1) significantly improved the timeliness of security clearances and met the IRTPA objective for processing 90 percent of initial clearances on average within 60 days for fiscal year 2010, (2) worked with members of the Council to develop a strategic framework for clearance reform, (3) designed quality tools to evaluate the completeness of clearance documentation, (4) issued guidance on adjudication standards, and (5) continues to be a prominent player in the overall security clearance reform effort, which includes entities within the Office of Management and Budget, the Office of Personnel Management, and the Office of the Director of National Intelligence. These efforts have yielded positive results. Continued congressional oversight and the committed leadership of DOD have greatly contributed to the progress in addressing the problems with the personnel security clearance process. We will continue to monitor DOD's efforts because security clearance reform is ongoing, and DOD needs to place a high priority on ensuring that timeliness improvements continue and that quality is built into every step of the process using quantifiable and independently verifiable metrics.

The 2010 Census

We removed the 2010 Census from our High-Risk List because the U.S. Census Bureau (Bureau) generally completed its peak census data collection activities consistent with its operational plans; released the state population counts used to apportion Congress on December 21, 2010, several days ahead of the legally mandated end-of-year deadline; and its remaining activities appear to be on track, including, as required by law, delivering the data that states use for congressional redistricting by April 1, 2011. A successful census is critical because the census is a constitutionally mandated program used to apportion and redistrict the U.S. 
House of Representatives, help allocate about $400 billion yearly in federal financial assistance, and inform the planning and investment decisions of numerous public- and private-sector entities. In March 2008, we designated the 2010 Census a high-risk area because of long-standing weaknesses in the Bureau's information technology (IT) acquisition and contract management function, problems with the performance of handheld computers used to collect census data, and uncertainty over the ultimate cost of the census, which escalated from an initial estimate of $11.3 billion in 2001 to around $13 billion. To address these issues and help secure a successful census, the Bureau demonstrated strong commitment and top leadership support to mitigate the risks, including bringing in experienced personnel to key positions and taking steps to implement our recommendations to strengthen its IT and other management and planning functions. At the same time, similar to the case with the DOD Personnel Security Clearance Program, active congressional oversight—including a dozen congressional hearings held after we added the census to our High-Risk List—helped ensure that the Bureau effectively designed and managed operations and kept the enumeration on schedule. Although every census has its decade-specific difficulties, societal trends—including growing concerns over personal privacy, more non-English speakers, and more people residing in makeshift and other nontraditional living arrangements—make each decennial inherently challenging. As shown in figure 1, the cost of enumerating each housing unit has escalated from an average of around $16 in 1970 to around $98 in 2010, an increase of over 500 percent (in constant 2010 dollars). At the same time, the mail response rate—a key indicator of a successful census—has declined from 78 percent in 1970 to 63 percent in 2010. 
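The cost-escalation figure cited above can be checked with simple arithmetic, using the rounded per-housing-unit costs from the text (constant 2010 dollars):

```python
# Verifying the cited cost escalation: average enumeration cost per
# housing unit rose from about $16 in 1970 to about $98 in 2010
# (constant 2010 dollars). Both inputs are the rounded figures from
# the text, so the result is approximate.
cost_1970, cost_2010 = 16.0, 98.0
pct_increase = 100.0 * (cost_2010 - cost_1970) / cost_1970  # 512.5
```

An increase of roughly 512 percent, consistent with the "over 500 percent" stated in the text.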
Put another way, the Bureau has to invest substantially more resources each decade in an effort to keep pace with key results from prior enumerations. The bottom line is that the fundamental design of the enumeration—in many ways unchanged since 1970—is no longer capable of delivering a cost-effective headcount given ongoing and newly emerging societal trends. Thus, while the 2020 Census may seem well over the horizon, research and planning activities need to start early in the decade to help ensure the 2020 Census is as cost-effective as possible. Indeed, the Bureau’s past experience has shown that early investments in planning can help reduce the costs and risks of downstream operations. Going forward, potential focus areas for Census reform include new data collection methods such as using administrative records from other government agencies, including driver’s licenses; better leveraging innovations in technology and social media to more fully engage census stakeholders and the general public on census issues; reaching agreement on a set of criteria that could be used to weigh the trade-offs associated with the need for high levels of accuracy on the one hand, and the increasing cost of achieving that accuracy on the other hand; and ensuring that the Bureau’s approach to human capital management, collaboration, capital decision-making, knowledge sharing, and other internal functions are aligned toward delivering a more cost-effective headcount. Ongoing congressional oversight over the course of the decade will also be critical for ensuring the Bureau’s reform efforts stay on track. The Bureau recognizes that it needs to change its method of doing business and has already taken some important first steps in this regard. For example, the Bureau is rebuilding its research directorate to lead early planning efforts and has developed a strategic plan for 2020 and other related documents that, among other things, outline the Bureau’s mission and vision for 2020. 
Thus, in looking ahead toward the next census, it will be vitally important both to identify lessons learned from the 2010 enumeration to improve existing census-taking activities and to re-examine and perhaps fundamentally transform the way the Bureau plans, tests, implements, monitors, and evaluates future enumerations in order to address long-standing challenges.

New High-Risk Area: Management of Federal Oil and Gas Resources

We have designated the Department of the Interior's management of federal oil and gas on leased federal lands and waters as high risk because Interior (1) does not have reasonable assurance that it is collecting its share of revenue from oil and gas produced on federal lands; (2) continues to experience problems in hiring, training, and retaining sufficient staff to provide oversight and management of oil and gas operations on federal lands and waters; and (3) is currently engaged in a broad reorganization of both its offshore oil and gas management and revenue collection functions. With regard to this reorganization, there are many open questions about whether Interior has the capacity to undertake such an effort while continuing to provide reasonable assurance that billions of dollars of revenue owed the public are being properly assessed and collected and that oil and gas exploration and production on federal lands and waters are well managed. Federal oil and gas resources provide an important source of energy for the United States, create jobs in the oil and gas industry, and generate billions of dollars annually in revenues that are shared among federal, state, and tribal governments. Revenue generated from federal oil and gas production is one of the largest nontax sources of federal government funds, accounting for about $9 billion in fiscal year 2009. 
Also, the explosion onboard the Deepwater Horizon and oil spill in the Gulf of Mexico in April 2010 emphasized the importance of Interior’s management of permitting and inspection processes to ensure operational and environmental safety. The National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling reported in January 2011 that this disaster was the product of several individual missteps and oversights by BP, Halliburton, and Transocean, which government regulators lacked the authority, the necessary resources, and the technical expertise to prevent. Historically, Interior’s Bureau of Land Management (BLM) managed onshore federal oil and gas activities, while the Minerals Management Service (MMS) managed offshore activities and collected royalties for all leases. Interior recently began restructuring its oil and gas program, transferring offshore oversight responsibilities to the newly created Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE) and revenue collection to a new Office of Natural Resource Revenue. Interior faces ongoing challenges in three broad areas, including the following:

Revenue collection. In 2008, GAO reported that Interior collected lower levels of revenues for oil and gas production than all but 11 of 104 oil and gas resource owners whose revenue collection systems were evaluated in a comprehensive industry study—these resource owners included many other countries as well as some states. GAO recommended that Interior undertake a comprehensive reassessment of its revenue collection policies and processes. Interior has commissioned such a study in response to GAO’s September 2008 report, and the study is expected to be completed in 2011. The results of the study may reveal the potential for greater revenues to the federal government. GAO also reported in 2010 that neither BLM nor MMS had consistently met their statutory requirements or agency goals for oil and gas production verification inspections. 
Without such verification, Interior cannot provide reasonable assurance that it is collecting the public’s legal share of revenue from oil and gas development on federal lands and waters. In addition, GAO reported in 2009 on numerous problems with Interior’s efforts to collect data on oil and gas produced on federal lands, including missing data, errors in company-reported data on oil and gas production, sales data that did not reflect prevailing market prices for oil and gas, and a lack of controls over changes to the data that companies reported. As a result of Interior’s lack of consistent and reliable data on the production and sale of oil and gas from federal lands, Interior could not provide reasonable assurance that it was assessing and collecting the appropriate amount of royalties on this production. GAO made a number of recommendations to Interior to improve controls on the accuracy and reliability of royalty data. Interior generally agreed with GAO’s recommendations and is working to implement many of them, but these efforts are not complete and it is uncertain if they will be fully successful.

Human capital. GAO has reported that BLM and MMS have encountered persistent problems in hiring, training, and retaining sufficient staff to carry out their oversight and management of oil and gas operations on federal lands and waters. For example, in 2010, GAO found that BLM and MMS experienced high turnover rates in key oil and gas inspection and engineering positions. As a result, Interior faces challenges meeting its responsibilities to oversee oil and gas development on federal leases, potentially placing both the environment and royalties at risk. GAO made recommendations to address these issues. 
While Interior’s reorganization of MMS includes plans to hire additional staff with expertise in oil and gas inspections and engineering, these plans have not been fully implemented, and it remains unclear whether Interior will be fully successful in hiring, training, and retaining these staff. Further, human capital issues also exist in the BLM and the management of onshore oil and gas, and these issues have not been addressed in Interior’s reorganization plans.

Reorganization. In May 2010, the Secretary of the Interior announced plans to reorganize MMS—its bureau responsible for overseeing offshore oil and gas activities and collecting royalties—into three separate bureaus. The Secretary of the Interior stated that dividing MMS’s responsibilities among three separate bureaus will help ensure that each of the three newly established bureaus has a distinct and independent mission. While this reorganization may eventually lead to more effective operations, GAO has reported that organizational transformations are not simple endeavors and require the concentrated efforts of both leaders and employees to realize intended synergies and accomplish new organizational goals. One key practice that GAO has identified for effective organizational transformation is to balance continued delivery of services with transformational activities. However, we are concerned about Interior’s capacity to find the proper balance given its history of management problems and challenges in the human capital area. Specifically, GAO is concerned about Interior’s ability to undertake this reorganization while providing reasonable assurance that billions of dollars of revenues owed the public are being properly assessed and collected and that oversight of oil and gas exploration and production on federal lands and waters maintains an appropriate balance between efficiency and timeliness on one hand, and protection of the environment and operational safety on the other. 
In addition, Interior’s reorganization efforts do not address BLM’s ongoing challenges with its permitting and inspections programs and human capital challenges. Interior must successfully address the challenges GAO has identified, implement open recommendations, and meet its routine responsibilities to manage federal oil and gas resources in the public interest, while managing a major reorganization that has the potential to distract agency management from other important tasks and put additional strain on Interior staff. While Interior recently began implementing a number of GAO recommendations, including those intended to improve the reliability of data necessary for determining royalties, the agency has yet to fully implement a number of recommendations, including those intended to (1) provide reasonable assurance that oil and gas produced from federal leases is accurately measured and that the public is getting an appropriate share of oil and gas revenues, and (2) address its long-standing human capital issues.

Remaining High-Risk Areas

While there has been some progress on nearly all of the issues that remain on the High-Risk List, the nation cannot afford to allow problems to persist. Addressing high-risk problems can save billions of dollars each year. Several areas on GAO’s list illustrate both the challenges of addressing difficult and tenacious high-risk problems and the opportunities for savings that can accrue if progress is made to address high-risk problems.

Medicare and Medicaid. GAO designated Medicare as a high-risk program because its complexity and susceptibility to improper payments, added to its size, have led to serious management challenges. In 2010, Medicare covered 47 million elderly and disabled beneficiaries and had estimated outlays of $509 billion. GAO also designated Medicaid as a high-risk program in part due to concerns about the adequacy of fiscal oversight, which is necessary to prevent inappropriate program spending. 
Medicaid, the federal-state program that covered acute health care, long-term care, and other services for over 65 million low-income people in fiscal year 2009, consists of more than 50 distinct state-based programs that cost the federal government and states an estimated $381 billion that year. The program accounts for more than 20 percent of states’ expenditures and exerts continuing pressure on state budgets. New directives, implementing guidance, and legislation will impact the Centers for Medicare & Medicaid Services’ (CMS) efforts to reduce improper payments in the next few years. The administration issued Executive Order 13520 on Reducing Improper Payments in 2009 and related implementing guidance in 2010. In addition, the Improper Payments Elimination and Recovery Act of 2010 (IPERA) amended the Improper Payments Information Act of 2002 (IPIA) and established additional requirements related to accountability, recovery auditing, compliance and noncompliance determinations, and reporting. In its fiscal year 2010 Agency Financial Report, the Department of Health and Human Services estimated that federal Medicare and Medicaid improper payments in fiscal year 2010 were more than $70 billion. CMS has taken actions to address some of the improper payment requirements. For example, recovery audit contractors identify improper payments and thus help agencies to recover them. As required by law, CMS implemented a national Medicare Recovery Audit Contractors (RAC) program in 2009 and has provided guidance to the states for implementing Medicaid RACs. Other recent CMS program integrity efforts include issuing regulations tightening provider enrollment requirements. In addition, in compliance with the Executive Order, CMS has established reduction targets for the Medicare Fee-for-Service, Medicare Advantage, and Medicaid programs’ improper payment rates. 
We view these new laws, directives, and agency efforts as positive steps toward improving transparency over and reducing improper payments in the Medicare and Medicaid programs. However, it is too soon to determine whether the activities called for in recent laws and guidance will achieve their goals of reducing improper payments while continuing to ensure that federal programs serve and provide access to intended beneficiaries. CMS is still developing its improper payment rate methodology for its prescription drug program and has not been able to demonstrate sustained progress in lowering its improper payment rates for the other parts of Medicare. CMS needs a plan with clear measures and benchmarks for reducing improper payments and addressing other issues that leave the Medicare program at risk. For Medicaid, we continue to stress that more federal oversight of its fiscal integrity is needed. Identifying the nature, extent, and underlying causes of improper payments is an essential prerequisite to taking appropriate action to reduce them, as is implementing GAO’s recommendation to develop an adequate corrective action process to address vulnerabilities. Further, CMS could take other actions to help better address the issue of improper payments in the Medicare and Medicaid programs. For Medicare, these include establishing policies to improve contract oversight and better target review of claims for services with high rates of improper billing. For Medicaid, these include (1) ensuring that states develop adequate corrective action processes to address vulnerabilities to improper Medicaid payments to providers, (2) issuing guidance to states to better prevent payment of improper claims for controlled substances, and (3) improving oversight of managed care payment rate setting and Medicaid supplemental payments. 
The level of importance CMS, HHS, and the administration place on implementing the requirements established by recent laws and guidance and on implementing our recommendations will be a key factor in reducing improper payments in the Medicare and Medicaid programs and ensuring that federal funds are used efficiently and for their intended purposes.

Managing Federal Real Property and DOD Support Infrastructure Management

Since our 2009 update, sufficient progress has been made to narrow the scope of both the Managing Federal Real Property and DOD Support Infrastructure Management high-risk areas. However, in both areas, excess federal property remains a concern. The federal real property portfolio is vast and diverse. It totals over 900,000 buildings and structures with a combined area of over 3 billion square feet. Progress has been made on many fronts, including significant progress with real property data reliability and managing the condition of facilities. Since 2004, both OMB and GSA have demonstrated commitment in promoting reform efforts through establishing and improving a centralized real property database. Agencies have developed asset management plans, standardized data, and adopted performance measures. Further, a June 2010 presidential memorandum directed agencies to identify and eliminate excess properties to produce a $3 billion cost savings by 2012. However, federal agencies continue to face long-standing problems, such as overreliance on leasing, excess and underutilized property, and protecting federal facilities. For example, OMB has not developed a corrective action plan to address the fact that agencies increasingly rely on leasing. GSA, the government’s principal landlord, leases more property than it owns. In addition, although efforts to dispose of unneeded assets have been made, a large number of excess and underutilized assets remain. 
Agencies reported 45,190 buildings as underutilized in fiscal year 2009—an increase of 1,830 such buildings from the previous fiscal year. Maintaining this unneeded space is costly. In fiscal year 2009, agencies reported underutilized buildings accounted for $1.66 billion in annual operating costs. As GAO has reported over the years, attempted corrective action measures have not addressed the root causes that exacerbate these problems, such as various legal and budget-related limitations and competing stakeholder interests. While the Department of Defense has made progress in better aligning its missions and facilities and disposing of unneeded facilities through the base realignment and closure process, the Department still has a significant amount of excess infrastructure. Senior Defense officials have stated that further reductions may be needed to ensure that its infrastructure is appropriately sized to carry out its missions in a cost-effective manner. Federal agencies also have made limited progress and continue to face challenges in securing real property. GAO has reported that, since transferring to the Department of Homeland Security, the Federal Protective Service (FPS) experienced management and funding challenges that have hampered its ability to protect about 9,000 federal facilities. In particular, FPS has limited ability to allocate resources using risk management and lacks appropriate oversight and enforcement to manage its growing contract guard program. In 2010, GAO found that limited information about risks and the inability to control common areas pose challenges to protecting leased space. As a result, the management of federal real property remains high risk, with the exceptions of governmentwide real property data reliability and management of condition of facilities, which GAO found to be sufficiently improved to be no longer considered high risk. 
Notwithstanding the progress in property data reliability, which allows OMB to measure progress governmentwide, other actions need to occur to address root problems, including a strategy to address the continued reliance on leasing in cases where ownership would be less costly. This strategy should identify the conditions, if any, under which leasing is an acceptable alternative. In addition, OMB and the Federal Real Property Council should develop potential strategies to reduce the effect of competing stakeholder interests as a barrier to disposing of excess property.

DOD Weapon Systems Acquisition. Over the next 5 years, the Department of Defense (DOD) expects to invest almost $343 billion (in fiscal year 2011 dollars) on the development and procurement of major defense acquisition programs. Defense acquisition programs usually take longer, cost more, and deliver fewer quantities and capabilities than DOD originally planned. Congress and DOD have taken steps to improve the acquisition of major weapon systems, yet some program outcomes continue to fall short of what was agreed to when the programs started. With the prospect of slowly growing or flat defense budgets for the foreseeable future, DOD must get better value for its weapon system spending and find ways to deliver needed capability to the warfighter for less than it has spent in the past. While the performance of individual programs can vary greatly, GAO’s work has revealed significant aggregate cost and schedule growth in DOD’s portfolio of major defense acquisition programs. In 2009, GAO reported that the total cost growth on DOD’s fiscal year 2008 portfolio of 96 major defense acquisition programs was over $303 billion (fiscal year 2011 dollars) and the average delay in delivering initial capability was 22 months. DOD has demonstrated a strong commitment, at the highest levels, to address the management of its weapon system acquisitions. 
At the strategic level, DOD has started to reprioritize and rebalance its weapon system investments. In 2009 and 2010, the Secretary of Defense proposed canceling or significantly curtailing weapon programs, such as the Army’s Future Combat System Manned Ground Vehicles and the Navy’s DDG-1000 Destroyer—which he characterized as too costly or no longer relevant for current operations. DOD plans to replace several of the canceled programs and has an opportunity to pursue knowledge-based acquisition strategies on the new programs. In addition, the Under Secretary of Defense for Acquisition, Technology, and Logistics has embraced an Army initiative to eliminate redundant programs within capability portfolios and make affordability a key requirement for weapon programs. These actions are consistent with past GAO findings and recommendations. However, if these initiatives are going to have a lasting, positive effect, they need to be translated into better day-to-day management and decision making. For example, GAO has recommended that DOD empower its capability portfolio managers at the departmentwide level to prioritize needs, make decisions about solutions, and allocate resources; and develop criteria to assess the affordability and capabilities provided by new programs in the context of overall defense spending. At the program level, GAO’s recent observations present a mixed picture of DOD’s adherence to a knowledge-based acquisition approach, which is key for improving acquisition outcomes. For 42 programs GAO assessed in depth in 2010, there was continued improvement in the technology, design, and manufacturing knowledge the programs had at key points in the acquisition process. However, most programs were still proceeding with less knowledge than best practices suggest, putting them at higher risk for cost growth and schedule delays. 
DOD has begun to implement a revised acquisition policy and congressional reforms that address these and other common acquisition risks. If DOD consistently implements these reforms, the number of programs adhering to a knowledge-based acquisition approach should increase and the outcomes for DOD programs should improve. To help promote accountability for compliance with acquisition policies and address the factors that keep weapon acquisitions on the High-Risk List, DOD has worked with GAO and the Office of Management and Budget to develop a comprehensive set of process and outcome metrics to provide consistent criteria for measuring progress. Due to actions by Congress, such as the Weapon Systems Acquisition Reform Act of 2009, and by DOD, the department’s policy for defense acquisition programs now reflects the basic elements of a knowledge-based acquisition approach and its weapon system investments are being rebalanced. However, to improve outcomes over the long term, DOD should (1) develop an analytical approach to better prioritize capability needs; (2) empower portfolio managers to prioritize needs, make decisions about solutions, and allocate resources; and (3) enable well-planned programs by providing them the resources they need, while holding itself and its programs accountable for policy implementation via milestone and funding decisions and reporting on performance metrics.

DOD Supply Chain Management. We have identified Department of Defense (DOD) supply chain management as a high-risk area due to weaknesses in the management of supply inventories and responsiveness to warfighter requirements. Supply chain management is the operation of a continuous and comprehensive logistics process, from initial customer order for material or services to the ultimate satisfaction of the customer’s requirements. DOD estimated that its logistics operations, including supply chain management, cost about $194 billion in fiscal year 2009. 
Our work has identified three major areas of weakness in DOD supply chain management—requirements forecasting, asset visibility, and materiel distribution. Since our last high-risk update, DOD has taken a major step toward improving management of supply inventories. In response to a legislative mandate, the department submitted its Comprehensive Inventory Management Improvement Plan to Congress in November 2010. DOD reported that the total value of its secondary inventory was more than $91 billion in 2009, and that $10.3 billion (11 percent) of its secondary inventory has been designated as excess and categorized for potential reuse or disposal. In its plan, DOD establishes goals for reducing this excess inventory, which could limit future costs associated with its supply inventories. Issuing the plan and establishing working groups and associated reporting structures will help resolve long-standing problems in requirements forecasting and other areas of inventory management. Nevertheless, DOD faces implementation challenges, including aggressive timelines and benchmarking; non-standard definitions, processes, procedures, and metrics across DOD components; and the need for coordination and collaboration among multiple stakeholders. DOD will also need to place continued management emphasis on improving asset visibility and materiel distribution, the two other focus areas for improvement in supply chain management. Weaknesses in these focus areas can affect DOD’s ability to support the warfighter. For example, we reported on supply support problems and shortages of critical items during the early operations in Iraq and on the numerous logistics challenges that DOD faces in supporting forces in Afghanistan. In July 2010, DOD issued its Logistics Strategic Plan, providing high-level direction for supply chain management and other logistics areas. 
DOD, however, has not developed detailed corrective action plans that address the asset visibility and materiel distribution problems or their root causes and effective solutions. DOD also will need to fully implement a program for monitoring and independently validating the effectiveness and sustainability of corrective actions and will need to demonstrate progress in all three of the key focus areas. Among other things, DOD could build on the performance management framework in the Logistics Strategic Plan and the inventory improvement plan to develop management processes to comprehensively guide and integrate its various improvement efforts, implement outcome-based performance measures, gather reliable performance data, and demonstrate progress towards its goals for effective and efficient supply chain management. DOD has acknowledged that it needs to track the speed, reliability, and overall efficiency of the supply chain.

Enforcement of Tax Laws. Internal Revenue Service (IRS) enforcement of the tax laws is vital to ensuring that all taxes owed are paid, which in turn can promote voluntary compliance by giving taxpayers confidence that others are paying their fair share. Typically, about 84 percent of taxes owed are paid voluntarily and timely. IRS last estimated the resulting tax gap to be $345 billion for 2001. After late payments and IRS enforcement, the net tax gap was $290 billion. Many experts believe that the tax gap was underestimated for 2001 and has grown since then. Congress and IRS have taken innovative actions aimed at improving tax compliance, some based on GAO’s work. In 2010, IRS began implementing a new regulatory regime for paid tax return preparers intended to help improve taxpayer compliance. Congress recently passed laws requiring financial institutions to report information on taxpayers’ foreign bank accounts, the cost basis of taxpayers’ securities, and businesses’ credit card receipts. 
In reports and testimonies, we have said that because the tax gap arises from so many different types of taxes and taxpayers, multiple approaches will be needed to reduce it. Suggestions from our recent work include continuing to perform compliance research and using it to identify and target areas of noncompliance; developing a strategy for ensuring compliance by networks of related businesses; expanding IRS’s legal authority to correct simple tax return errors before refunds are issued; and leveraging the new paid preparer requirements, new sources of information about taxpayers, and new technology to improve service and compliance. If approaches like these could reduce the tax gap by 1 percent, the resulting revenue increase would be about $3 billion annually. The complexity of the tax code also contributes to noncompliance and therefore the tax gap. Complexity can cause taxpayer confusion and provide opportunities to hide willful noncompliance. Consequently, improved tax compliance and a smaller tax gap could be one of the benefits of tax reform and simplification.

Sustaining Progress on High-Risk Programs

Overall, the government continues to take high-risk problems seriously and is making long-needed progress toward correcting them. Congress has acted to address several individual high-risk areas through hearings and legislation. Continued perseverance in addressing high-risk areas will ultimately yield significant benefits. Lasting solutions to high-risk problems offer the potential to save billions of dollars, dramatically improve service to the American public, and strengthen public confidence and trust in the performance and accountability of our national government. GAO’s high-risk update and its High Risk and Other Major Government Challenges Web site, www.gao.gov/highrisk/, can help inform the oversight agenda for the 112th Congress and guide efforts of the administration and agencies to improve government performance and reduce waste and risks. 
Thank you, Mr. Chairman, Ranking Member Cummings, and Members of the Committee. This concludes my testimony. I would be pleased to answer any questions you may have. For further information on this testimony, please contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. Contact points for the individual high-risk areas are listed in the report and on our high-risk Web site. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is the world's largest and most complex entity, with about $3.5 trillion in outlays in fiscal year 2010 funding a broad array of programs and operations. GAO maintains a program to focus attention on government operations that it identifies as high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. This testimony summarizes GAO's 2011 High-Risk Update, which describes the status of high-risk areas listed in 2009 and identifies any new high-risk area needing attention by Congress and the executive branch. Solutions to high-risk problems offer the potential to save billions of dollars, improve service to the public, and strengthen the performance and accountability of the U.S. government. This year, GAO removed the high-risk designation from two areas—the DOD Personnel Security Clearance Program and the 2010 Census—and designated one new high-risk area—Interior's Management of Federal Oil and Gas Resources. These changes bring GAO's 2011 High-Risk List to a total of 30 areas. While many positive developments have occurred, additional progress is both possible and needed in all 30 high-risk areas to save billions of dollars and further improve the performance of federal programs and operations. Congressional oversight and sustained attention by top administration officials are essential to ensuring further progress. The high-risk effort is a top priority for GAO. Working with Congress, agency leaders, and the Office of Management and Budget, GAO will continue to provide insights and recommendations on needed actions to solve high-risk areas. 
Regarding the new high-risk area, Interior does not have reasonable assurance that it is collecting its share of billions of dollars of revenue from oil and gas produced on federal lands, and it continues to experience problems in hiring, training, and retaining sufficient staff to provide oversight and management of oil and gas operations on federal lands and waters. Further, Interior recently began restructuring its oil and gas program, which is inherently challenging, and there are many open questions about whether Interior has the capacity to undertake this reorganization while carrying out its range of responsibilities, especially in a constrained resource environment. While there has been some progress on nearly all of the issues that remain on the High-Risk List, the nation cannot afford to allow problems to persist. This statement discusses opportunities for savings that can accrue if progress is made to address high-risk problems. For example: (1) Improper payments in Medicare and Medicaid are estimated at billions of dollars annually. The effective implementation of recent laws, including the Improper Payments Elimination and Recovery Act of 2010, and administration guidance will be key factors in determining the overall effectiveness of reducing improper payments in the Medicare and Medicaid programs. (2) Federal agencies' real property holdings include thousands of excess and/or underutilized buildings that cost over $1.6 billion annually to operate. If this issue is not addressed, the costs to maintain these properties will continue to rise. (3) Over the next 5 years, the Department of Defense (DOD) expects to invest over $300 billion (in fiscal year 2011 dollars) on the development and procurement of major defense acquisition programs. DOD must get better value for its weapon system spending and find ways to deliver needed capability to the warfighter for less than it has spent in the past. 
The High-Risk update contains GAO's views on progress made and what remains to be done to bring about lasting solutions for each high-risk area. Perseverance by the executive branch in implementing GAO's recommended solutions and continued oversight and action by Congress are essential to achieving progress. GAO is dedicated to continuing to work with Congress and the executive branch to help ensure additional progress is made.
Background

The occupants of other vehicles are several times more likely to die in crashes involving large commercial trucks than the occupants of the trucks. From 1995 through 2004, there were 51,791 people killed in large-truck crashes. Of this total, 40,438 were occupants of other vehicles, while 7,131 were the occupants of large trucks and 4,222 were nonmotorists, such as pedestrians. Figure 1 shows the number of passenger-vehicle and large-truck occupants killed in collisions involving large trucks from 1995 through 2004, according to NHTSA’s Fatality Analysis Reporting System. In 2004, we presented information on traffic fatalities using data contained in NHTSA’s Fatality Analysis Reporting System database. We determined that these data were sufficiently reliable for reporting purposes. See GAO, Highway Safety: Federal and State Efforts to Address Rural Road Safety Challenges, GAO-04-663 (Washington, D.C.: May 28, 2004).

During 2004 and 2005, the Share the Road Safely Program Funded a Reasonably Designed Education and Enforcement Pilot in Washington State

During 2004 and 2005, NHTSA funded the Share the Road Safely program and implemented an aggressive driving pilot initiative in Washington State. Known as Ticketing Aggressive Cars and Trucks (TACT), it combined education and law enforcement activities in an effort to reduce aggressive driving between passenger vehicles and trucks. Another objective of the pilot was for NHTSA to show FMCSA staff how to operate similar initiatives in the future. TACT generally conformed to the proven high-visibility law enforcement model, although it dealt with more complex issues than previous high-visibility law enforcement campaigns. TACT’s design and implementation linked to the STRS goal of changing driver behavior, whereas past STRS initiatives sometimes did not link to goals or were not designed to maximize the potential for success. 
In addition, Congress requested FMCSA to use a liaison to facilitate the transfer of knowledge about high-visibility law enforcement campaigns from NHTSA to FMCSA. Lastly, educational initiatives that were a part of STRS in 2003 were pursued by FMCSA, although not funded under NHTSA in 2004 and 2005. DOT Helped Establish and Operate an Aggressive Driving Pilot Initiative in Washington State According to DOT officials, Share the Road Safely program funding has supported an aggressive driving pilot initiative in Washington State starting in 2004. In 2004, DOT selected Washington State for the pilot initiative and signed a cooperative agreement with the Washington Traffic Safety Commission. The pilot, known as TACT, combined high-visibility law enforcement waves with education and outreach activities in an effort to reduce aggressive driving between passenger-vehicle and large-truck drivers. TACT focused on four interstate highway corridors, each covering a distance of approximately 25 miles. (See fig. 2.) Two intervention corridors in the western part of the state received media messages and 2 weeks of increased, high-visibility law enforcement waves in July and September 2005, while two comparison corridors did not. During these waves, law enforcement officers patrolled the intervention corridors in marked and unmarked patrol cars, in state patrol aerial units when weather permitted, and from the cabs of semitrucks to target unsafe driving around large trucks. The TACT pilot initiative used paid radio advertising and earned media, such as local news coverage, to inform the targeted audience of the dangers of aggressive driving related to trucks and to announce that law enforcement officers would issue tickets for such behavior. TACT’s radio advertisement was aired over 6,000 times during the course of the enforcement waves, and eight local television stations dedicated coverage to the pilot. Figure 3 shows an example of the earned media coverage.
DOT officials said they selected Washington State to participate in this pilot because of the state’s experience with other related safety initiatives, its accurate fatality and crash database, and its strong relationships with key stakeholders. TACT built upon a previous STEP model campaign, Step Up and R.I.D.E., which operated in Washington for several years. In the Step Up and R.I.D.E. program, Washington partnered with the local trucking industry to periodically place police officers in commercial vehicles to identify and issue citations to drivers observed committing offenses. DOT officials also stated that Washington has shown itself capable of successfully implementing and evaluating a high-visibility law enforcement campaign—specifically its Click It or Ticket campaign, which in 2002 increased safety belt use from about 80 percent to 95 percent. Additionally, DOT cited Washington as having good data on crashes and fatalities. In a 2005 report, we also recognized that Washington has very good cooperation among state agencies involved in crash data collection and reporting, and a strong relationship with its FMCSA division office. Finally, according to DOT officials, a particular strength of Washington is that the Washington Traffic Safety Commission, the lead organization in implementing the TACT initiative, comprises multiple state agencies, including all of the agencies that are participating in TACT, thus setting the stage for easy coordination and cooperation among participating agencies. Federal, state, and local organizations participated in and contributed about $892,000 for the planning and operation of TACT. A steering committee led by the Washington Traffic Safety Commission planned and administered the pilot project. Other partners on the steering committee included the Washington DOT, the Washington State Patrol, the Association of Sheriffs and Police Chiefs, and the Washington Trucking Association. 
Officials on the steering committee believed having all of these groups involved in developing the pilot was important to its successful implementation. The Washington Traffic Safety Commission also contributed $118,000 to the pilot for project management and communications. Local and state police made officers available for the enforcement waves, and the Washington Trucking Association worked with trucking companies to provide decoy trucks and drivers. NHTSA provided considerable assistance in developing and implementing the initiative and supplied the majority of TACT’s funding, awarding $497,000 in fiscal year 2004 and an additional $68,000 for evaluation activities in fiscal year 2005. Congress also provided NHTSA with an additional $99,000 for the TACT initiative in fiscal year 2005. FMCSA did TACT’s initial planning and provided ongoing assistance, including reviewing plans. It also provided $100,000 in fiscal year 2005 for TACT’s enforcement efforts through MCSAP. FMCSA Sought to Learn How to Operate a High-Visibility Law Enforcement Campaign DOT officials told us that a goal for FMCSA in the TACT initiative was to gain institutional knowledge on the operation of high-visibility law enforcement campaigns, such as Click It or Ticket. These campaigns combine education and outreach activities with high-visibility law enforcement to bring about a change in driver behavior. Our 2003 report stated that highway safety experts agree that attempts to modify behavior are more effective when educational and enforcement efforts are used together. However, the STRS initiatives we identified in our 2003 report were purely educational. The report added that FMCSA could improve STRS by drawing from NHTSA’s considerable experience with high-visibility law enforcement campaigns like Click It or Ticket, which has been widely considered effective in increasing the rate of safety belt use.
Furthermore, a NHTSA evaluation report found that 10 states that used the Click It or Ticket model had significantly greater increases in safety belt use compared with states that attempted to increase safety belt use through other initiatives. TACT offered FMCSA an opportunity to learn from NHTSA’s experience with high-visibility law enforcement campaigns and learn how to develop similar aggressive driving initiatives in other states. To further ensure this transfer of knowledge, as conferees requested in the Conference Report accompanying FMCSA’s fiscal year 2005 appropriations act, FMCSA hired and detailed a staff member to NHTSA to act as a communications liaison for STRS. The liaison was involved in some facets of TACT, including meeting with its steering committee and preparing briefings on the pilot. According to DOT officials, however, the liaison came aboard after the completion of the last enforcement wave; later in this report, we discuss this matter further in relation to the future of STRS initiatives. Pilot Generally Conformed to the Proven High-Visibility Law Enforcement Campaign Model with Some Variation, but Dealt with More Complex Issues Our analysis of TACT’s design and implementation shows that it generally conformed to the high-visibility law enforcement campaign model as intended, but varied in a few aspects. Specifically, TACT was modeled after NHTSA’s Click It or Ticket campaign. In modeling Click It or Ticket’s approach, the TACT program collected data before and after its enforcement waves to identify behavior changes, provided highly visible enforcement on each day of its enforcement waves, and used both paid and earned media to publicize its enforcement. TACT did deviate from the Click It or Ticket model in two ways. First, the pilot did not use paid television advertising. Washington State officials explained that this was because of the program’s limited budget.
While evaluations of Click It or Ticket show that radio advertisements were effective in reaching the motoring public, radio is not as effective a medium as television. Second, TACT’s media described the enforcement campaign as zero tolerance, as the Click It or Ticket model prescribes, and enforcement was stepped up; however, law enforcement officers participating in TACT issued warnings instead of citations in 28 percent of traffic stops. NHTSA officials explained that law enforcement officers always have discretion over whether to issue citations and that what matters more is that the public perceive an increase in law enforcement. Furthermore, they explained that there is no research establishing the most effective citation rate. See appendix II for a detailed comparison of TACT’s implementation of the Click It or Ticket model. Although TACT is based on the high-visibility law enforcement campaign model, it deals with more complex issues than previous initiatives. In the case of Click It or Ticket, law enforcement is simply checking for safety belt use. With TACT, there are a number of behaviors that may constitute aggressive driving, including tailgating, speeding, and unsafe merging. These multiple factors also made it more difficult to develop a primary message for TACT to communicate to the public. TACT administrators, for example, determined that they had to choose a primary behavioral theme—leaving more space around trucks—to communicate to motorists, although obeying the speed limit and staying out of a truck’s blind spots also are important and were secondary themes. See figure 4 for a depiction of TACT’s selected message. This message was posted on 16 highway signs in the intervention corridors. Additionally, TACT was more difficult to institute from a legal standpoint. Washington has a primary safety belt law, meaning that officers can pull over drivers solely for not wearing their safety belts.
In the case of TACT, however, Washington has no single aggressive driving law. Washington State officials told us they had to ensure that courts would be willing to enforce the tickets because police officers issued citations for violations under a number of laws. Previous STRS Initiatives Were Not Funded under NHTSA In fiscal years 2004 and 2005, STRS did not fund initiatives that were a part of the program in 2003. All STRS funds in fiscal years 2004 and 2005 were directed to the TACT pilot. According to FMCSA officials, however, they continued to disseminate education and outreach materials. For instance, the No-Zone campaign—a major initiative of STRS—was not funded during this period. FMCSA did, however, keep No-Zone information available on its Web site and responded to requests for educational material. For example, according to FMCSA officials, during this period they distributed over 200,000 copies of the No-Zone brochure through venues such as conferences and industry events. Also prior to TACT, FMCSA developed a curriculum for teaching students about sharing the road with trucks. FMCSA completed work on the curriculum and produced a video for the course, and it distributed the materials during fiscal years 2004 and 2005, including 1,500 copies of the video. Evaluation of TACT Demonstrated Positive Results and Was Generally Well-Designed DOT and Washington State officials conducted an evaluation of TACT that demonstrated the initiative’s success and was generally well-designed. Specifically, analysis of videotaped driver behavior showed reductions in aggressive driving, and targeted motorists reported significant exposure to the initiative’s message. Additionally, the evaluation followed accepted experimental design principles by comparing changes on two intervention highway corridors, which were exposed to the initiative’s message and enforcement, with changes on two comparison highway corridors, which were not exposed to the message.
This experimental setup enabled program administrators to attribute positive changes in driver behavior to TACT initiatives. The evaluation did not assess changes in crashes, but increased driver awareness and improved driver behavior should logically lead to reduced crashes, injuries, and fatalities. Also, TACT’s design of combining educational outreach with law enforcement better lent itself to reaching STRS goals than previous initiatives that were purely educational. TACT Improved Driver Behavior and Public Awareness The TACT evaluation demonstrated that the initiative was able to produce improvements in driver behavior. TACT evaluated changes in driver behavior by recording video footage of drivers in the four corridors and using three groups of reviewers—police officers, truck drivers, and Washington Traffic Safety Commission staff—to rate the seriousness of any unsafe driving acts. (See app. III for a more detailed explanation of how this video footage was analyzed.) This analysis found that the rate of unsafe driving acts per observation hour was nearly cut in half, from 5.80 to 3.05, for the intervention corridors, as compared with a slight decrease, from 4.03 to 3.92, for the comparison corridors. When controlled for the preenforcement rates, these data represent a 46 percent decrease in unsafe driving in the intervention corridors. The comparison corridors also had 1.85 times as many violations per hour as the intervention corridors when the data are controlled for the corridors’ respective violation rates prior to enforcement. (Fig. 5 shows the rate of violations per observation hour.) Also, analysis of driver behavior in the intervention corridors found that crash risk decreased and driver behavior was less illegal and less intimidating, among other things. The TACT initiative improved driver behavior by successfully reaching its intended audience.
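One way to reproduce the "controlled for preenforcement rates" figures is ratio-of-ratios arithmetic on the observed rates; the sketch below uses the evaluation data cited above (the calculation method is our reconstruction, not taken from the evaluation report):

```python
# Unsafe driving acts per observation hour, before and after the
# enforcement waves (figures from the TACT evaluation)
intervention_pre, intervention_post = 5.80, 3.05
comparison_pre, comparison_post = 4.03, 3.92

# Post-enforcement rate as a share of the preenforcement rate in each group
intervention_ratio = intervention_post / intervention_pre  # about 0.53
comparison_ratio = comparison_post / comparison_pre        # about 0.97

# Decrease in the intervention corridors, net of the comparison-group trend
controlled_decrease = 1 - intervention_ratio / comparison_ratio
print(f"{controlled_decrease:.0%}")  # 46%

# Comparison corridors' violation rate relative to the intervention
# corridors, controlled for each group's preenforcement rate
rate_ratio = comparison_ratio / intervention_ratio
print(f"{rate_ratio:.2f}")  # 1.85
```

Both printed values match the 46 percent decrease and the 1.85 ratio reported in the evaluation, which suggests the report's "controlled" figures were computed this way.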
TACT evaluators demonstrated this by using a survey to measure the extent to which the initiative changed the awareness of the target audience. In each of the four communities selected for the project, TACT administrators distributed surveys to the public at driver licensing offices both before and after the enforcement waves. For example, the percentage of respondents on the intervention corridors that reported general exposure to media about giving trucks more space nearly quadrupled, from about 18 to 67 percent. These data contrast with data for the comparison corridors, where the percentage only increased from about 17 to 20 percent. (Fig. 6 shows the percentages of respondents that reported hearing or seeing TACT-related media outreach.) Additionally, the evaluation found significant increases in the percentages of respondents on the intervention corridors that specifically reported hearing the radio message and seeing the TACT road sign, television, and newspaper messages. Furthermore, surveys of drivers also showed a significant increase in drivers reporting that they leave more space when passing trucks (the intended behavioral change theme of the project) from about 16 to 24 percent for the intervention corridors as compared with a slight increase from about 15 to 16 percent for the comparison corridors. The Evaluation of TACT Was Generally Well-Designed and Links Results to Its Intended Goal of Crash Reduction We found that the evaluation of TACT was generally well-designed, since it appropriately used an experimental design to attribute outcomes to TACT’s initiatives. An experimental design permits researchers to attribute outcomes to the effects of the program and rule out other influences. Often with this kind of evaluation design, the participants in the intervention group are exposed to the initiative, while similar participants in the comparison group are unexposed. Aside from the initiative, participants experience the same influences.
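The media-exposure figures above can be summarized with the same before-and-after arithmetic; a small sketch using the approximate percentages reported (exact survey values may differ slightly):

```python
# Share of respondents reporting exposure to "give trucks more space"
# media before and after the enforcement waves (approximate figures)
exposure = {
    "intervention": (0.18, 0.67),
    "comparison": (0.17, 0.20),
}

for group, (pre, post) in exposure.items():
    points = round((post - pre) * 100)  # percentage-point change
    relative = post / pre               # relative change
    print(f"{group}: +{points} points, {relative:.1f}x")
# intervention: +49 points, 3.7x  (exposure nearly quadrupled)
# comparison: +3 points, 1.2x
```

The 3.7x relative change is what the report characterizes as "nearly quadrupled," while the comparison corridors barely moved.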
That is, they face conditions that are alike during the same period. More specifically, the evaluation of the TACT initiative exposed drivers in the intervention corridors to paid and earned media and high-visibility law enforcement waves, while simultaneously leaving unexposed comparable drivers in similar comparison corridors. Then the evaluation compared outcomes in the two groups. This procedure was repeated in two additional corridors to make sure that any detected differences in outcomes were not unique to the first two corridors. Our 2003 review of STRS recommended that DOT establish a systematic process for evaluating the effectiveness of the program. The methodology of the TACT evaluation therefore represents a positive step toward meeting our 2003 recommendation. The evaluation report concludes that the initiative was a success, but it did not report on the long-term results of the initiative, such as the impact on the number of crashes, despite earlier plans to do so. Both TACT implementation plans and a NHTSA official stated that the evaluation would assess the impact of the initiative on the number of crashes in the intervention corridors. However, as the evaluation report states, it is difficult to determine changes in crashes given the low number of crashes in Washington State; therefore, the evaluation had to rely on intermediate measures. NHTSA officials stated that although the evaluation was unable to report on long-term results, the program’s finding of improved driver behavior around trucks would logically indicate an expected decrease in truck-related crashes, injuries, and fatalities. Furthermore, NHTSA does not evaluate individual Click It or Ticket campaigns, which are considered to successfully modify behavior, for their effect on long-term results such as fatality reduction.
Figure 7 shows how TACT linked short-term results (such as awareness and knowledge of the dangers of driving around trucks) and intermediate results (such as changed driver behavior around trucks) to the long-term results of fewer truck-related crashes, injuries, and fatalities. TACT Is Better Designed to Successfully Reach Agency Goals Than Past STRS Initiatives The design of TACT provided a better opportunity for successfully reaching desired results and goals than past STRS initiatives. Our 2003 report on STRS found that some of FMCSA’s education and outreach initiatives were not directly connected to agency goals and recommended that future initiatives be so connected. Program initiatives that exclusively rely on education and outreach, such as distributing informational pamphlets or advertising, can increase awareness and encourage the intended behaviors, thereby linking to a program’s goals. Attempts to modify the behaviors of drivers, however, are more effective when educational initiatives are combined with enforcement. This conclusion is supported by the evaluation of past initiatives to change driver behavior, particularly of efforts to increase safety belt use. For example, a 2002 study by NHTSA included data from Texas, which showed that while the baseline percentage of individuals wearing safety belts (80 percent) increased slightly with advertising alone, the combination of advertising and enforcement caused that percentage to increase another 6 percent. TACT’s use of media, road signs, and other educational outreach tools therefore directly linked to the STRS goal of decreasing unsafe driving around commercial vehicles by both truck drivers and passenger-vehicle drivers, and incorporating high-visibility law enforcement increased the initiative’s potential for successfully reaching that goal. In effect, TACT represents a positive step toward meeting our 2003 recommendation that STRS initiatives clearly link to STRS goals.
FMCSA Plans Expanded Development of High-Visibility Law Enforcement Campaigns Similar to TACT, but Lacks a Clear Strategy and Expertise Following the success of TACT in Washington State, FMCSA is developing plans to encourage other states to adopt similar initiatives; however, its strategy for expanding TACT and its ability to manage these initiatives remain unknown. FMCSA officials stated that they plan a nationwide rollout of initiatives similar to TACT by 2009, and that in the interim, they are currently developing another TACT pilot in Pennsylvania. FMCSA, however, has yet to articulate a strategy for expanding TACT into a nationwide program or to identify funding. Additionally, FMCSA’s ability to administer future TACT initiatives is uncertain, since FMCSA has limited experience with high-visibility law enforcement campaigns. Finally, FMCSA plans to spend the majority of its STRS funds on initiatives that are purely educational, even though little information is available to show that these activities will improve driver behavior and contribute to reducing fatalities. FMCSA Plans to Implement More TACT-Like Initiatives but Has Yet to Articulate Its Strategy FMCSA plans to expand initiatives similar to TACT to new states and, eventually, nationwide. FMCSA officials stated that they plan to issue a Federal Register notice in fiscal year 2008 before rolling out TACT on a nationwide basis in 2009. In the interim, FMCSA is currently developing plans to implement another TACT pilot in Pennsylvania, using primarily MCSAP grants and state funds. There, FMCSA will contract with the Pennsylvania State Police to develop and operate a high-visibility law enforcement campaign in at least two intervention corridors and two comparison corridors in an area with a high concentration of commercial-vehicle fatalities and crashes. Pennsylvania will also be responsible for evaluating its pilot. Agency officials anticipate this pilot taking 18 months to complete.
FMCSA also plans to conduct two additional pilots in fiscal year 2007, but has not yet identified states. Additionally, FMCSA issued a Federal Register notice in March 2006 stating that states could use MCSAP High Priority grants to comply with provisions of SAFETEA-LU that require states to conduct comprehensive and highly visible traffic enforcement and commercial-vehicle safety inspection programs in high- risk locations and areas. FMCSA added that these initiatives could be similar to TACT. FMCSA officials stated that they will develop guidance for states to follow, but gaps remain in their strategy for expanding TACT nationwide. Agency documents state that the Washington State TACT pilot and the future Pennsylvania initiative will form the foundation of a best practices guide to share with states. However, FMCSA has yet to articulate how it will expand TACT from several planned pilot initiatives in 2007 to a nationwide program 2 years later, or how this expansion will be funded. Additionally, although FMCSA enabled states to apply for MCSAP High Priority grants to develop initiatives similar to TACT, FMCSA did not provide states with the guidance to do so. Applications for these funds were due before the Washington TACT evaluation report was published; therefore, states seeking to begin similar initiatives needed to design their own initiatives without the benefit of Washington’s experience. Finally, FMCSA officials stated that no state applied for a fiscal year 2006 grant before the application deadline in the Federal Register; however, FMCSA will accept applications until the end of fiscal year 2006 or until the available funds are awarded. Although FMCSA has plans for a nationwide expansion of TACT, the majority of FMCSA’s STRS funds will be spent on other activities. 
Program planning documents state that FMCSA has decided to transition STRS to focus on developing initiatives similar to TACT in other states, but FMCSA plans to invest just $150,000 of its $500,000 fiscal year 2006 STRS budget to do this. FMCSA officials told us that STRS funds would pay for the evaluation component of this initiative, and FMCSA will supplement activities with MCSAP funds. The $150,000 fiscal year 2006 STRS investment in these future initiatives is significantly less than the approximately $664,000 in STRS funds provided solely to TACT in fiscal years 2004 and 2005. FMCSA’s Ability to Manage Future Initiatives Is Unclear and NHTSA’s Role Is Still Evolving FMCSA’s ability to administer future high-visibility law enforcement campaigns and NHTSA’s role in future STRS initiatives are unclear. As we previously mentioned, a goal of the TACT pilot was for FMCSA to learn about the operation of high-visibility enforcement programs from NHTSA, and to support this goal, FMCSA detailed a liaison to NHTSA following congressional direction. FMCSA, however, missed valuable opportunities for learning because of the time it took to fill the position, since the liaison came aboard late in the TACT program and returned to FMCSA before NHTSA conducted its annual Click It or Ticket enforcement campaign. After we discussed our findings with FMCSA officials, they clarified that other FMCSA staff participated in TACT and that knowledge transfer was not limited to the liaison. Furthermore, NHTSA’s participation in future STRS activities is still evolving. As we previously mentioned, SAFETEA-LU authorized $3 million to NHTSA and $1 million annually to FMCSA for administering education and outreach activities associated with commercial-vehicle safety for the 4-year period from 2006 through 2009. However, the Conference Report accompanying the DOT appropriations act for fiscal year 2006 indicates that the conferees did not fund the amounts authorized.
Instead, they provided $4 million to FMCSA alone for these purposes. Given its limited experience with programs designed to modify driver behavior, however, FMCSA’s plans call for continuing cooperation with NHTSA in future aggressive driving programs. For example, staff in FMCSA’s Washington Divisional Office told us that their agency lacks NHTSA’s experience with initiatives that change driver behavior and does not have staff with a background in the area, especially at the division office level. This is important because TACT’s evaluation report states that having an experienced evaluation team that can develop and implement a comprehensive evaluation design was critical to the success of the project. As we previously mentioned, NHTSA has experience in operating successful campaigns to increase safe behavior by motorists. Additionally, FMCSA has only a small number of staff dedicated to its education and outreach programs. NHTSA staff with whom we spoke initially stated that the agency’s involvement would end with the issuance of the TACT program evaluation report. Currently, however, NHTSA staff say they will provide FMCSA with general assistance, and FMCSA has formed a transition team to help ensure that the necessary expertise will be available to future initiatives. NHTSA officials added that specific experience with behavioral issues is not required to replicate the TACT initiative. They said that a program plan, a media plan, an enforcement plan, and an evaluation plan are required. FMCSA’s Short-term Plans Focus on Initiatives That Do Not Include Enforcement and That Have Not Been Shown to Be Effective FMCSA plans to spend the majority of its 2006 STRS funds on updating the STRS Web site and producing outreach materials. These funds will be spent on initiatives that have limited potential for reducing fatalities and provide limited opportunities for evaluation, representing a return to an earlier era of STRS.
FMCSA will spend $200,000 on updating its Web site, $100,000 on education and outreach materials promoting sharing the road, and $50,000 on printing. FMCSA plans to update its Web site with information on preventing aggressive driving, which will include Spanish-language content. The Web site also will include a user survey to gauge satisfaction and will be able to ask up to five questions about a user’s knowledge of STRS initiatives. Currently, FMCSA can only collect information on the number of visits to the Web site. In addition, FMCSA plans to distribute education and outreach materials promoting sharing the road. These initiatives were not financially supported during fiscal years 2004 and 2005, when NHTSA had responsibility for STRS. As we previously stated, purely educational initiatives may conceptually link to FMCSA’s goal of reducing accidents and fatalities, but initiatives such as TACT have a better potential to improve driver behavior by incorporating local enforcement efforts with educational outreach. Figure 8 shows four categories of FMCSA’s planned STRS spending in fiscal year 2006. Table 1 lists FMCSA’s planned outreach activities within two of these categories. It is unclear if evaluations of these planned STRS education and outreach activities will provide meaningful insight into their effectiveness. FMCSA officials told us that they hired a contractor to develop evaluations of STRS education and outreach activities, but plans to evaluate the impact of these activities on fatality and injury rates have yet to be developed. This contractor will be required to (1) develop an evaluation study that gathers baseline data and (2) assess whether the education and outreach materials and activities reached the intended audience, changed attitudes and behaviors, and helped the program meet its safety goals.
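The planned spending figures above can be tallied to check the "majority" characterization; a minimal sketch (the category labels are illustrative groupings of the amounts named in the report):

```python
# FMCSA's planned fiscal year 2006 STRS spending, per the report
planned = {
    "TACT-like initiatives (evaluation component)": 150_000,
    "Web site update": 200_000,
    "Education and outreach materials": 100_000,
    "Printing": 50_000,
}

total = sum(planned.values())
educational = total - planned["TACT-like initiatives (evaluation component)"]
print(total)                         # 500000 -- matches the $500,000 budget
print(f"{educational / total:.0%}")  # 70% on purely educational activities
```

The four amounts sum exactly to the $500,000 fiscal year 2006 STRS budget cited earlier, with 70 percent going to activities that involve no enforcement component.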
However, in discussing these plans, a NHTSA official told us that it will be difficult to measure the impact of educational materials on driver behavior. Furthermore, in our 2003 report, we stated that previous evaluations of STRS activities shed little light on their short-term, intermediate, and long-term outcomes. This was due, in part, to FMCSA’s heavy reliance on self-reported data and to FMCSA’s not establishing a baseline of driver behavior and knowledge before the program started. By contrast, TACT’s evaluation visually assessed driver behavior before and after motorists received education and enforcement. If FMCSA cannot evaluate the effect of these activities on driver behavior, then the planned activities may represent a return to the practices that we questioned in our 2003 report. Conclusions The TACT initiative represented a significant departure from previous STRS initiatives and, by following the high-visibility law enforcement campaign model, incorporated program elements that experts believe are most effective in changing driver behavior. Its systematic evaluation and clear link to agency goals were important steps toward addressing concerns with STRS that we raised in the past. Furthermore, the positive results shown by the TACT evaluation and the ongoing problem of crashes between trucks and passenger vehicles demonstrate that there is merit in further developing and implementing high-visibility law enforcement campaigns similar to TACT. FMCSA’s plans for future aggressive driving initiatives are still evolving, but the agency is currently developing a second pilot in Pennsylvania and has a goal of rolling out TACT-like initiatives nationwide in 2009. However, FMCSA has yet to develop a clear strategy describing how it will expand initiatives similar to TACT from a series of pilots into a nationwide program or to describe how these programs will be funded.
Furthermore, some of FMCSA’s plans for addressing unsafe driving do not focus on expanding education and enforcement initiatives such as TACT. Instead, FMCSA has chosen to spend the majority of its fiscal year 2006 STRS funds on initiatives that are purely educational, which safety experts agree are less effective than educational outreach combined with enforcement. Because FMCSA has not identified a cohesive strategy to expand TACT and has not focused on proven approaches such as high-visibility law enforcement campaigns, it is unclear how FMCSA’s STRS initiatives will contribute to FMCSA’s goal of expanding TACT and reducing crashes and fatalities. Finally, there are doubts about FMCSA’s ability to ensure the success of STRS in the future. Although funding responsibility for STRS returned to FMCSA in 2006 and FMCSA participated in the initial planning for TACT, NHTSA and the Washington Traffic Safety Commission significantly supported TACT’s implementation and evaluation. Additionally, FMCSA may have missed valuable opportunities to learn about the operation of TACT and other similar programs because it was not able to use its legislatively mandated liaison to the fullest extent possible. DOT, through staff from both NHTSA and FMCSA, demonstrated that it has the ability to develop state initiatives that change driver behavior around trucks by successfully implementing TACT. Even so, it is uncertain that DOT will effectively use these resources in the future, given that the relationship between NHTSA and FMCSA is still evolving and that FMCSA has limited staff and experience in administering high-visibility law enforcement campaigns.
Recommendations for Executive Action To ensure that the Share the Road Safely program continues to improve driver behavior around commercial vehicles, thereby potentially reducing fatalities, we recommend that the Secretary of Transportation direct the Administrators of the appropriate agencies to take the following three steps:
develop a comprehensive strategy describing how FMCSA will implement and fund an expansion of TACT-like initiatives from several pilots into a nationwide program and detail how STRS initiatives contribute to this goal;
complete and execute plans to evaluate STRS outreach activities that are purely educational and discontinue activities with no demonstrable impact on behavior; and
monitor whether FMCSA has sufficient staff and expertise to successfully develop and administer future high-visibility law enforcement campaigns, and, if it does not, determine the best methods for DOT to use its resources and expertise to modify driver behavior and address the problem of aggressive driving around trucks.
Agency Comments We provided DOT with a draft of this report for review and comment. DOT officials, including FMCSA's Outreach Division Chief and NHTSA's Behavioral Technology Research Chief, provided oral and written comments and generally agreed with our recommendations. These FMCSA and NHTSA officials clarified FMCSA’s role in developing initial plans for an education and enforcement project after we issued our 2003 report and before Congress provided NHTSA with Share the Road Safely funding. FMCSA officials also provided additional information on, and documentation of, a contract to develop an evaluation of FMCSA’s education and outreach programs, including Share the Road Safely educational initiatives. Finally, the officials provided information on a team of FMCSA and NHTSA staff established in May 2006 to assist FMCSA with the expansion of TACT as fiscal responsibility for STRS transitions from NHTSA to FMCSA. 
We incorporated this information as well as technical comments throughout the report as appropriate. We will send copies of this report to interested congressional committees, the Secretary of Transportation, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or siggerudk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Scope and Methodology To address our first objective and describe what the Department of Transportation (DOT) has done with the Share the Road Safely (STRS) program since 2003, we interviewed DOT officials about the changes made in the program since May 2003. Additionally, we interviewed officials from the Federal Motor Carrier Safety Administration (FMCSA), the National Highway Traffic Safety Administration (NHTSA), and the Washington State Traffic Safety Commission to report on the implementation and administration of Washington State’s Ticketing Aggressive Cars and Trucks (TACT) pilot project. To determine whether the design of TACT was reasonable, we reviewed TACT programming documentation to assess whether the program’s design links initiatives to goals and follows the high-visibility law enforcement campaign model for behavior change. We did not assess the design of other STRS initiatives because they were not actively funded in fiscal years 2004 and 2005, and because we reported on these activities in our 2003 report. 
To address our second objective—to determine what DOT evaluations have shown and whether the methods were acceptable—we reviewed evaluation results and analyzed evaluation plans to determine if short-term, intermediate, and long-term outcomes were measured and if external factors were considered and controlled for in the assessment. We reviewed and summarized the results of the Washington State pilot evaluation and determined if program initiatives linked to agency goals. In addition, we reviewed the evaluation results to determine if the evaluation demonstrates that the pilot met its criteria for success. Due to the nature of the TACT program, we could not determine in this report whether the Share the Road Safely program achieved reductions in the number of deaths and severity of injuries as requested by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Specifically, SAFETEA-LU asked us to update our prior evaluation of STRS to determine if the program has achieved reductions in the number and severity of commercial-vehicle crashes, including reductions in the number of deaths and the severity of injuries sustained in these crashes. NHTSA officials, however, told us that the evaluation did not assess these long-term results because the numbers of injuries and fatalities on the pilot’s intervention corridors were too low to reliably measure any appreciable change. Therefore, we did not discuss in this report the program’s impact on specific numbers of fatalities and injuries. To assess DOT’s plans for the future of STRS, we interviewed program administrators at DOT and reviewed relevant budget and planning documents to determine DOT’s direction for STRS. We interviewed the FMCSA staff member assigned to transfer knowledge about administering high-visibility law enforcement campaigns from NHTSA to FMCSA to assess the staff member’s ability to accomplish this task. 
We also compared the design of future FMCSA initiatives with findings we listed in previous reports on programs designed to modify driver behavior and increase a program’s effectiveness at reducing crashes, injuries, and fatalities associated with commercial vehicles. We conducted our review from October 2005 through July 2006 in accordance with generally accepted government auditing standards. TACT’s Implementation of the Click It or Ticket Model Click It or Ticket criteria:
Data collection before, during, and immediately after media and enforcement phases.
Earned and paid publicity announcing strict enforcement.
Highly visible enforcement each day of the enforcement period.
Analysis of crash locations in determining the need for improvement and for targeting efforts.
Areas should be defined so that residents have a sense of belonging to a community.
Enforcement agencies should partner with local government, public service organizations, the media, and businesses to generate overwhelming program intensity.
Maximum involvement among the state, county, and local enforcement agencies serving the community.
Areas should try to include as large a percentage of the population as resources permit.
Up-front commitment to the program is needed from top management in each participating enforcement agency.
Officer training should be conducted.
A high-level enforcement official should take the lead in carrying the message to the public.
Organizers must have the full support of elected officials.
The program should be coordinated with the courts, since their caseloads will be affected directly by the number of citations issued.
Enforcement messages repeated over and over during the publicity period.
Continual use of earned media.
Paid advertisement campaigns.
Radio advertisements timed to run during drive times.
Television advertisements run at times when most viewers are present.
Enforcement campaigns usually last 2 weeks, during which zero-tolerance enforcement is carried out.
Enforcement visible for the entire enforcement period.
Description of TACT Methodology for Analyzing Video Footage of Driver Behavior To determine whether driver behavior changed, TACT administrators measured the incidence and rates of unsafe driver behavior in the vicinity of commercial vehicles. Washington State Police troopers collected these data by videotaping traffic from unmarked cars. Troopers drove behind commercial vehicles and provided narration indicating the type of behavior observed each time an unsafe act was seen. Unsafe behaviors included making unsafe lane changes, cutting in front of a truck, following another vehicle too closely, engaging in unsafe merging, and speeding. Troopers also provided narration detailing whether they would issue citations for driving violations. Later, 99 video clips were randomly selected and shown to three sets of reviewers consisting of police officers, truck drivers, and Washington Traffic Safety Commission employees. Reviewers filled out a score sheet for each video clip indicating how dangerous they believed the driver behavior was and whether it deserved a citation. Evaluators quantified these responses to generate a score indicating the seriousness of the unsafe driving act. GAO Contact and Staff Acknowledgments Catherine Colwell, Assistant Director, and Samer Abbas, Analyst-in-Charge, managed this assignment and made significant contributions to all aspects of the work. Daniel Concepcion also made significant contributions to all aspects of this report. In addition, Joel Grossman assisted in our assessment of the TACT initiative’s design and evaluation. Tamera Dorland provided writing assistance, Bert Japikse provided legal support, and Joshua Ormond and Theresa Perkins assisted with graphics.
In 2004, over 5,000 people died on our nation's roads in crashes involving large trucks. The Department of Transportation's (DOT) Federal Motor Carrier Safety Administration (FMCSA) operates truck safety programs, including Share the Road Safely (STRS), which has a goal to improve driving behavior around large trucks. At congressional direction, the National Highway Traffic Safety Administration (NHTSA) assumed responsibility for funding STRS in 2004, but returned STRS to FMCSA in 2006. The current transportation authorization bill requested GAO to update its 2003 evaluation of STRS. This report (1) describes the STRS initiatives DOT has implemented since 2003 and their design, (2) reviews evaluations of STRS initiatives, and (3) assesses DOT's plans for the future of STRS. GAO interviewed DOT and state officials, and reviewed program plans and evaluations. During 2004 and 2005, Share the Road Safely funding was used to implement one initiative, a pilot in Washington State that focused on aggressive driving behaviors near or by large trucks. Known as Ticketing Aggressive Cars and Trucks (TACT), it combined education, such as highway message signs, and high-visibility law enforcement to reduce aggressive driving. TACT received about $892,000 in federal and state funds. TACT was generally modeled on successful behavior modification programs, including Click It or Ticket (a program to encourage safety belt use), but was more complex to implement than past initiatives since many behaviors constitute aggressive driving and Washington State lacked a single aggressive driving law. In addition, NHTSA sought to demonstrate to FMCSA staff how to operate similar initiatives in the future. To this end, FMCSA sent a liaison to NHTSA as requested by Congress. Lastly, initiatives that were a part of STRS in 2003 were still pursued by FMCSA, but were not funded. 
DOT and Washington State officials conducted an evaluation of TACT that demonstrated that the initiative was successful and well-designed. The evaluation found that TACT significantly reduced the number and severity of unsafe driving acts near or by trucks. While the evaluation did not assess changes in crashes, improved driver behavior should logically lead to fewer crashes, injuries, and fatalities. GAO found that TACT's design of combining education with law enforcement better lent itself to reaching agency goals of fatality reduction than previous STRS initiatives that were purely educational. FMCSA plans to expand development of new TACT-like initiatives, but lacks resources and experience to do so. In addition, FMCSA plans to spend most of its 2006 STRS funds on educational initiatives, which lack information showing whether they improve driver behavior. In terms of TACT expansion, FMCSA is currently developing a TACT-like pilot in Pennsylvania and plans to roll out initiatives similar to TACT nationally by 2009. FMCSA, however, has few people dedicated to education and outreach and lacks NHTSA's experience with behavior modification initiatives. While FMCSA designated a liaison to learn about TACT-like initiatives, GAO continues to have concerns about FMCSA's limited experience with these initiatives. NHTSA has considerable experience with such initiatives, but its role in STRS is still evolving. Finally, FMCSA plans to spend the majority of its fiscal year 2006 STRS funds on initiatives that do not have evaluations showing their impacts.
Background The national airspace system is a complex, interconnected, and interdependent network of systems, procedures, facilities, aircraft, and people that must work together to ensure safe and efficient operations. DOT, FAA, airlines, and airports all affect the efficiency of national airspace system operations. DOT works with FAA to set policy and operating standards for all aircraft and airports. As the agency responsible for managing the air traffic control system, FAA has the lead role in developing technological and other solutions to airspace issues. FAA also provides funding to airports. The funding that major airports receive from FAA to make improvements at the airports is conditioned on open and nondiscriminatory access to the airlines and other users, and the airlines are free to schedule operations at any time throughout the day, except at airports that are subject to limits on scheduled operations. The airlines can also affect the efficiency of the airspace system by the number and types of aircraft that they choose to operate. As we have previously reported, measuring the capacity of the airspace system and achieving its most efficient use are both difficult challenges because they depend on a number of interrelated factors. The capacity of the aviation system is not a simple measurable element—in addition to being related to airports’ infrastructure, capacity is affected at any given time by such factors as weather conditions and airline flight schedules. For example, because some airports have parallel runways that are too close together for simultaneous operations in bad weather, the number of aircraft that can take off and land is reduced when weather conditions worsen. 
Achieving the most efficient use of the national airspace system is contingent on a number of factors, among them the procedures that FAA uses to manage traffic, how well FAA’s air traffic control equipment performs, the proficiency of the controllers to efficiently use these procedures and equipment to manage traffic, and how much users are charged for the use of the airspace and airports. FAA has had a long history of attempting to address congestion by managing demand through administrative controls. FAA began establishing limits on the number of takeoffs and landings at five airports—Chicago O’Hare International, Newark, JFK, LaGuardia, and Washington Reagan National—in 1968. The High Density Rule, as it was known, instituted limits, or caps, on the number of takeoffs and landings by the incumbent airlines serving each of these airports. DOT lifted the restrictions at Newark in 1970, and in 2000, with the passage of the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21), caps on operations were to be eliminated at Chicago O’Hare by July 2002, and at LaGuardia and JFK by January 2007. AIR-21 also immediately exempted certain types of aircraft from the caps, a change that resulted in unanticipated increases in demand, especially at LaGuardia. In 2000, airlines took advantage of AIR-21’s small regional jet exemptions and rapidly initiated a large number of new flights to and from LaGuardia. FAA chose to impose a moratorium on additional flights at LaGuardia in November 2000 to limit delays and reduced flights at LaGuardia to a level consistent with the airport’s capacity under optimal weather conditions. On the basis of this experience and FAA’s inability to adopt a final congestion management rule for LaGuardia, FAA issued a December 2006 order to maintain the cap of 75 hourly scheduled operations at LaGuardia until a final rule can be adopted. 
Chicago O’Hare also experienced increased operations after its caps were eliminated, prompting FAA to again limit operations at the airport beginning in spring 2004 through a series of voluntary agreements and ending with a new rule in late summer 2006. These caps on Chicago O’Hare’s operations are effective through October 2008, which coincides with the scheduled opening of the airport’s new runway in November 2008. In response to the near-record delays in summer 2007, which followed the expiration of the High Density Rule for the New York airports and increasing volumes of domestic air traffic, DOT convened a special aviation rulemaking committee (New York ARC) in the fall of 2007 specifically to address delays and other airline service issues in the New York metropolitan area. The New York ARC, which consisted of stakeholders representing government, airlines, airports, general aviation users, and aviation consumers, was tasked with identifying available options for changing current policy and assessing the potential impacts of those changes on airlines, airports, and the traveling public. The New York ARC had three specific objectives: (1) to reduce congestion, (2) to allocate efficiently the scarce capacity of New York area airports, and (3) to minimize the disruption associated with implementing any of the suggested improvements. The New York ARC issued its findings and options for reducing congestion to the Secretary of Transportation in December 2007. One of the committee’s working groups assessed 77 operational improvement initiatives for the New York area and identified key items to focus on within the list of 77, such as reducing excess spacing on final approach when landing. 
Data Show That Delays and Cancellations are Increasing, but Provide an Incomplete Picture of the Extent and Sources of Delays Nationwide, according to DOT data, the annual number of domestic airline flight delays and cancellations has increased about 62 percent (from 1.2 million to 2.0 million), while the annual number of scheduled flights has increased about 38 percent (from 5.4 million to 7.5 million) since 1998. In the New York area, the trend is even more pronounced, as the number of domestic flight delays and cancellations at the three main commercial airports has increased about 111 percent, while the number of domestic operations has increased about 57 percent since 1998. DOT statistics indicate that 2007 was the second worst year on record for U.S. airlines’ on-time performance, and the trends in the percentage of flight delays and cancellations appear to be worsening. As shown in figure 1, about 20 percent of flights in the system were delayed and nearly 3 percent were canceled in 1998, compared to about 24 and 2 percent in 2007, respectively. The data also show that flight delays and cancellations have been steadily increasing since 2002, although the percentage of cancellations in 2007 is still lower than it was from 1998 through 2001. However, cancellations have become more problematic in recent years as the airline industry is now operating with fewer empty seats on flights. As a result, passengers on canceled flights must wait longer to be rebooked, and in some cases may be forced to spend the night before resuming travel the next day. Flight delays are also becoming longer. According to DOT data, the average length of a flight delay increased from more than 49 minutes in 1998 to almost 56 minutes in 2007, an increase of nearly 14 percent throughout the system. Despite this relatively small increase in average flight delay length, far more flights were affected by long delays in 2007 than in 1998. 
For example, the number of flights delayed by 180 minutes or more increased from 25,726 flights in 1998 to 64,040 flights in 2007, or about 150 percent. In addition, DOT’s data indicate that the number of flights in which an aircraft has departed the gate, but remained for an hour or more on the ground awaiting departure, has increased by more than 151 percent since 1998. Because the entire airspace system is highly interdependent, delays at one airport may lead to delays rippling across the system and throughout the day. This delay propagation appears to be increasing and leading to more delays in the system overall. For example, researchers at George Mason University’s Center for Air Transportation Systems have found that 46 percent of delays in the system in 2007 were caused by flight delays occurring earlier in the day. Flight delays in the New York metropolitan region also appear to have a disproportionate impact on delays experienced throughout the rest of the airspace system. During a typical day, approximately one-third of the aircraft in the national airspace system move through the New York airspace. According to preliminary research conducted by the MITRE Corporation for FAA, an average of 40 percent of the flight delays in the system are from delays that originate in the New York metropolitan area. The magnitude and upward trend of the problem in the New York region are greater than in the rest of the country, where flight delays and cancellations have also been steadily increasing. For example, over a third of all flights in the New York metropolitan region in 2007 were delayed or canceled, according to DOT statistics. Figure 2 shows that the percentage of late arriving and canceled flights at each of the three major New York area airports was considerably higher than the systemwide averages. 
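The growth figures above are simple before-and-after percentage changes. As an illustration only (the helper function below is ours, not DOT's), the two comparisons that can be reproduced directly from the rounded numbers quoted in the text are:

```python
def pct_change(before, after):
    """Percentage change from a baseline value to a later value."""
    return (after - before) / before * 100

# Average flight delay length: about 49 minutes in 1998 vs. 56 minutes in 2007.
print(round(pct_change(49, 56), 1))       # -> 14.3, "nearly 14 percent"

# Flights delayed 180 minutes or more: 25,726 in 1998 vs. 64,040 in 2007.
print(round(pct_change(25_726, 64_040)))  # -> 149, "about 150 percent"
```

The 62 percent and 111 percent growth figures cited earlier follow the same calculation applied to the unrounded counts underlying DOT's published totals.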
Since 2003, the percentage of late arriving and canceled flights has been increasing faster in the New York area than in the rest of the system. Since 1998, the New York area’s three major airports have often been among the airports with the lowest on-time performance records. In 2007, DOT reported that LaGuardia, Newark, and JFK had the lowest on-time performance rates among major domestic airports, followed by Chicago O’Hare International Airport, Philadelphia International Airport, and Boston Logan International Airport. Table 1 shows the ranking of major airports by the lowest on-time arrival performance in 2007. While DOT data show that the trends in delays and cancellations are getting worse, current on-time performance data do not capture the full extent of delays and cancellations or the extent to which passengers’ average travel times have increased in recent years. For example, airlines have, in many cases, opted to lengthen scheduled flight times to enhance on-time results, particularly along heavily congested and frequently delayed routes. DOT data do not account for the increased average flight times that are masked by these schedule changes. Also, available DOT data may not necessarily reflect passengers’ experience of delay because DOT tracks flights, not passengers. Passengers can experience delays to their trips because of missed connections resulting from delayed or oversold flights or lengthy delays due to flight cancellations—elements that are not measured in current statistics. According to a recent study by George Mason University, roughly one in four passengers experienced a passenger trip delay in 2007 and the average duration of delay experienced by these passengers was 1 hour 54 minutes, an increase of 24 minutes over 2006. In addition, the study found that the average delay for passengers on canceled flights was 11 hours in 2007. 
Passenger delays are affected by record-level airline load factors (percentage of seats occupied on aircraft), which result in fewer available empty seats on subsequent flights for those passengers who experience canceled flights. According to DOT’s Air Consumer Report, flight problems involving cancellations, delays, or missed connections were the number one consumer complaint in 2007. DOT Data Provide an Incomplete Picture of the Sources of Delays The data collected by DOT on the sources of delays provide information about where delays occur and what causes them, but the data are incomplete. The primary purposes for collecting these causal data are to inform the traveling public and categorize delays and cancellations so that the parties most capable of addressing the causes of delays and cancellations can take corrective action. Since 2003, airlines have reported the cause of delay to DOT in one of five broad categories: late arriving aircraft, airline, national aviation system, extreme weather, and security. Late arriving aircraft means a previous flight using the same aircraft arrived late, causing the subsequent flight to depart late. In 2007, approximately 38 percent of delays were assigned to this category. Airline delays include any delay or cancellation that was within the control of the airlines, such as aircraft cleaning, baggage loading, crew issues, or maintenance. Roughly 29 percent of the delays in 2007 were attributed to airline delays. National aviation system delays and cancellations refer to a broad set of circumstances affecting airport operations, heavy traffic volume, and air traffic control. This category also includes any nonextreme weather condition that slows the operation of the system, such as wind or fog, but does not prevent flying. The national aviation system accounted for about 28 percent of delays in 2007. Extreme weather includes serious weather conditions that prevent the operation of a flight. 
Examples of this kind of weather include tornadoes, snow storms, and hurricanes. In 2007, nearly 6 percent of delays were assigned to extreme weather. Security accounted for less than 1 percent of delays in 2007. Examples of security delays include evacuation of an airport, reboarding due to a security breach, and long lines at the passenger screening areas. Since 2003, despite the increasing number of delays, there have been no significant changes in the trends of these sources of delay. Figure 3 shows the DOT-reported sources of delay in 2007. The distribution of delay by source is very different in New York than for the country as a whole and reflects the New York area’s greater level of congestion. For example, national aviation system delays account for nearly 58 percent of all delays in New York as compared to approximately 28 percent for the country as a whole in 2007 (see fig. 4). As noted earlier, the three major New York area airports have experienced more than a 50 percent increase in traffic levels since 1998, while runway capacity at these airports has not changed. As a result, FAA must resort to a complement of traffic management initiatives, such as ground delay or flow control programs, which are used to restrict the flow of traffic and, accordingly, lead to delays. For several reasons, the data provide an incomplete picture of the underlying causes of delays. First, the DOT-reported categories are too broad to provide meaningful information on the root causes of delays. For example, delays attributed to the airlines could consist of causes such as a late crew, aircraft maintenance, or baggage loading, but these more specific causes are not captured in DOT data. Second, the largest source of systemwide delay—late arriving aircraft, which represents 38 percent of the total delay sources (as fig. 3 shows)—masks the original source of delay. 
For example, the original source of delay for a late arriving aircraft may be the result of other sources—such as a severe weather condition, the airline, security, or the national airspace system—or a combination of one or more of these sources. Finally, the data do not capture what many economists believe is the fundamental cause of much of the flight delay—a mismatch between the demand for and capacity to provide aviation services. While the data provide airlines’ view of the reason that particular flight segments were delayed, DOT does not report data on the extent to which flights are simply overscheduled in particular places at particular times relative to the capacity of the airports and air traffic control system to provide aviation services. The DOT Inspector General analyzed airline schedules at 15 airports and found that 6 of the airports had flights scheduled either at or over maximum airport capacity at peak hours of the day during the summer of 2007. When this is the case, assigning the cause of delay to one of the five DOT categories masks the fact that the fundamental cause is this mismatch of demand for and supply of these services. DOT and FAA Are Implementing Actions Intended to Reduce Delays DOT and FAA are implementing several actions intended to reduce flight delays beginning in summer 2008. Due to the high proportion of delays at the three major New York area airports and their effect on the rest of the airspace system, many of these actions are specifically designed to address congestion in the New York area. For purposes of our discussion, we grouped the various actions into one of two categories—capacity-enhancing initiatives and demand management policies—both of which are intended to reduce flight delays. Capacity-enhancing initiatives are intended to increase the efficiency of existing capacity by reducing delay and maximizing the number of takeoffs and landings at an airport. 
By contrast, demand management policies influence demand through administrative measures or economic incentives. Some of these capacity-enhancing initiatives and demand management policies will be fully or partially implemented by summer 2008, but others will not be completed or even initiated until later this year or beyond. DOT and FAA have announced multiple capacity-enhancing initiatives designed to reduce delays in the New York region for this summer and beyond. In general, adding substantial new airspace system capacity is costly and time consuming. Thus, in March 2007, DOT and FAA convened a workgroup that identified 17 short-term initiatives that better utilize existing capacity at the airport or system level through procedural and other changes in airport and airspace operations and could be completed by summer 2008. Eleven of the 17 short-term initiatives have been completed, and FAA plans to implement the remaining initiatives, which require more planning and coordination, by September 2008. See appendix I for a list of the 17 short-term initiatives and their status. The initiatives range from new procedures and reroutes for handling air traffic during severe weather conditions to efforts to reduce excessive spacing on final approach before landing, and to an airspace flow program that allows New York departures to move more freely while delays are redistributed to airports within the region. In addition to the 17 short-term initiatives, other capacity-enhancing initiatives are under way. These include improving coordination with DOD for airlines’ use of military airspace and redesigning the airspace around the New York, New Jersey, and Philadelphia metropolitan area. FAA is in the process of drafting letters of agreement that would help establish more formal processes for communicating with DOD for the release of specific portions of military airspace on an as-needed basis. 
In December 2007, FAA initiated the first phase of the planned 5-year implementation of the airspace redesign, with new departure headings at Newark and Philadelphia airports. In April 2008, FAA appointed a New York Airspace “Czar”—whose official title is Director for the New York Area Program Integration Office—to coordinate regional airspace issues and projects. Table 2 lists the capacity-enhancing initiatives and their status. More detailed information on the actions— including descriptions, geographic focus, and status—can be found in appendix II. DOT and FAA have also introduced demand management policies—most notably, hourly schedule caps on takeoffs and landings at the three major New York area airports—to its pool of delay reduction efforts. DOT and FAA believe that caps on scheduled operations are necessary at some airports where available capacity cannot meet demand. The caps are currently in place to limit scheduled operations at all three major New York area airports, with hourly scheduled operations capped at 81 at both JFK and Newark, and at 75 at LaGuardia. The most recent caps at JFK and Newark are scheduled to be in place until October 2009. At LaGuardia, a December 2006 order maintained caps that had been in place since November 2000. The institution of caps, however, does not necessarily mean that total operations at each of the three airports will decrease. For example, at JFK, the total number of daily scheduled operations will increase by 50 flights per day over summer 2007 levels, when no caps were in place, but scheduled operations will be spaced more evenly throughout the day in an attempt to minimize peak period congestion. Two other demand management policies under way include an amendment to the Rates and Charges policy and proposed rules to establish slot auctions at all three New York area airports. 
The amendment to the Rates and Charges policy clarifies that airport operators may establish a two-part landing fee structure, consisting of both an operation charge and an aircraft weight-based charge, and includes rule changes that would expand the costs congested airports could recoup through airfield charges. The proposed slot auctions for the three New York area airports would lease the majority of operations (takeoffs and landings, or slots) to incumbent operators and help develop a market by annually auctioning off leases for a limited number of slots during the first 5 years of the rule. These two demand management policies are being developed, but it is unlikely that they will be in effect by this summer. DOT and FAA just recently announced the final Rates and Charges policy amendment, so it is unlikely the policy will have an impact this summer. Furthermore, existing use and lease agreements between airlines and airport operators could prevent any changes to rates and charges for many years, until existing lease agreements expire. DOT and FAA are currently reviewing comments for the proposed rule to establish slot auctions at LaGuardia and will be collecting comments on the proposed rule to establish slot auctions at JFK and Newark until July 21, 2008; thus it is unlikely the final rules will be issued during the summer. Table 3 lists the demand management policies and their status. More detailed information on the actions—including descriptions, geographic focus, and status—can be found in appendix II.

DOT’s and FAA’s Actions May Help Reduce Delays, but the Extent of Delay Reduction in Summer 2008 Will Likely Be Limited

DOT’s and FAA’s capacity-enhancing initiatives have the potential to reduce congestion and thereby avoid delays, according to FAA and stakeholders we consulted, but the effect will likely be limited for the summer 2008 traveling season.
DOT’s and FAA’s demand management policies—in particular, caps on scheduled operations at all three New York area airports—are expected to have some delay avoidance impact in the near term. DOT and FAA set the caps at Newark and LaGuardia at a level intended to avoid an increase in delays above that experienced in 2007 and set the caps at JFK to generate a 15 percent reduction in average departure delays over 2007 levels. The projected impact of the various actions undertaken by DOT and FAA is also expected to be muted because several will not be in place until next year or beyond. Finally, other mitigating economic factors could lead to fewer operations in 2008, which might also lead to fewer delays. Although DOT and FAA have not analyzed the potential near-term benefit of the capacity-enhancing initiatives, FAA officials and stakeholders that we spoke with anticipate that the capacity-enhancing initiatives will generally have a positive, but fairly small, impact on reducing delays in the near term. For example, while FAA has not analyzed the estimated impact of the 17 short-term initiatives, aviation stakeholders, including airport operators, airlines, and aviation industry associations, believe that these initiatives will have a positive impact in the summer of 2008. However, most think the initiatives—when taken together—will result only in incremental improvements in certain situations and, taken alone, will not provide sufficient near-term gains to accommodate the peak hour schedules at the New York area airports’ current or forecast levels of demand. Furthermore, given that the final plan for coordinating the use of military airspace is still under development, the potential impact of this effort remains unknown. However, airlines agree that increasing use of military airspace through advanced coordination holds promise, and the release of military airspace over recent holiday weekends has been beneficial.
Finally, the impact of the newly appointed aviation czar is also unknown. Some airlines and New York airport operators have supported the appointment of a czar but expressed concern that the czar, who currently lacks a dedicated budget and staff, will not have sufficient authority to direct and coordinate delay reduction efforts across FAA and DOT offices. Of the capacity-enhancing initiatives, FAA has estimated the potential future delay reduction benefits of one—the New York-New Jersey-Philadelphia Airspace Redesign. FAA estimates that the airspace redesign will result in a 20 percent reduction in national airspace system delays for the New York/New Jersey/Philadelphia study area airports as compared to taking no action. According to FAA, estimated delay reduction will vary by airport and will be achieved only once the redesign has been fully implemented. The airspace redesign, scheduled to be completed in 2012, is highly controversial because residents living in affected areas have raised concerns about potential increases in aircraft noise and other environmental effects. Demand management policies, which do not require long-term investments, will likely have a more immediate but similarly limited effect on relieving congestion and reducing delays. Because of increasing congestion at JFK and Newark, in the fall of 2007, FAA used models to analyze the airlines’ proposed 2008 summer schedules and determine potential future delays at these airports and the effect of caps. The proposed summer schedules submitted by the airlines for these airports would have constituted substantial scheduling increases over summer 2007. On the basis of these proposed schedules, DOT and FAA set the caps at JFK at a level that is projected to decrease average departure delays by 15 percent over 2007 levels. However, the caps at LaGuardia and Newark are set at a level to avoid an increase in delays over 2007 levels.
For example, at Newark, FAA estimates about a 23 percent reduction in the average delay per operation relative to a situation with no cap. Newark’s caps were designed to ensure that delays did not get significantly worse in 2008 based on the airlines’ proposed summer schedules and the potential for increased operations diverted from JFK. Thus, the caps at Newark are not expected to bring a delay reduction benefit as compared to delays experienced in 2007. At LaGuardia, which already had caps in 2007, FAA estimated that the long-term implementation of caps would reduce delays by 32 percent as compared to no cap. Caps at the New York area airports will help the region avoid additional delays in the near term, but there are also policy trade-offs to consider. In general, FAA, airlines, and aviation experts have stated that when available capacity cannot meet demand, managing operations at the airport level is necessary to reduce congestion and limit delays in the short run. FAA noted that imposing caps is an effective, but not efficient, way to reduce delays. Airlines generally support caps as a short-term solution for addressing congestion at the New York airports because of the worsening delays at these airports. FAA stated that some airlines may support caps at airports they already serve because caps generally protect incumbent airlines and limit competition from airlines that are interested in beginning service at these airports (or new entrants). However, some airport operators strongly oppose flight caps because they state that caps could constrain the economic growth of the surrounding region. In addition, some airport operators and aviation experts are concerned that using caps as a long-term solution can mask the need for capacity enhancements and shift the focus away from important long-term solutions that may provide a more lasting solution to the delay problem. 
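FAA's JFK cap order, summarized in appendix II, puts the projected 15 percent reduction in average departure delay at about 5.5 minutes. A minimal back-of-envelope sketch, using only those two quoted figures, shows the pre-cap baseline they imply:

```python
# Back-of-envelope check using figures quoted in this statement: FAA projects
# the JFK caps will cut average departure delay by 5.5 minutes, a 15 percent
# reduction. The implied 2007 baseline average departure delay is then:

reduction_minutes = 5.5
reduction_fraction = 0.15

implied_baseline = reduction_minutes / reduction_fraction
capped_average = implied_baseline - reduction_minutes

print(round(implied_baseline, 1))  # ~36.7 minutes of average delay before the caps
print(round(capped_average, 1))    # ~31.2 minutes projected under the caps
```

This is only an arithmetic consistency check on the quoted percentages, not a restatement of FAA's modeling, which accounted for schedule timing and traffic mix.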
The proposed slot auction rules for the three major New York area airports are currently out for comment and will not be implemented by this summer, but even if they were in place, they would not directly reduce delays. DOT and FAA intend the slot auctions to help create a market for slots in the New York area that allows new entrants better access to the airports and to encourage airlines currently holding slots to place a greater value on the use of their slots. By itself, a slot auction will not reduce delays. But DOT and FAA believe that by helping to reveal the economic value of slots, the policy may help to develop a more robust secondary market for slots, which will, in turn, lead to greater efficiency in their allocation and use. DOT and FAA believe that doing so may increase the size of aircraft used at the airports and thereby increase the number of passengers served. The proposed rules for the three New York area airports include different slot auction options. Only one of the two options for LaGuardia would have a direct delay reduction impact. Specifically, this option would require approximately 18 slots to be retired over 5 years, and would result in an estimated 1 minute of delay reduction for each takeoff and landing at the airport. One slot auction proposal for Newark and JFK would reallocate 10 percent of eligible capacity via annual auctions over 5 years, and FAA would retain the net auction proceeds for use on unspecified capacity improvements in the New York area. The second slot auction option at JFK would reallocate 20 percent of eligible slots over 5 years, and the net auction proceeds would be granted to the carrier whose previously held slots were auctioned. Under this option, carriers whose slots are returned for auction would not be allowed to bid on their own slots. Some airline officials and airport operators stated that airlines have made substantial investments at these airports that would be diminished if they lose operating rights.
Airlines and New York airport operators strongly oppose the proposed slot auctions because they do not think that FAA has the legal authority to implement these auctions. The potential impact of the Rates and Charges policy—a policy that is unlikely to be implemented by this summer because the final notice was only announced on July 8, 2008—was not analyzed by DOT and FAA. However, DOT and FAA assert that, if implemented, the amendment to the Rates and Charges policy may help to reduce congestion, and thus delay, by encouraging airlines to use larger aircraft and schedule fewer operations during peak usage hours. Some airport operators support this policy because it provides them with more flexibility in setting landing fees and another option for addressing delays, but the extent to which airports can or will implement the policy is unknown. Some airlines, airport operators, and aviation experts assert that an airport’s implementation of a two-part landing fee under the Rates and Charges policy may not reduce delays because the policy requires these fees to remain revenue neutral. In other words, for congested airports, the policy will not enable the differential between peak and off-peak prices to be large enough to change airline behavior while adhering to revenue neutrality. Some airlines and airport operators opposed the amendment because they think that it could discriminate against airlines whose fleets include mostly small aircraft because the amendment creates a fee differential for small to medium-sized aircraft while having a negligible effect on larger aircraft. Airlines and certain airport operators also expressed concern that under such a policy, service to small cities would be dropped because carriers would favor using larger aircraft to serve larger cities. Several airlines stated that the Rates and Charges policy does not address the bigger problem of lack of capacity in the airspace system.
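To illustrate the fee mechanics at issue (with invented numbers; the policy itself sets no rates), a two-part landing fee shifts part of a purely weight-based charge into a flat per-operation charge. Total revenue can stay neutral at the same traffic mix while small aircraft pay relatively more, which is the discrimination concern raised above:

```python
# Hypothetical illustration of the two-part landing fee structure the amended
# Rates and Charges policy permits: a flat per-operation charge plus a
# weight-based charge. All dollar figures and weights below are invented.

def two_part_fee(weight_tons, op_charge, rate_per_ton):
    """Landing fee = flat operation charge + weight-based charge."""
    return op_charge + rate_per_ton * weight_tons

# Old scheme: purely weight-based at $2.00/ton, so a 50-ton "average" aircraft
# pays $100. Shifting $60 into a flat charge (rate drops to $0.80/ton) keeps
# revenue neutral at the average weight...
assert two_part_fee(50, 60, 0.80) == 2.00 * 50

# ...but raises the effective cost for small aircraft and lowers it for large:
print(two_part_fee(20, 60, 0.80))  # 76.0, versus 40.0 under the old scheme
print(two_part_fee(80, 60, 0.80))  # 124.0, versus 160.0 under the old scheme
```

This sketch also shows why revenue neutrality constrains the peak/off-peak differential: any increase in one component must be offset elsewhere at the same traffic mix.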
Finally, other interrelated factors beyond government initiatives, such as the financial state of the aviation industry, increasing jet fuel prices, and the downturn in the economy, may also result in fewer delays during 2008, but their impact is uncertain. The Air Transport Association expects a 1 percent reduction in the number of passengers for the summer 2008 travel season as compared to the 2007 summer travel season, and many airlines are planning more substantial reductions in capacity and schedules for the fall and winter 2008 seasons. Economic conditions, rising fuel costs, and airline-initiated capacity cuts could affect demand for air travel or available capacity in the coming months. These factors could also reduce congestion and, accordingly, delays, making it difficult to determine how much of the delay reductions, if any, might be attributed to the capacity-enhancing initiatives or demand management policies planned for summer 2008.

In closing, DOT and FAA should be commended for taking steps to reduce mounting flight delays and cancellations for the 2008 summer travel season. However, delays and cancellations this summer could still be significant given the likely limited impact of DOT’s and FAA’s actions. Capacity-enhancing initiatives can provide some limited benefit in the near term, but they do not fundamentally expand capacity. Demand management policies, especially those that artificially restrict demand—like schedule caps—may limit increases in delays, but should not be viewed as a meaningful or enduring solution to addressing the fundamental imbalances between the underlying demand for and supply of airspace capacity. The growing air traffic congestion and delay problem that we face in this country is the result of many factors, including airline practices, inadequate investment in airport and air traffic control infrastructure, and how aviation infrastructure is priced.
Addressing this problem involves difficult choices, which affect the interests of passengers, airlines, airports, and local economies. If not addressed, congestion problems will intensify as demand is expected to grow over the next 10 years. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have.

GAO Contact and Staff Acknowledgments

For further information on this testimony, please contact Susan Fleming at (202) 512-2834 or flemings@gao.gov. Individuals making key contributions to this testimony include Paul Aussendorf, Amy Abramowitz, William Bates, Jonathan Carver, Jay Cherlow, Lauren Calhoun, Delwen Jones, Heather Krause, Sara Ann Moessbauer, and Maria Wallace.

Appendix I: New York Short-Term Initiatives

1. Daily planning teleconferences to provide a common situational awareness for customers—such as airlines, airport operators, the military, and general aviation—on the planned daily operations at JFK.

2. Simultaneous runway approaches to 31L/R will allow approximately 4 to 6 more aircraft to land on this runway configuration when weather conditions are classified as instrument meteorological conditions (IMC).

3. Accessing J134/J149 from Eliot Intersection (for use during Severe Weather Avoidance Programs). When thunderstorms affect the west departure routes, aircraft will be rerouted using the Eliot departure fix. Benefits have not been identified, but are available for use as weather events dictate.

4. Pass Back Departure Restrictions—700 mile restriction. Pass back restrictions were removed on October 11, 2007, beyond 700 miles for traffic destined for the New York airports. Departure restrictions to airports often lead to delays as controllers have to wait to release aircraft. Eliminating this airport restriction and allowing en route controllers to build in the spacing improves airport efficiency.
5. Briefings and trainings at major facilities are planned to speed implementation of changes associated with the “proximity event” category. The intent is to help educate controllers that reducing excessive spacing between aircraft on final approach can help reduce delay and should not be considered an error, because it does not pose a safety risk.

6. Under certain conditions, control of the holding pattern airspace will transfer from the New York Air Route Traffic Control Center (ZNY) to the New York TRACON (N90). This allows aircraft to transition out of the holding pattern using terminal separation standards (3 miles) as opposed to the en route separation standards (5 miles).

7. NY Area Severe Weather Avoidance Procedure Action Team Items—Route Availability Planning Tool (RAPT). When affected by thunderstorms, controllers and traffic flow managers will use a weather forecasting technology to identify the availability of departure routes, and provide traffic management specialists with the ability to more quickly open and close routes and to reroute aircraft.

Creating another westbound departure route parallel to J80 has the potential to mitigate westbound delays from JFK.

A reallocation of the lower part of sector 73 at the New York Air Route Traffic Control Center will allow the remaining sector to focus on aircraft departing Philadelphia and New York.

Move current BOS arrivals via J79 to the east and reduce congestion at the MERIT departure fix.

SWAP escape routes in Canadian airspace are used and coordinated daily with Canada’s civil air navigation services provider (NAV CANADA). Used mostly during the summer because of thunderstorms and winds in the United States.

Allows for more efficient arrivals from the north into Newark by moving or eliminating crossing traffic. No added capacity benefits are expected. Do expect to get some added operational efficiency for aircraft while in the en route portion of flight.
A procedure that allows for simultaneous arrivals on runways 4L and 4R, when weather permits.

Traffic management procedure to allow EWR arrival aircraft to fly at higher altitudes and in a less circuitous route. No added capacity benefits are expected.

Procedures currently allow for these runway configurations to be used in Visual Meteorological Conditions (VMC). Waiver has been signed to allow arrivals to land on Runway 29 while landing on Runway 4R.

Appendix II: Status and Reported Benefits of Capacity-Enhancing Initiatives and Demand Management Policies

The New York Aviation Rulemaking Committee (ARC) recommended a list of 77 items for consideration and implementation in the New York area. From these, FAA identified 17 short-term initiatives for immediate action. Eleven of the 17 short-term initiatives are currently complete. The others are planned for completion by the end of fiscal year 2008. Benefits not analyzed but likely to be small.

FAA is working with DOD to explore the current use of special use airspace, develop proposals for increased civil use of military airspace, and evaluate letters of agreement that provide operational direction for the shared uses of special use airspace. FAA’s efforts to standardize use of military airspace with DOD are ongoing and the outcome is uncertain. Final plan unknown, therefore benefit unknown.

The Airspace Redesign of the NY/NJ/PHL metropolitan area involves changes to airspace configurations and air traffic management procedures. The selected alternative (Integrated Airspace Alternative with Integrated Control Complex) integrates the entire airspace with a common automation platform. Air traffic controllers can reduce aircraft separation rules from 5 to 3 nautical miles over a larger geographical area than the current airspace structure allows. Implementation began on December 19, 2007, with the introduction of additional departure headings at Philadelphia International and Newark International airports.
FAA has stated that it does not believe there will be additional changes implemented until fall 2008. Final implementation by 2012. When the redesign is fully implemented in 2012, FAA estimated a 20 percent reduction in national airspace system delay in the study area as compared to taking no action. Estimated arrival and departure delay reduction varies between airports.

ARC participants agreed that appointing a New York aviation czar to coordinate regional airspace issues and all projects and initiatives addressing problems of congestion and delays in New York would be beneficial. As a result, the Director of the New York Integration Office position was created. Marie Kennington-Gardiner has been appointed Director of the New York Integration Office.

In January 2008, FAA issued an order setting a cap on the number of hourly operations at JFK. The order took effect March 30, 2008, and will expire October 24, 2009. Operations are capped at 81 per hour. FAA estimates that caps would reduce average departure delays by 5.5 minutes, or 15 percent. The number of departure delays of 60 minutes or more would decrease 31 percent. Based on proposed summer 2008 schedules, estimated delays could have increased by up to 150 percent.

In March 2008, FAA proposed an order to cap flights at Newark. The final order was issued on May 21, 2008, takes effect on June 20, 2008, and expires October 24, 2009. Scheduled operations are capped at 81 per hour by summer 2008. Slight reduction in arrival delays offset by slight increase in departure delays with no estimated net change in average delay between 2007 and 2008. The purpose is to keep delays from worsening at Newark in 2008 because of caps at JFK. Based on proposed summer 2008 schedules, estimated arrival delays would increase by as much as 50 percent in 2008 without the limits.

In December 2006, FAA published a temporary order maintaining the same caps and exemptions in place since November 2000.
In April 2008, FAA also published an order limiting unscheduled operations to 3 per hour. Scheduled operations will be capped at 75 per hour during summer 2008. FAA estimates a 32 percent reduction in average delay as compared to no cap. As the caps were already in place, no new benefit is expected in summer 2008.

In April 2008, FAA issued a supplemental rulemaking to lease the majority of slots at the airport to the incumbent operators and to develop a market by annually auctioning off leases for a limited number of slots during the first 5 years of the rule. Two options to annually auction these slots were proposed. Comment period ended June 16, 2008. DOT is reviewing comments. The benefit will depend on the option selected. Option 1 (slot retirement of 1.5 slots per year) is estimated to result in 1 minute of average delay reduction. Option 2 does not retire slots. DOT believes the proposal will help reveal the economic value of slots, and may increase the size of aircraft used at the airports, and thereby increase the number of passengers served.

In May 2008, FAA issued a notice of proposed rulemaking to assign to existing operators the majority of slots at Newark and JFK, and create a market by annually auctioning off a limited number of slots in each of the first 5 years. In comment period until July 21, 2008. FAA states that the immediate impact will be to prevent a return to, or worsening of, the conditions and delay experienced during summer 2007. By itself, a slot auction will not reduce delays. However, DOT believes the proposal will help reveal the economic value of slots, and may increase the size of aircraft used at the airports, and thereby increase the number of passengers served.

Announced in July 2008, the policy clarifies the ability of airport operators to establish a two-part landing fee structure consisting of both an operation charge and a weight-based charge, giving airports the flexibility to vary charges based on the time of day and the volume of traffic.
It also permits the operator of a congested airport to charge users a portion of the cost of airfield projects under construction and expands the authority of an operator of a congested airport to include in its airfield fees a portion of the airfield fees of other underutilized airports owned and operated by the same proprietor. Geographic focus: U.S. Final policy issued July 8, 2008. Not assessed; it is unknown to what extent airports can or will implement this policy or how airlines would respond if it is implemented.

For some actions, DOT has stated additional benefits unrelated to delay reduction.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Flight delays and cancellations have plagued the U.S. aviation system. According to the Department of Transportation (DOT), more than one in four flights either arrived late or was canceled in 2007—making it one of the worst years for delays in the last decade. Delays and cancellations were particularly evident at certain airports, especially the three New York metropolitan commercial passenger airports—Newark Liberty International (Newark), John F. Kennedy International (JFK), and LaGuardia. To avoid a repeat of last summer’s problems, DOT and the Federal Aviation Administration (FAA) have worked with the aviation industry over the past several months to develop and implement several actions to reduce congestion and delays for the summer 2008 travel season. This testimony addresses (1) the trends in the extent and principal sources of flight delays and cancellations over the last 10 years, (2) the status of federal government actions to reduce flight delays and cancellations, and (3) the extent to which these actions may reduce delays and cancellations for the summer 2008 travel season. This statement is based on an analysis of DOT data on airline on-time performance, a review of relevant documents and reports, and interviews with officials from DOT, FAA, airport operators, and airlines, as well as aviation industry experts and associations. DOT and FAA provided technical comments which were incorporated as appropriate. DOT data show that flight delays and cancellations have increased nationwide and especially in the New York region; however, the data provide an incomplete picture of the source of delay. Since 1998, the total number of flight delays and cancellations nationwide has increased 62 percent, while the number of scheduled operations has increased about 38 percent. Flight delays and cancellations in the New York region are even more pronounced.
Specifically, since 1998, the number of flight delays and cancellations in the New York region has increased about 111 percent, while the number of operations has increased about 57 percent. DOT data on the sources of delays provide an incomplete picture. For example, in 2007, late arriving aircraft accounted for 38 percent of delays nationwide, but this category indicates little about what caused the aircraft to arrive late, such as severe weather. To reduce delays and congestion beginning in summer 2008, DOT and FAA are implementing several actions that for the purposes of this review GAO is characterizing as capacity-enhancing initiatives and demand management policies. Some of these actions are already in effect, such as 11 of the 17 short-term initiatives designed to improve capacity at the airport or system level and the hourly schedule caps on operations at the New York area airports. The other actions are being developed but are unlikely to be in effect by this summer. For example, DOT and FAA are soliciting comments on the proposed rule to establish slot auctions at JFK and Newark until July 21, 2008. DOT’s and FAA’s capacity-enhancing initiatives and demand management policies may help reduce delay, but the collective impact of these actions on reducing delay in 2008 is limited. For example, the benefit of the 17 initiatives—which range from efforts to reduce excessive spacing on final approach before landing to new procedures for handling air traffic during severe weather conditions—is generally expected to come from the initiatives’ combined incremental improvements over time and in certain situations. The demand management policies may have a more immediate but limited effect on delays since the caps at Newark and LaGuardia were set at a level that was generally designed to avoid an increase in delay over 2007 levels. For example, the caps at Newark are set at a level that is not expected to bring a delay reduction as compared to delays in 2007.
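The delay-growth figures cited in this summary (62 versus 38 percent nationwide; 111 versus 57 percent for the New York region) also imply how much the delay rate per scheduled operation rose. A minimal sketch, using only those quoted percentages:

```python
# Rough derivation from the growth figures quoted above: if total delays and
# cancellations grew faster than scheduled operations, the per-operation delay
# rate rose. Growth inputs are fractions (0.62 = 62 percent growth).

def delay_rate_growth(delay_growth, ops_growth):
    """Implied growth in delays/cancellations per scheduled operation."""
    return (1 + delay_growth) / (1 + ops_growth) - 1

nationwide = delay_rate_growth(0.62, 0.38)  # delays per operation up ~17%
new_york = delay_rate_growth(1.11, 0.57)    # delays per operation up ~34%

print(round(nationwide * 100, 1))  # 17.4
print(round(new_york * 100, 1))    # 34.4
```

This is an arithmetic restatement of the cited totals, not an independent analysis of the underlying DOT on-time data.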
Background

The S&T Directorate consists of four offices responsible for managing and executing DHS’s R&D programs: (1) the Office of Programs, Plans and Budgets (PPB); (2) the Office of Research and Development (ORD); (3) the Homeland Security Advanced Research Projects Agency (HSARPA); and (4) Systems Engineering and Development (SED), as seen in figure 1 below.

[Figure 1: Organization of the S&T Directorate under the Under Secretary for Science and Technology, showing PPB, ORD (including DOE labs), HSARPA, and SED.]

On June 29, 2005, a Chief Financial Officer position was created for the S&T Directorate to consolidate and execute budgetary planning. Because the budgetary responsibility for the S&T Directorate was moved out of the Office of Programs, Plans, and Budgets, its name was changed to the Office of Programs, Plans and Requirements (PPR). This new position and name change are not reflected in this figure.

In the first few years of DHS’s existence, the S&T Directorate focused on the urgency of organizing itself to meet the nation’s homeland security research and development requirements, and had few resources devoted to developing its management infrastructure, including the management controls to guard against conflicts of interest. In our 2004 report on DHS’s potential use of the national laboratories, we noted that when the S&T Directorate began operating in March 2003, it sought and hired scientists, engineers, and experts in needed disciplines from federal laboratories, universities, and elsewhere in the federal government. These individuals were brought into the S&T Directorate to use their knowledge in ways that would help the Directorate achieve its mission quickly and effectively. DHS officials told us that these individuals came to DHS out of a sense of urgency and motivated by a strong sense of patriotism.
Some of these scientists were hired from the national laboratories, and they came with the understanding that they would return to their laboratories following their time at DHS. As part of their responsibilities, portfolio managers led and facilitated Integrated Project Teams (IPT), which included representatives from ORD, HSARPA, and SED. In addition to identifying R&D projects and budgets, IPTs were responsible for determining which office (ORD, HSARPA, or SED) within the S&T Directorate would be responsible for them. These determinations were important because they influenced whether the project and associated funds went to the public or private sector. According to a March 2004 Office of Inspector General report, ORD generally executes programs that involve the national laboratories and which the private sector should not, could not, or would not perform. HSARPA generally executes programs for which technology development involves the private sector. SED generally executes programs employing proven technologies and resulting in transition to operational systems. As previously discussed, IPA employees are generally subject to the same laws and regulations that govern the ethical conduct of other federal employees. Section 208 of Title 18 of the United States Code, a criminal statute, generally precludes federal employees from personally and substantially participating in any particular matter that would have a direct and predictable effect on their financial interests, or the financial interest of any organization attributable to them. An employee’s participation is “substantial” if it is meaningful to the matter. An employee can be personally and substantially involved in a variety of ways, including making a recommendation, rendering advice, or making a decision on a particular matter.
The law can be waived if the employee first makes a full disclosure of the conflict of interest to the official responsible for his or her appointment, “and receives in advance a written determination made (i.e., waiver) by such official that the interest is not so substantial as to be deemed likely to affect the integrity of the services which the government may expect.” Executive departments and agencies are required to forward documentation of such waivers to OGE. Waivers cannot be granted to cover past activities. In addition to avoiding conflicts of interest, executive branch employees must avoid even the appearance of a conflict of interest. However, when there is potential for such an appearance of a conflict, an employee can be granted an “authorization” to work on a matter even in situations where a reasonable person with knowledge of the relevant facts can question the employee’s impartiality in a matter. As mentioned earlier, OMB requires agencies to establish a set of management controls and GAO issues standards for internal control in the federal government. In addition, GAO developed the Internal Control Management and Evaluation Tool to help managers and evaluators determine how well an agency’s internal control is designed and functioning and help determine what, where, and how improvements, when needed, may be implemented. The five standards for internal control are: control environment, risk assessment, control activities, information and communications, and monitoring. Two of these standards, control environment and control activities, include key factors related to conflicts of interest. If effectively implemented, these internal controls can help to guard against employees participating in actions that present a personal conflict of interest. 
Examples of relevant key factors that address the establishment and maintenance of an effective control environment of an agency are: establishment and use of a formal code of conduct and other policies communicating appropriate ethical and moral behavioral standards and addressing acceptable operational practices and conflicts of interest; establishment of an ethical tone at the top of the organization that is communicated throughout the agency; and implementation of policies and procedures for hiring employees. Internal control activities are the policies, procedures, techniques, and mechanisms that help ensure that management’s directives to mitigate identified risks are carried out. Examples of relevant key factors that address internal control activities are: existence of appropriate policies, procedures, techniques, and mechanisms with respect to each of the agency’s activities; provision of appropriate training and other control activities to give employees the tools they need to perform their duties and responsibilities and to meet the demands of changing organizational needs; and complete and accurate documentation of transactions and other significant events that facilitates tracing each transaction or event and related information from authorization and initiation, through processing, to completion. DHS’s S&T Directorate Can Do More to Improve Its Management Controls Related to Conflicts of Interest for Its IPA Portfolio Managers DHS’s S&T Directorate has implemented several management controls to help its IPA portfolio managers comply with conflict of interest laws and regulations. Most of these were implemented during the course of our review. Since the S&T Directorate was created in 2003, individuals employed in the S&T Directorate under the IPA have completed an “assignment agreement,” as required by OPM. Having procedures in place for hiring employees and implementing them is one aspect of an effective management control environment.
The assignment agreements include a section on conflicts of interest and employee conduct. As part of the assignment agreement, each applicant must acknowledge that: “applicable Federal, State or local conflict-of-interest laws have been reviewed with the employee to assure that conflict-of-interest situations do not inadvertently arise during this assignment”; and “the employee has been notified of laws, rules and regulations, and policies on employee conduct which apply to him/her while on this assignment.” We reviewed the IPA assignment agreements for all of the IPA portfolio managers and found that the IPA portfolio managers acknowledged these provisions. The S&T Directorate’s leadership took steps to establish an ethical tone and communicate it through a March 15, 2004, memorandum from DHS’s Undersecretary for S&T to all S&T Directorate employees emphasizing that they should strictly adhere to all applicable ethics laws. The memo summarized ethics laws, called attention to the consequences of noncompliance, provided points of contact for those with questions, and explained that S&T employees “have the responsibility to be scrupulous in complying with all applicable ethics laws.” Further, the memo specifically mentioned that employees hired under the IPA may not participate in matters involving their “home” institution (which, in the S&T Directorate, has often been a DOE national laboratory). The memo explained provisions of 18 U.S.C. § 208, stating that an employee may not participate “personally and substantially” in a particular matter that may affect an entity in which he has a financial interest and that “personal and substantial participation can occur if the employee participates in a decision, approval, disapproval, recommendation, investigation, or the rendering of advice on the matter.” According to DHS’s DAEO, the IPAs in the S&T Directorate were employed before a process was in place to screen them for personal conflict of interest issues. 
On June 30, 2005, the S&T Directorate issued new internal procedures for hiring employees under the IPA. These new procedures outline the responsibilities of the parties involved in the hiring process and detail the steps necessary to hire an IPA. These steps include a preliminary review of financial disclosure forms by DHS’s Office of General Counsel (OGC) to determine whether conflicts of interest exist based on the roles and responsibilities of the proposed position. Along with these new hiring procedures, the S&T Directorate began requiring applicants being considered under the IPA to complete written disqualification statements meant to bar their involvement in any matter that could reasonably be perceived to affect the interests of their national laboratory or other employer. In addition, once hired, IPAs can complete a memorandum that provides their supervisor with a written recusal from “certain Government matters” that affect the institution to which they will return after their employment at DHS, and allows them “to describe the screening arrangement” they are implementing to ensure that they comply with their “obligation to recuse.” In this memorandum, the employee then lists each asset, entity, or other interest that gives rise to a disqualifying interest under 18 U.S.C. § 208. DHS officials told us that S&T Directorate employees, including those hired under the IPA, are offered the same new employee and annual ethics training as are all new DHS employees. Training and orientation programs for new employees, along with ongoing training for all employees, are key activities for establishing effective controls. On January 7, 2005, the Assistant Secretary of PPB also held a mandatory meeting for all IPAs in the S&T Directorate. S&T Directorate officials told us that this meeting was called to discuss the ethics issues that apply specifically to employees hired under the IPA, including the conflict of interest statutes.
Other important management controls that could help ensure portfolio managers comply with conflict of interest laws are not yet in place in the S&T Directorate. Importantly, the process for determining where R&D projects and funds are directed, including the role of the IPA portfolio managers, has never been finalized. Establishment of a process for each agency activity is one of the key factors for meeting internal control standards. Though IPTs were created to help make this determination, as previously discussed, we were told that each IPT worked differently and there was no requirement that they operate in the same way. In addition, neither the S&T Directorate nor its draft process requires documentation of how determinations are made about where R&D projects and funds are directed. Further, the S&T Directorate is only now seeking waivers, where appropriate, and considering whether to grant authorizations or take other actions for its portfolio managers hired under the IPA. As we discussed, under 18 U.S.C. § 208(b)(1), the official responsible for an employee's appointment may grant a waiver in advance allowing participation in certain matters if he or she makes a written determination that the affected financial interest “is not so substantial as to be deemed likely to affect the integrity” of the employee's services. In May 2005, an S&T Directorate official stated to us that they first needed to “scrutinize all of their positions to determine whether an actual or apparent conflict requires such action.” In August 2005, senior S&T officials told us that, in conjunction with DHS’s DAEO and OGE, they had begun the process of determining whether to issue waivers to IPA portfolio managers. During our exit briefing with DHS in September 2005, DHS officials indicated that one option might be to not hire anyone for whom a waiver may be needed.
In its December 2005 letter commenting on our report, DHS noted that the S&T Directorate is now seeking waivers for at least six of its IPAs. Finally, IPA portfolio managers in the S&T Directorate are not routinely offered specific training that focuses on the application of the ethics statutes and regulations to the unique financial relationship between the IPA portfolio managers and their “home” institution. The January 2005 meeting held with all IPAs in the S&T Directorate to discuss the specific ethics issues related to their circumstances is not scheduled to be repeated. Ensuring that management conveys the message on a periodic basis that integrity and ethical values must not be compromised is part of maintaining an effective control environment. Because IPA portfolio managers have ties to their “home” institution, and because their responsibilities at DHS may involve issues that affect that institution, ensuring that these managers receive regular training that targets the application of conflict of interest laws to IPAs may keep them alert to those actions that could constitute a violation of such laws. IPA Portfolio Managers’ Role in Determining Where R&D Projects and Funds Were Directed Was Unclear The recent changes and further improvements to the S&T Directorate’s ethics-related management controls are critical because we found that the role of the IPA portfolio managers in determining where R&D projects and associated funds were directed was unclear. This was due to several factors, as discussed in more detail below. First, the process that was to be followed by IPA portfolio managers when determining where R&D projects and funds are directed, and the decision-making role of the IPA portfolio managers within such a process, have never been finalized. DHS provided us with a draft version of this process as part of a Web-based tool.
However, IPTs were not required to follow this draft process and team members from the two IPTs that we examined said that they were becoming familiar with the process. In this draft, DHS stated that IPTs, facilitated by portfolio managers, were to “decide” which office within the S&T Directorate would execute a project (i.e., ORD, HSARPA, or SED). The draft stated that if the members of the IPT could not reach agreement, the project would be reviewed by the Executive Review Board (ERB), which consisted of the Assistant Secretary, Programs, Plans, and Budgets, and the Directors of SED, ORD, and HSARPA. However, in September 2005, senior S&T Directorate officials told us that the information regarding the decision-making role of the IPT in the draft Web-based tool was inaccurate, indicating that IPTs can only make recommendations to the ERB, but not a final decision. However, as we noted, 18 U.S.C. § 208 guards against “personal and substantial participation” through various actions, which include “decision” and “recommendation.” Second, DHS officials, portfolio managers, and IPT members were unable to provide us with any documentation, such as meeting minutes, to indicate the actual role that the five IPA portfolio managers from the national laboratories played in the decision-making process within the IPTs. Third, the testimony regarding the extent of the IPA portfolio managers’ involvement in the decision-making process was inconsistent and, at times, vague. For example, one IPA portfolio manager told us that he/she recused himself/herself from any decision that may have involved his/her national laboratory, although this manager noted that he/she was present and “facilitated” the IPT meetings when such decisions were made. Other IPA portfolio managers told us that they participated to varying degrees.
For example, one told us that he/she was involved in the IPT decisions regarding which S&T Directorate office would execute a project only when the other IPT members could not reach agreement. Another told us that he/she participated in all IPT discussions that helped make this determination. However, because there was no documentation of the decision-making process, we could not determine the extent of the IPA portfolio managers’ actual involvement on any particular funding matter, or whether their involvement affected their “home” institution, such as a national laboratory. In March 2005, we discussed our review with OGE to obtain its views on the ethics issues, both in general and as they may specifically apply to the S&T Directorate. During these discussions, OGE officials told us that they planned to begin their first audit of DHS’s ethics program in late 2005. However, because we could not determine whether the IPA portfolio managers participated “personally and substantially” in the decision-making process, we contacted the Acting Director of OGE in September 2005 and suggested that OGE review this matter further in conjunction with its planned ethics program review of DHS. In December 2005, OGE officials told us that they plan to examine, among other matters, the transparency and accountability issues in DHS’s ethics program raised by our findings. Conclusions In the first few years of its existence, the S&T Directorate focused on rapidly organizing itself to meet the nation’s homeland security R&D requirements. During this time, DHS had few resources devoted to developing the S&T Directorate’s management infrastructure, including management controls to guard employees against conflicts of interest. Although the S&T Directorate has recently implemented management controls to help protect against conflicts of interest, and is currently considering others, more needs to be done.
In the absence of a process for deciding what entities will implement R&D projects, the role that IPA portfolio managers played has been inconsistent and the potential exists that they may have unknowingly violated conflict of interest laws. By developing and carrying out a process to decide which office will execute a project, and clearly defining the roles and responsibilities of those involved in the process, the S&T Directorate may help its IPA portfolio managers avoid such situations in the future. In addition, documenting how the decisions are made while implementing this process may help protect both DHS and its employees if questions are raised. Ensuring that the S&T Directorate continues to have access to the best personnel with needed expertise is important to the success of DHS’s mission. The IPA provides the S&T Directorate with a mechanism to hire some of these people. However, because IPA portfolio managers have an arrangement for future employment with an entity that could benefit from the S&T Directorate’s work, determining whether (1) waivers of the conflict of interest laws are appropriate, (2) IPA portfolio managers should be authorized to work on these issues regardless of any appearance of a conflict, or (3) DHS should take other steps to facilitate the use of their expertise to achieve the S&T Directorate’s mission, could help ensure that these valuable employees are protected against violating conflict of interest laws. Further, once hired, IPA employees must understand how the ethics laws address their unique situations; namely, that they have an agreement for future employment with an entity that stands to benefit from the S&T Directorate’s funding. Regular training for IPA portfolio managers that targets the conflict of interest laws could help them understand what actions are not permitted. 
Finally, to help ensure that DHS’s ethics-related management controls are implemented and working in a satisfactory manner, it is critical that DHS establish a monitoring and oversight program. Such a monitoring mechanism will allow the S&T Directorate to assess its ethics-related management controls in order to facilitate awareness and mitigation of risk in DHS, while providing a greater degree of impartiality and integrity. Recommendations for Executive Action To help IPA portfolio managers comply with the conflict of interest law, we are recommending that the Secretary of Homeland Security direct the Undersecretary of the S&T Directorate to improve the S&T Directorate’s management controls related to potential conflicts of interest by finalizing the S&T Directorate’s R&D process and defining and standardizing the role of the IPA portfolio managers in this process; developing a system to document how decisions are made within this process; determining, in consultation with DHS’s DAEO and OGE, whether waivers of 18 U.S.C. § 208 or authorizations related to the appearance of a conflict of interest are appropriate, or other actions are needed; providing regular ethics training for IPA portfolio managers that focuses on the application of the ethics statutes and regulations to their unique financial situation; and establishing a monitoring and oversight program of ethics-related management controls. Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of Homeland Security. DHS concurred with our recommendations and noted some actions that it plans to take. If implemented effectively, these actions would be responsive to some of our recommendations. For example, the S&T Directorate plans to (1) coordinate with the DAEO and OGE in seeking waivers under 18 U.S.C.
§ 208 for some of the IPAs in the S&T Directorate; (2) enhance its ethics-related training for IPAs; and (3) strengthen its monitoring and oversight programs for ethics-related management controls. Although DHS agreed with all of our recommendations, it believed that we misstated the facts in asserting that IPA employees do not routinely receive specific training regarding conflicts of interest. We revised the report to indicate that the ethics training we believe is still needed should focus on the application of the ethics statutes and regulations to the unique financial relationship between the IPA portfolio managers and their “home” institutions. In addition, we are encouraged that the S&T Directorate has reviewed the individual circumstances of all of the IPAs in the S&T Directorate and is seeking waivers under 18 U.S.C. § 208 for at least six of these individuals. However, as stated in our report, the S&T Directorate has not finalized the process for determining where research and development projects and associated funds are directed, nor has it defined and standardized the role of the IPA portfolio managers in this process. Further, the ability of IPA portfolio managers themselves to influence or control where projects and funds are directed has been inconsistent and, at times, unclear within the S&T Directorate. Thus, IPA portfolio managers continue to be vulnerable to violating the conflict of interest laws. DHS’s comments are provided in appendix III. In addition, we received technical comments from DHS, which we incorporated as appropriate. We also provided a draft to OGE. On December 8, 2005, we met with OGE officials, including the Deputy Director of the Office of Agency Programs, who provided us with technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any further questions about this report, please contact me at (202) 512-6806 or stalcupg@gao.gov. Major contributors to this report included Ben Crawford, Terry Draver, John Krump, James Lager, Andrea Levine, Sarah Veale, and Michael Volpe. DHS Research and Development Funding Distribution in Fiscal Year 2004 In fiscal year 2004, the most recent year in which the Science and Technology (S&T) Directorate could provide us with detailed breakdowns of its obligated funds, about 41 percent of the $761 million obligated for its research and development (R&D) funding was distributed to Department of Energy and federal laboratories (mostly the Office of Research and Development’s programs) and about 40 percent to the private sector (mostly the Homeland Security Advanced Research Projects Agency’s programs), as seen in figure 2 below. Scope and Methodology The objectives of our review were to examine (1) the management controls that have been established within the Department of Homeland Security’s (DHS) Science and Technology (S&T) Directorate to help guard against conflicts of interest for portfolio managers hired under the Intergovernmental Personnel Act (IPA), and (2) the role of the IPA portfolio managers (particularly those from the national laboratories) in determining where research and development (R&D) projects and associated funds are directed. To address our objectives, we analyzed DHS documentation of management controls related to conflicts of interest and other relevant documents. These documents included such materials as agency directives, official memos, human capital procedures, fiscal years 2007-2011 Planning, Programming, and Budgeting Cycle guidance, DHS reports and testimony to Congress, and IPA agreement forms for the Directorate’s employees hired under the IPA. 
In addition, we reviewed the most current, but incomplete, draft of an electronic version of the Research, Development, Testing and Evaluation process to be used by the S&T Directorate. We reviewed relevant laws and regulations, including the Homeland Security Act of 2002; 18 U.S.C. § 208(a) and (b); and 5 C.F.R. pt. 2635. In addition, we used GAO’s Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool. We also reviewed prior work from DHS’s Office of the Inspector General (OIG) and GAO on the DHS S&T Directorate and ethics-related issues. We interviewed officials in the S&T Directorate, including the Deputy Secretary for S&T and head of Programs, Plans, and Budgets (PPB); Director of the Office of Research and Development (ORD); Acting Director of the Homeland Security Advanced Research Projects Agency (HSARPA); S&T portfolio managers, five of whom were employed by DHS on IPA agreements from the national laboratories; and the human capital director for S&T. We did not interview the Director of Systems, Engineering, and Development (SED) because SED works with mature technologies at or near the deployment stage, rather than technologies needing R&D by an entity like the national laboratories. More specifically, we examined the role of the IPA portfolio managers from the national laboratories in determining where R&D projects and associated funds were directed during the period from December 2004 through May 2005. In addition, we interviewed DHS’s Designated Agency Ethics Officer, attorneys in DHS’s General Counsel’s Office, and DHS’s OIG. We judgmentally selected two portfolios within the S&T Directorate to examine in more detail the existence of their process and management controls and compare any differences in the application of such processes and controls.
These portfolios were: (1) the Biological Countermeasures portfolio, which is the largest portfolio in the S&T Directorate and is run by an IPA; and (2) the Border and Transportation Security (BTS) portfolio, a smaller portfolio managed by a career federal employee. We interviewed the members of these Integrated Project Teams, which included representatives of PPB, HSARPA, ORD and SED. In addition, we reviewed the fiscal years 2004 and 2005 Execution Plans for the Biological Countermeasures portfolio, the fiscal year 2004 Execution Plan for the BTS portfolio, and the fiscal year 2004 BTS portfolio funding allocations by type of entity (e.g., national laboratory, university, private industry). We also met with the Acting Director of the Office of Government Ethics (OGE) and her staff to discuss the ethics issues we were reviewing at DHS. OGE exercises leadership in the executive branch to prevent conflicts of interest on the part of government employees and to resolve those conflicts of interest that do occur. The responsibilities of the Director of OGE include, among other things, consulting with agency ethics counselors and other responsible officials regarding the resolution of conflict of interest problems in individual cases, and ordering corrective action on the part of agencies and employees which the Director deems necessary. Written comments from DHS are included in appendix III. We performed our work from September 2004 through December 2005 in accordance with generally accepted government auditing standards.
The Department of Homeland Security's (DHS) Science and Technology (S&T) Directorate was established to focus on areas such as addressing countermeasures for biological threats. To do this, it hired experts from the national laboratories under the authority of the Intergovernmental Personnel Act (IPA). The Directorate is organized into portfolios, led by portfolio managers. Questions have been raised about potential conflicts of interest for these individuals, since a portion of the Directorate's research funds have gone to the national laboratories. GAO was asked to examine (1) the management controls established within the Directorate to help guard against conflicts of interest for IPA portfolio managers; and (2) the role of the IPA portfolio managers, particularly those from national laboratories, in determining where research and development projects were directed. DHS's S&T Directorate is working to improve its management controls to help guard against conflicts of interest for its IPA portfolio managers, but it can do more. In the first few years of DHS's existence, the S&T Directorate focused on the urgency of organizing itself to meet the nation's homeland security research and development requirements, and had few resources devoted to developing its management infrastructure, including the management controls to guard against conflicts of interest. In the past year, steps have been taken to improve these controls. For example, in June 2005, DHS implemented a new process for hiring IPA employees. Although the S&T Directorate is taking steps to improve its ethics-related management controls, several conditions still need to be addressed to better ensure that its IPA portfolio managers comply with the conflict of interest laws. First, the process for determining where research and development projects and funds are directed, including the role of the IPA portfolio managers, has never been finalized. 
Second, the S&T Directorate does not require documentation of how determinations are made about where research and development projects and funds are directed. Third, S&T Directorate officials are only now seeking waivers, where appropriate, and considering whether to take other actions that would allow IPA portfolio managers to participate in certain matters. Finally, DHS officials told us that S&T Directorate employees, including those hired under the IPA, are offered the same new employee and annual ethics training as are all DHS employees. However, employees hired under the IPA do not receive regular training that addresses their unique situation; namely that they have an agreement for future employment with an entity that may benefit from the S&T Directorate's funding. The role of the IPA portfolio managers, five of whom came from the national laboratories, in determining where research and development projects and associated funds were directed was unclear. This was due to several factors. First, as previously discussed, the S&T Directorate has never finalized a standard process for determining where research and development projects and funds are directed, or the decision-making role of the IPA portfolio managers within such a process. Second, the extent of the IPA portfolio managers' participation in making these determinations was unclear because there was no documentary evidence of how these determinations were actually made. Third, the testimonial evidence on the extent of the IPA portfolio managers' involvement was inconsistent and, at times, vague. Because we could not determine whether or not the IPA portfolio managers participated "personally and substantially" in the decision-making process, which is precluded by 18 U.S.C. 208, GAO contacted the Acting Director of the Office of Government Ethics (OGE) in September 2005. GAO suggested that OGE review this matter further in conjunction with its planned ethics program review of DHS. 
In December 2005, OGE officials told us that they plan to examine, among other matters, the transparency and accountability issues in DHS's ethics program raised by our findings.
Background In 2003, of the roughly 3,900 nonfederal, short-term, acute care general hospitals in the United States, the majority—about 62 percent—were nonprofit. The rest included government hospitals (20 percent) and for-profit hospitals (18 percent). States varied—generally by region of the country—in their percentages of nonprofit hospitals (see fig. 1). For example, states in the Northeast and Midwest had relatively high concentrations of nonprofit hospitals, whereas in the South the concentration was relatively low. The five states we reviewed varied in number and ownership composition of hospitals (see table 1). For example, in California and Indiana, nonprofit hospitals accounted for over half of each state’s hospitals. In Texas, government hospitals made up the state’s largest percentage, although the distribution among nonprofit, for-profit, and government hospitals was similar; in Florida, most hospitals were either nonprofit or for-profit, while 11 percent were government. The average size of hospitals in our study, as measured by patient operating expenses, varied across the three ownership groups. (See table 2.) On average, nonprofit hospitals were larger than for-profit hospitals. The pattern held in all five states but the magnitude of the difference varied. For example, in California, nonprofit hospitals were twice as large as for-profit hospitals, whereas in Texas, this difference was smaller. Hospitals’ Qualifications for Federal and State Tax-exempt Status Hospitals may be extended a federal tax exemption by IRS if they meet the Internal Revenue Code’s qualifications for charitable organizations under section 501(c)(3). Hospitals that qualify for nonprofit status are exempt from federal income taxes and typically receive other advantages, including access to charitable donations—which are tax deductible for the individual or corporate donor—and tax-exempt bond financing.
To qualify for federal tax-exempt status, a hospital must demonstrate that it is organized and operated for a “charitable purpose,” that no part of its net earnings inure to the benefit of any private shareholder or individual, and that it does not participate in political campaigns on behalf of any candidate or conduct substantial lobbying activities. Before 1969, IRS required hospitals to provide charity care to qualify for tax-exempt status. Since then, however, IRS has not specifically required such care, as long as the hospital provides benefits to the community in other ways. This “community benefit” standard came into existence with an IRS ruling, which concluded that a hospital’s operation of an emergency room open to all members of the community without regard to ability to pay promoted health in a way consistent with other activities—such as advancement of education and religion—that qualify other organizations as charitable. In addition, the 1969 ruling identified other factors that might support a hospital’s tax-exempt status, such as having a governance board composed of community members and using surplus revenue to improve facilities, patient care, medical training, education, and research. Nonprofit hospitals may also receive exemptions from state and local income, property, and sales taxes, which, in some cases, are of greater value than the federal income tax exemption. Some states have defined community benefits for nonprofit hospitals, but their statutes vary considerably in their specificity and scope. Appendix II provides more information on statutory definitions of community benefits in the states we reviewed. Government Payments for Uncompensated Care and Other Costs Hospitals may receive direct payments from different government sources to help cover their unreimbursed costs, including those for charity care, bad debt, and low-income patients.
For example, Medicare and Medicaid make payments to hospitals that serve a disproportionate share of low-income patients under their respective disproportionate share hospital (DSH) programs. Medicare bad debt reimbursement partially reimburses hospitals for bad debt incurred for Medicare patients. Other state payments may also be available to hospitals, although their specific types vary widely. For example, hospitals may receive payments from special revenues such as tobacco settlement funds, uncompensated care pools that are funded by provider contributions, and payment programs targeted at certain services such as emergency services. (See app. III for more information on payments for uncompensated care and other costs.) Burden of Providing Uncompensated Care Varied among Hospital Groups, but Burden Was Generally Concentrated in a Small Number of Hospitals In our review of hospitals’ provision of uncompensated care in five states, we analyzed cost data from two perspectives—namely, each hospital group’s percentage of (1) total uncompensated care costs in a state and (2) patient operating expenses devoted to uncompensated care. The former relationship showed hospitals’ uncompensated care costs in dollars, aggregated by groups, whereas the latter relationship showed hospitals’ uncompensated care costs as a proportion of their operating expenses, thereby accounting for differences in hospital number and size among the hospital groups. In general, government hospitals, as a group, accounted for the largest percentage of total uncompensated care costs and devoted the largest share of patient operating expenses to uncompensated care costs. The uncompensated care cost burden was not evenly distributed within each hospital group but instead was concentrated in a small number of hospitals.
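The two analytical perspectives described above can be sketched in a few lines of code. This is a minimal illustration with made-up hospital records, not the study's actual data or methodology code; the ownership labels and dollar figures are hypothetical.

```python
# Two perspectives on uncompensated care (UC) costs, per ownership group:
# (1) the group's share of a state's total UC costs, and
# (2) UC costs as a share of the group's patient operating expenses.
# All records below are hypothetical (uc_cost and opex in $ millions).
hospitals = [
    ("government", 40.0, 200.0),
    ("nonprofit",  30.0, 400.0),
    ("nonprofit",  10.0, 300.0),
    ("for-profit", 10.0, 150.0),
]

total_uc = sum(uc for _, uc, _ in hospitals)

def group_metrics(group):
    rows = [(uc, opex) for own, uc, opex in hospitals if own == group]
    uc_sum = sum(uc for uc, _ in rows)
    opex_sum = sum(opex for _, opex in rows)
    share_of_state_total = uc_sum / total_uc   # perspective (1)
    uc_share_of_opex = uc_sum / opex_sum       # perspective (2)
    return share_of_state_total, uc_share_of_opex

for g in ("government", "nonprofit", "for-profit"):
    share, burden = group_metrics(g)
    print(f"{g}: {share:.0%} of state UC costs; {burden:.1%} of opex")
```

Note how the second measure adjusts for group size: in this toy data the nonprofit group accounts for as large a share of total costs as the government group, yet devotes a far smaller fraction of its operating expenses to uncompensated care.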
Government Hospitals Generally Accounted for the Largest Percentage of the Uncompensated Care Costs in States Reviewed Government hospitals, as a group, accounted for the largest percentage of the total uncompensated care costs in three of the five states—California, Georgia, and Texas. Nonprofit hospitals, as a group, accounted for the largest percentage of the uncompensated care costs in Florida and Indiana. For-profit hospitals, as a group, provided 20 percent or less of total uncompensated care costs in each state we reviewed. (See table 3.) In each of the five states, the nonprofit hospital groups accounted for a larger percentage of total uncompensated care costs compared with the for-profit hospital groups. This difference was due, in part, to the larger number of nonprofit hospitals and their larger size relative to the for-profit hospitals. For example, in California, the nonprofit group’s percentage of total uncompensated care costs was almost four times higher than that of the for-profit group, but this is not surprising, as nonprofit hospitals outnumbered for-profit hospitals almost 2 to 1 and were twice the size in patient operating expenses. Government Hospital Groups Generally Devoted Largest Share of Patient Operating Expenses to Uncompensated Care, but Shares Varied across States In four of the five states reviewed, government hospitals devoted substantially larger shares, on average, of their patient operating expenses to uncompensated care than did nonprofit and for-profit hospitals. (See fig. 2.) In those four states, the differences in average percentages between the government hospital groups and the nonprofit hospital groups ranged from about 4.3 percentage points in Georgia to 11.3 percentage points in Texas. In contrast, in the fifth state, Indiana, the nonprofit hospital group devoted the largest share, on average, of patient operating expenses to uncompensated care.
Between the nonprofit and for-profit hospital groups, the nonprofit hospitals’ average percentages were greater in four of the five states—ranging from 1.2 percentage points greater in Florida to 2.3 percentage points greater in Indiana. In contrast, in the fifth state, California, the nonprofit group’s average percentage was similar to that of the for-profit group. The five states varied in their hospitals’ shares of patient operating expenses devoted to uncompensated care, ranging from an average 4.1 percent for all Indiana hospitals to an average 8.3 percent for Texas hospitals. (See table 4.) Similar state-to-state variation found in other studies was due, in part, to differences in states’ proportions of uninsured populations, variation in Medicaid eligibility or payment levels, and the presence of state programs that provide health insurance to low-income uninsured individuals. Specifically, prior research showed that hospitals located in states with more uninsured individuals and hospitals in states with relatively more eligibility-restricted Medicaid programs may have higher levels of uncompensated care. Our data are consistent with these studies’ findings on the uninsured. For example, in our five-state review, Texas had the highest percentage of uninsured—25 percent—and the highest share, on average, of patient operating expenses devoted to uncompensated care, whereas Indiana had the lowest percentage of uninsured—13 percent—and the lowest average share. For Each Hospital Group, Uncompensated Care Costs Were Concentrated in a Small Number of Hospitals For each group, uncompensated care costs were concentrated in a small number of hospitals. We observed this pattern when examining the percentages of patient operating expenses devoted to uncompensated care costs as well as hospitals’ shares of total uncompensated care costs in a state. 
For the three hospital ownership groups, we ranked hospitals according to their share of patient operating expenses devoted to uncompensated care. We found that, for all three hospital groups, the top quarter of hospitals devoted substantially greater percentages of their patient operating expenses to uncompensated care, on average, compared with the bottom quarter of hospitals. (See fig. 3.) For example, in California’s nonprofit hospital group, the top quarter of hospitals devoted an average of 7.2 percent compared with 1.4 percent for the bottom quarter of hospitals. Similarly, in Florida’s government hospital group, the top quarter of hospitals devoted an average 19.6 percent compared with an average 5.2 percent for the bottom quarter of hospitals. From state to state, the difference in ranges between top and bottom quarters was also substantial. For example, in Indiana’s government group, the average share of operating expenses devoted to uncompensated care for hospitals in the top quarter was about 3 times larger than for those in the bottom quarter; whereas in California, the average share for the top quarter of hospitals was almost 13 times higher than that of the bottom quarter. When examining hospitals’ shares of total uncompensated care costs in a state, we found that uncompensated care costs remained concentrated in a disproportionately small number of hospitals. Specifically, each state’s top quarter of hospitals accounted for a disproportionately large share of the state’s uncompensated care costs. For example, in Texas, the top quarter of hospitals accounted for about 50 percent of total uncompensated care costs, yet accounted for only 18 percent of the total beds. (See table 5). Moreover, in Texas, six major government teaching institutions accounted for 34 percent of total uncompensated care costs, which amounted to over half of the contribution of the hospitals in the top quarter. This pattern was also true for California, Florida, and Georgia. 
For example, in California, 13 major teaching hospitals accounted for 42 percent of total uncompensated care costs. In contrast, in Indiana, total uncompensated care costs were distributed more evenly across a greater number of hospitals. Several factors explain which hospitals were likely to be in their group’s top and bottom quarters. For example, in our five-state analysis, we found that whether a hospital was a teaching institution was an important predictor of whether it would be in the top quarter of a state’s government hospital group. Hospitals that had teaching programs were more likely to be in the top quarter of a government hospital group. In contrast, teaching status was not an important predictor for either the nonprofit or for-profit hospital groups’ top quarter. For nonprofits, hospitals in rural areas were more likely to be in the top quarter than hospitals located in urban areas. Other factors that were outside the scope of this study, such as differences in the proportion of uninsured populations in the hospital market, may have also influenced the likelihood of a hospital’s inclusion in the top or bottom quarter. Hospitals Reported Providing a Wide Range of Other Community Benefits In addition to providing uncompensated care, hospitals may provide other services to their communities for which they are not reimbursed. In our review of hospitals’ Web sites and reports about community benefits—published documents specifying the types and value of services hospitals provide to communities—we found that, regardless of ownership status, hospitals reported providing a wide range of community benefits. Variations in the types of community benefits hospitals reported providing could be explained by differences in the community benefits hospitals chose to provide as well as by variations in the applicability, specificity, and breadth of state requirements.
Certain hospital industry guidance defines community benefits as the unreimbursed goods and services hospitals provide that address their communities’ health needs, including health education, screening, and clinic services, among others. Consistent with this industry definition, we found through our review of reports and Web sites that hospitals reported providing similar types of services, including: community health education such as parenting education, smoking cessation, fitness and nutrition, health fairs, and diabetes management; health screening services such as screening for high cholesterol and cancer; clinic services, including clinics targeted to specific groups in the community, such as indigent patients; medical education for physicians, nurses, and other health professionals; financial contributions, including cash donations and grants, to community organizations; coordination of community events and in-kind donations—such as food, clothing, and meeting room space—to community organizations; and hospital facility and other infrastructure improvements. Community health education and health screenings were listed by most of the reports and Web sites we reviewed. Clinic services, support groups, community event coordination, cash contributions to charities, and medical education for health professionals were listed by over half of the reports we reviewed. Because of the wide variation in hospitals’ reporting of community benefits, we were not able to discern clear patterns in the provision of these benefits across hospital ownership groups. The variation could be explained by differences in the community benefits hospitals chose to provide as well as by variations in the applicability, specificity, and breadth of state requirements. Specifically, the five states reviewed require all hospitals to report financial data, including data on the cost of charity care they provide.
However, as shown in table 6, California, Indiana, and Texas also have statutory requirements for nonprofit hospitals to develop plans for meeting their communities’ health needs and to report annually on the types and value of the community benefits they provide. Of these three states, only Texas and Indiana require nonprofit hospitals to report using standardized forms and have the explicit statutory authority to impose fines for noncompliance as part of the requirements. The Texas form is more specific, as it includes line-items that capture the hospitals’ unreimbursed costs associated with providing traditionally “unprofitable” health services such as trauma care and community clinics, education of medical professionals, medical research, and cash and in-kind donations made by the hospital to local charities. Indiana’s form provides nonprofit hospitals more flexibility in delineating the types and value of their community benefits but includes supplementary guidance to nonprofit hospitals about what should be considered community benefits, including financial or in-kind support of public health programs, community-oriented wellness and health promotion programs, and outreach clinics in economically depressed communities. California has no form for annual community benefit reports but requires that hospitals classify the services provided into broad, statutorily defined categories, including cash and in-kind donations to public health programs, efforts to contain health care costs and enhance access, and services that help maintain a person’s health. According to state officials or state hospital association representatives in the five states we reviewed, for-profit and government hospitals are not required to report on the community benefits they provide outside of the requirements to report financial data, including data on the cost of charity care they provide.
However, as we found through our review, some of these hospitals report publicly—for promotional purposes—on the community benefits they provide, either through published reports or by posting general information on their Web sites. Moreover, the three states with community benefit reporting requirements—California, Indiana, and Texas—conduct limited monitoring of nonprofit hospitals’ community benefit reports. For example, according to officials from state agencies, none of the three states conducts audits of nonprofit hospitals’ self-reported community benefits information, although Texas reviews the reports to ensure that “reasonable” types of services are listed as community benefits. In addition, these states do not routinely use the data collected through community benefit reports to review hospitals’ tax-exempt status. Concluding Observations Our comparison of the hospital ownership groups’ uncompensated care costs, as a percentage of patient operating expenses, was instructive. Differences between the nonprofit and for-profit groups were often small when compared with the substantial differences between the government group and the other two groups. Moreover, the burden of uncompensated care costs was not evenly distributed among hospitals, which meant that a small number of nonprofit hospitals accounted for substantially more of the uncompensated care burden than did others receiving the same tax preference. As for the other community benefits hospitals reported providing, we were not able to discern a clear distinction among the government, nonprofit, and for-profit hospital groups. Hospitals in the five states reported conducting a variety of activities, which the hospitals themselves considered community benefits. We were unable to assess the value of these benefits or make systematic comparisons between hospitals or across states. 
These observations illustrate a larger point that I and others raised at the hearing last month—namely, that current tax policy lacks specific criteria with respect to tax exemptions for charitable entities and detail on how that tax exemption is conferred. If these criteria are articulated in accordance with desired goals, standards could be established that would allow nonprofit hospitals to be held accountable for providing services of benefit to the public commensurate with their favored tax status. Mr. Chairman, this concludes my prepared statement. I will be happy to answer questions you or the other Committee Members may have. Contact and Acknowledgments For further information regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101. Kristi Peterson, Thomas Walke, Joanna Hiatt, Kelly DeMots, Mary Giffin, Emily Rowe, Craig Winslow, and Hannah Fein contributed to this statement. Appendix I: Scope and Methodology To examine the provision of uncompensated care by the three hospital ownership groups, we obtained 2003 uncompensated care data from five states—California, Florida, Georgia, Indiana, and Texas. We obtained all other data, such as cost-to-charge ratios, patient operating expenses, and all descriptive statistics, from 2002 and 2003 Medicare hospital cost reports. We selected the five states because they represented geographically diverse areas; had a number of hospitals in each ownership group sufficient to make comparisons; and collected hospital-specific uncompensated care data, which not all states maintain. The 2003 state uncompensated care data and 2002 and 2003 Medicare hospital cost reports were the most recent available at the time of our analysis. We also interviewed health officials from all five states as well as officials from the Centers for Medicare & Medicaid Services (CMS), the American Hospital Association, and the Federation of American Hospitals. 
We limited our analysis to nonfederal, short-term, acute care general hospitals for which a cost report was available. This analysis included critical access hospitals that provide general acute care. Our study included about 92 percent of nonfederal, short-term, acute care hospitals in the five states. We defined uncompensated care as the sum of charity care and bad debt costs as reported in the state data. To determine uncompensated care costs, we multiplied uncompensated care charges by a hospital-specific cost-to-charge ratio. Although specific definitions of charity care varied, states generally defined it as charges for patients deemed unable to pay all or part of their bill, less any payments made by, or on behalf of, that specific patient. States generally defined bad debt as the uncollectible payment that a patient is expected to, but does not, pay. Our definition of uncompensated care does not include any contractual allowances or cost shortfalls. In addition, we did not subtract any charity care-specific block grants or donations a hospital may receive, as this information was not available for all states. We analyzed uncompensated care cost data from two perspectives—namely, each hospital ownership group’s percentage of (1) total uncompensated care costs in a state, and (2) average patient operating expenses devoted to uncompensated care. To examine factors that could explain differences in the provision of uncompensated care by hospital ownership groups, we examined certain hospital characteristics including a hospital’s size, teaching status, and location. We used patient operating expenses to measure hospital size. For teaching status, we defined major teaching hospitals as those hospitals having an intern/resident-to-bed ratio of 0.25 or more and minor teaching hospitals as those having an intern/resident-to-bed ratio greater than 0 and less than 0.25.
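The cost conversion and teaching-status definitions above can be sketched as follows. This is an illustrative sketch only; the function names and the example figures are hypothetical, not taken from the study's data.

```python
def uncompensated_care_cost(uc_charges, cost_to_charge_ratio):
    """Convert uncompensated care charges to costs by applying a
    hospital-specific cost-to-charge ratio."""
    return uc_charges * cost_to_charge_ratio

def teaching_status(interns_and_residents, beds):
    """Classify a hospital by its intern/resident-to-bed ratio:
    0.25 or more is major teaching; greater than 0 but less than
    0.25 is minor teaching; 0 is non-teaching."""
    ratio = interns_and_residents / beds
    if ratio >= 0.25:
        return "major teaching"
    elif ratio > 0:
        return "minor teaching"
    return "non-teaching"

# Hypothetical example: $10 million in UC charges at a 0.45 ratio
# yields about $4.5 million in UC costs.
print(uncompensated_care_cost(10_000_000, 0.45))
print(teaching_status(60, 200))   # ratio 0.30 -> major teaching
```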
We defined a hospital as urban if it was located in a metropolitan statistical area and as rural if it was not located in a metropolitan statistical area. We supplemented our analysis with a review of the literature to determine other factors that could explain differences in the provision of uncompensated care by hospital ownership groups. We assessed the reliability of the hospital Medicare cost reports and the reliability of state uncompensated care cost data from California, Florida, Georgia, Indiana, and Texas in several ways. First, we performed tests of data elements. For example, we examined the values for uncompensated care costs and patient operating expenses to determine whether these data were complete and reasonable. We also verified that the dollar amount of uncompensated care in the 2003 data was consistent with the amount in 2002. Second, we reviewed existing information about the data elements. For example, we compared descriptive statistics we calculated from the Medicare hospital cost reports with statistics published by CMS. Third, we interviewed state and agency officials knowledgeable about the data in our analyses and knowledgeable about hospital uncompensated care costs. We determined that CMS and all five states performed quality assurance tests on the data before releasing them. Overall, we determined that the data we used in our analyses were sufficiently reliable for our purposes. To examine hospitals’ provision of community benefits other than uncompensated care, we reviewed 21 hospital reports and Web sites for information about such benefits in five states. Specifically, we reviewed 12 publicly available reports about the community benefits provided by nonprofit and for-profit hospitals and 3 reports for for-profit hospital systems representing multiple hospitals.
We also reviewed 6 government hospitals’ Web sites to determine the extent to which they publicized the provision of services that are generally considered community benefits. We also examined laws in five states regarding community benefit requirements for nonprofit hospitals, reviewed the literature, and interviewed state officials and hospital association representatives. We conducted our work from February 2005 through May 2005 in accordance with generally accepted government auditing standards. Appendix II: Statutory Definitions of Community Benefits in the Five States Reviewed Table 7 summarizes the statutory definitions of community benefits for nonprofit hospitals in the states we reviewed. We found that the statutes vary considerably in their specificity and scope. In addition, of the five states we reviewed, only the Texas statute contains an explicit link between the statutory definition of community benefits and hospitals’ qualifications for state tax exemptions. Appendix III: Government Payments for Uncompensated Care and Other Unreimbursed Costs Hospitals may receive direct payments from different government sources to help cover their unreimbursed costs. Such payments may include special Medicare and Medicaid payments, known as disproportionate share hospital (DSH) payments, Medicare bad debt reimbursement, and other state payments. Medicare DSH: The Medicare DSH adjustment provides payments to hospitals that serve a disproportionate share of low-income patients. The Congress mandated this adjustment in 1986 to address the concern that hospitals that serve such patients have higher Medicare costs per case because they have higher overhead and labor costs and their patients are in poorer health with more complications and secondary diagnoses. Hospitals qualify for the Medicare DSH adjustment based on their low-income patient share.
The low-income patient share is computed as the percentage of a hospital’s Medicare inpatient days attributable to patients that are eligible for both Medicare part A and Supplemental Security Income plus the percentage of total inpatient days attributable to patients eligible for Medicaid, but not Medicare part A. For hospitals that qualify for a DSH adjustment, their actual adjustment is based on several factors, including the number of acute care beds, number of patient days for low-income patients, and location (rural or urban). See table 8 for Medicare DSH payments in 2003 to the hospitals in the selected states we analyzed. Medicaid DSH: The Medicaid statute requires that states make DSH adjustments to the payment rates of certain hospitals treating large numbers of low-income and Medicaid patients. The Medicaid DSH adjustment was established by the Congress in 1981; the statute sets broad guidelines for hospital eligibility to receive Medicaid DSH and for the methods used to compute the amount of payment. States have discretion in designating DSH hospitals and calculating adjustments for them. States also vary in terms of program rules and resource levels as well as the degree to which they target payments to different types of hospitals. Medicaid DSH is the largest source of financial support for hospital uncompensated care and is funded jointly by the states and the federal government. State approaches to financing the state portion of Medicaid DSH include obtaining funds from hospitals through provider taxes or intergovernmental transfers in order to establish the state’s contribution required to obtain the federal match for Medicaid DSH funding. Therefore, it is not always possible to determine what portion of Medicaid DSH payments to individual hospitals is the net additional payment to the hospital.
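The Medicare DSH low-income patient share computation described above can be sketched in code. The day counts below are hypothetical, and the sketch captures only the two-fraction sum, not the additional adjustment factors (beds, location) that determine the actual payment.

```python
def low_income_patient_share(ssi_medicare_days, medicare_days,
                             medicaid_non_medicare_days, total_days):
    """Sum of two fractions: (Medicare inpatient days for patients
    eligible for both Medicare part A and SSI, over all Medicare
    inpatient days) plus (inpatient days for patients eligible for
    Medicaid but not Medicare part A, over all inpatient days)."""
    return (ssi_medicare_days / medicare_days
            + medicaid_non_medicare_days / total_days)

# Hypothetical hospital: 1,500 of 10,000 Medicare days were
# SSI-eligible, and 6,000 of 40,000 total inpatient days were
# Medicaid-only.
share = low_income_patient_share(1_500, 10_000, 6_000, 40_000)
print(f"{share:.1%}")  # 30.0%
```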
Medicare bad debt reimbursement: Medicare partially reimburses acute care hospitals for bad debts resulting from Medicare beneficiaries’ nonpayment of deductibles and copayments after providers have made reasonable efforts to collect unpaid amounts. If a hospital can document that a Medicare patient is indigent, the hospital can then forgo collection efforts from the patient. Medicare pays hospitals 70 percent of their reimbursable bad debts, except critical access hospitals, for which it pays 100 percent of their reimbursable bad debts. See table 9 for total Medicare bad debt reimbursements in 2003 to the hospitals in the selected states we analyzed. Other state sources: Other state sources of payment to hospitals for uncompensated or unreimbursed care vary widely, and may include special revenues such as tobacco settlement funds, uncompensated care pools that are funded by provider contributions, and payment programs targeted at certain services such as emergency services. For example, Massachusetts has used a portion of the state’s tobacco settlement fund to help cover uncompensated care costs.
Before 1969, IRS required hospitals to provide charity care to qualify for tax-exempt status. Since then, however, IRS has not specifically required such care, as long as the hospital provides benefits to the community in other ways. Seeking a better understanding of the benefits provided by nonprofit hospitals, Congress requested that GAO examine whether nonprofit hospitals provide levels of uncompensated care and other community benefits that are different from other hospitals. This statement focuses on, by ownership group, hospitals' (1) provision of uncompensated care, which consists of charity care and bad debt, and (2) reporting of other community benefits. The hospital ownership groups were (nonfederal) government, nonprofit, and for-profit. To compare the three hospital ownership groups, GAO obtained 2003 data from five geographically diverse states with substantial representation of the three ownership groups in each state. GAO analyzed cost data from two perspectives--each hospital group's percentage of (1) total uncompensated care costs in a state and (2) patient operating expenses devoted to uncompensated care. Government hospitals generally devoted substantially larger shares of their patient operating expenses to uncompensated care than did nonprofit and for-profit hospitals. The nonprofit groups' share was higher than that of the for-profit groups in four of the five states, but the difference was small relative to the difference found when making comparisons with the government hospital group. Further, within each group, the burden of uncompensated care costs was not evenly distributed among hospitals but instead was concentrated in a small number of hospitals. This meant that a small number of nonprofit hospitals accounted for substantially more of the uncompensated care burden than did others receiving the same tax preference. 
Hospitals in the five states--nonprofit, for-profit, and government hospitals--reported providing a variety of services and activities, which the hospitals themselves defined as community benefits. Community benefits include such services as the provision of health education and screening services to specific vulnerable populations within a community, as well as activities that benefit the greater public good, such as education for medical professionals and medical research. GAO was unable to assess the value of these benefits or make systematic comparisons between hospitals or across states. These observations illustrate a larger point--namely, that current tax policy lacks specific criteria with respect to tax exemptions for charitable entities and detail on how that tax exemption is conferred. If these criteria are articulated in accordance with desired goals, standards could be established that would allow nonprofit hospitals to be held accountable for providing services and benefits to the public commensurate with their favored tax status.
Background Of the approximately 167.7 billion pounds of raw milk produced in the United States in 2000, about 55.5 billion pounds were processed into fluid milk products—such as whole, 2-percent, 1-percent, and skim milk; flavored milks; and buttermilk—that yielded approximately $22 billion in retail sales. The rest of the raw milk was used to produce manufactured products, such as butter, cheese, ice cream, powdered milk, and yogurt. In the United States, a complex pricing system has evolved that affects prices paid for raw milk used to produce processed milk (fluid drinking milk) and manufactured dairy products, such as cheese and butter. Various milk regulators—USDA, some states, and the Northeast Dairy Compact (NEDC)—establish minimum prices that must be paid for raw milk to help stabilize the milk supply. In addition to USDA, the states, and the NEDC, other entities affect milk prices, including cooperatives, which may provide services to farmers such as collecting farmers’ milk; milk processors, which convert raw milk to fluid milk; manufacturers of dairy products; and retailers, which stock and sell dairy products to consumers. Each of these groups contributes to the value of fluid milk and dairy products sold at the retail level, and each receives a portion of the difference between the prices that farmers receive and the retail price. Federal and State Dairy Programs USDA's milk marketing and milk price support programs, as well as some states' dairy programs, are intended to ensure an adequate supply of milk by establishing milk prices and other milk marketing rules, which, in turn, are intended to stabilize milk marketing conditions and thus assist individual farmers as well as consumers. In effect, these programs ensure that farm prices do not fall below a minimum level and provide a safety net for individual farmers who lack the market power of other entities, such as wholesale milk processors.
Currently, about 70 percent of the milk produced in the United States is regulated under the federal milk marketing order program created in 1937 and administered by USDA. Under this program, on the basis of national dairy market information, USDA sets the minimum prices that must be paid by processors for raw fluid grade milk in specified marketing areas, or orders. These prices vary by the type of dairy product for which the milk is used; the minimum price for raw milk used for fluid drinking purposes also varies by location. Even though USDA sets minimum prices for raw milk, buyers of milk can and sometimes do pay farmers prices in excess of the established minimums—prices known as “over-order premiums.” Market forces play a role in determining any such premiums. Under the federal milk marketing order program, USDA has a classified pricing system for setting minimum prices, on a monthly basis, for milk that is based upon its intended use, as shown in table 2. Federal milk marketing order class prices are determined by using product price formulas that compute milk component values based on wholesale dairy product prices. For example, Class III formulas use monthly average butter, cheese, and dry whey prices to determine values for butterfat, protein, and other solids, respectively. Class IV formulas use monthly average butter and nonfat dry milk prices to determine values for butterfat and nonfat solids, respectively. The Class II price is determined by adding an amount—a Class II differential—to the Class IV price, while the Class I price is determined by adding a Class I differential to the higher of the Class III or IV price. Class I prices can vary from one milk marketing order to another. 
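The classified pricing relationships described above can be sketched as follows. The differentials and the Class III and IV prices in the example are hypothetical; in practice those prices come from the product price formulas based on wholesale butter, cheese, dry whey, and nonfat dry milk prices.

```python
def class_ii_price(class_iv, class_ii_differential):
    """Class II price: Class IV price plus a Class II differential."""
    return class_iv + class_ii_differential

def class_i_price(class_iii, class_iv, class_i_differential):
    """Class I price: the higher of the Class III or Class IV price,
    plus a Class I differential (which varies by marketing order)."""
    return max(class_iii, class_iv) + class_i_differential

# Hypothetical monthly prices per hundredweight ($/cwt):
iii, iv = 11.42, 10.80
print(f"{class_ii_price(iv, 0.70):.2f}")       # 11.50
print(f"{class_i_price(iii, iv, 3.25):.2f}")   # 14.67
```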
Dairy farmers selling raw milk within a federal milk marketing order receive an average, or “blend,” price that is the weighted average of the prices of Class I through IV milk, with the weights determined by the amount of milk sold for each class of use in each marketing order. The average price farmers receive, therefore, depends in part on the extent to which the total raw milk supply in a specific order is used to make fluid milk as opposed to the three classes of manufactured products. Dairy farmers located in one milk marketing order sometimes ship their milk to another order to obtain a higher price. Depending on the amount of milk shipped, a producer may qualify for a receiving order’s blend price. If the producer meets the receiving milk marketing order's blend price requirements, not only can the milk shipped qualify for the blend price—all of that producer's milk can qualify for the blend price. However, farmers must consider whether the cost of transporting a sufficient amount of milk to qualify for the receiving order's blend price outweighs the benefit of receiving a higher blend price. Some states, such as California, Maine, Nevada, New York, Pennsylvania, and Virginia, have established their own minimum farm-level milk pricing programs that cover all or portions of their states. These states have established commissions or boards to perform functions similar to those of USDA. For example, Virginia’s milk commission, created in 1934, establishes monthly producer prices to ensure dairy farmers an adequate return on their investment and to preserve market stability. Similarly, Nevada’s dairy commission, established in 1955, sets minimum prices for raw milk sold to processing facilities located within that state. The federal milk price support program, established in 1949, also influences farm-level prices.
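The blend price described above is simply a use-weighted average of the four class prices. A minimal sketch, assuming hypothetical class prices and utilization volumes:

```python
# Blend price: the weighted average of Class I-IV prices, weighted by
# the volume of milk sold in each class of use. All inputs below are
# hypothetical examples, not figures from any actual marketing order.

def blend_price(prices, volumes):
    """prices: per-cwt prices for Classes I-IV; volumes: cwt per class."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

# An order where 45% of the milk goes to Class I (fluid) use:
fluid_heavy = blend_price([16.35, 13.20, 13.10, 12.50], [45, 10, 30, 15])
print(round(fluid_heavy, 2))  # 14.48
```

Shifting the same volumes toward the manufacturing classes lowers the result, which is why a higher share of fluid use raises the average price farmers in an order receive.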
This program supports farm-level prices by providing a standing offer from USDA to purchase butter, cheese, and nonfat dry milk at specified prices. The prices offered for these dairy products are intended to provide sufficient revenue so that dairy product manufacturers can pay farmers, on average, a legislatively mandated support price. This program is intended to make the support price a floor price for raw milk used for manufacturing purposes, and it is unlikely that manufactured product prices will fall below the floor for very long. Because the price for raw milk used for fluid purposes is based, in part, on the price of raw milk used for manufacturing purposes, the price support program influences the price that farmers receive for raw milk used for fluid purposes as well.

Dairy Compacts

In addition to the federal and state milk marketing order programs that set minimum milk prices, in 1996, the Congress authorized the Northeast Interstate Dairy Compact for the six New England states. The Compact supplements the federal milk marketing order and state programs by setting the monthly minimum price to be paid for raw milk used for fluid milk marketed in the six-state area. In July 1997, the Compact set a minimum price of $16.94 per hundredweight for raw milk used for Class I, or fluid milk, and that minimum price has not changed. In months when the federally set minimum price for Class I milk for the Northeast Milk Marketing Order falls below the Compact price, the Compact price takes effect. In other months, when the federally set Class I price is higher than the Compact Class I price, the federally set Class I price takes effect. Since the Compact was established, federally set minimum prices for the area of the Compact that is subject to federal milk marketing regulation have ranged from $13.50 to $20.50 per hundredweight but have usually been below the Compact price of $16.94 per hundredweight.
In those months when the Compact Class I price is higher than the federally set Class I price, processors having sales of fluid milk in the six NEDC states are required to pay a monthly over-order obligation per hundredweight equal to the difference between $16.94 and the federally set Class I price. Processors multiply the volume of their total fluid milk sales in the six NEDC states, in hundredweight, by this difference and pay the resulting amount to the commission that administers the Compact. After deducting administrative fees and other expenses, the commission distributes the balance of the proceeds in accordance with the amount of milk produced that was actually used for fluid milk, as opposed to cheese or other manufactured products. The commission makes disbursements to farmer cooperatives and milk handlers, located both within and outside the NEDC states, who then make individual payments to farmers based on their production. Thus, dairy farmers from other states, such as New York, that supply raw milk used to make fluid milk that is sold in the Compact states also benefit from the Compact’s minimum prices. The 1996 farm bill provided the Compact with considerable flexibility to establish regulations to carry out its intended purpose. The legislation authorized the establishment of a commission composed of delegates from the six NEDC states to administer the Compact. The state delegates are appointed by each of their respective states and include farmer, milk processor, and consumer representatives. In addition to being empowered to establish Compact prices, the commission may investigate costs associated with producing and selling milk; examine the economic forces affecting producers, including trends in production, consumption, and the financial conditions of dairy farmers; and prepare and provide periodic reports to the states regarding its efforts.
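The over-order obligation described above is a simple difference-times-volume calculation. A sketch, assuming a hypothetical federal Class I price and sales volume:

```python
# Compact over-order obligation: in months when the federal Class I
# price falls below the Compact's $16.94/cwt floor, a processor owes
# the difference on each hundredweight of fluid milk it sells in the
# six NEDC states. The example inputs below are hypothetical.
COMPACT_CLASS_I = 16.94  # dollars per hundredweight

def over_order_obligation(federal_class_i, fluid_sales_cwt):
    per_cwt = max(0.0, COMPACT_CLASS_I - federal_class_i)
    return per_cwt * fluid_sales_cwt

# Federal Class I at $15.44 and 2 million cwt of fluid sales:
print(round(over_order_obligation(15.44, 2_000_000)))  # 3000000
# When the federal price exceeds the Compact price, nothing is owed:
print(over_order_obligation(17.50, 2_000_000))  # 0.0
```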
While the commission is required to report annually to USDA, USDA is not required to investigate or report on the commission’s efforts. States in other regions of the country, including some southern states, are considering the adoption of similar compact arrangements. The proposed Dairy Consumers and Producers Protection Act, a bill that was introduced in the Congress in May 2001, if enacted, would reauthorize the NEDC. The bill would also allow additional states to enter the NEDC. In addition, it would establish a southern dairy compact consisting of 17 states, as well as a Pacific Northwest dairy compact and an Intermountain dairy compact, each consisting of 3 states. The proposed bill, like the 1996 farm bill, would provide the compact commissions with broad flexibility to carry out their objective of ensuring the continued viability of the dairy industry in their states. Therefore, it cannot be known in advance whether commissions for these new compacts would regulate milk pricing in their respective states in a manner similar to the way that the NEDC commission has regulated milk pricing.

Other Factors Affecting Milk Prices

In addition to federal and state programs and the NEDC, other entities affect prices paid for milk at the wholesale and retail levels. For example, about 83 percent of all raw milk produced in the United States is marketed through dairy cooperatives that are owned by farmer-members. Cooperatives perform services for their members and buyers of milk such as (1) transporting milk among different milk producing areas, (2) scheduling milk deliveries, (3) testing milk, and (4) paying members for their marketings. Costs for these services are paid by processors and dairy product manufacturers that purchase milk from the cooperatives at prices above federally specified minimum prices and then process or manufacture, package, and distribute fluid milk and manufactured dairy products to retailers.
The costs that processors and manufacturers incur in purchasing raw milk from farmers or cooperatives and in processing or manufacturing, packaging, and distributing fluid milk and manufactured dairy products are included in prices charged to retailers for these products. Finally, the prices that retailers set for selling milk and dairy products are affected by the retailers’ operating costs, such as labor, rent, and utilities; their strategies for pricing milk and manufactured dairy products; and the demand for those products.

Isolating the Intraregional Impacts of the NEDC Is Difficult

Although dairy sector indicators that we examined changed after the NEDC’s milk pricing regulations took effect in July 1997, it is difficult to determine how much of the change is attributable to the Compact. Such a determination is difficult because the Compact’s impact on these indicators cannot be easily isolated from the effects of other factors. For example, while retail milk prices increased by 15 to 20 cents per gallon in July 1997, and there is general agreement that the Compact contributed to these increases, the lack of an economic model that fully accounts for the influences of other factors, such as costs for processing fluid milk, makes it difficult to determine how much of that price increase can be attributed to the NEDC. Similarly, while the Compact has resulted in payments being made to dairy farmers that reflect the difference between USDA’s marketing order minimum prices and the NEDC’s minimum price, it is difficult to determine whether some portion or all of these payments would have been made to dairy farmers anyway, depending on market conditions.
Although economic reasoning suggests that the Compact would be likely to cause increased milk production and reduced fluid milk consumption in the six NEDC states, analyses of relevant data on dairy farm structure, milk production, and milk consumption show little change in historic trends after the Compact’s implementation. Retail milk prices increased by as much as 20 cents per gallon immediately following the NEDC’s establishment—which is an amount comparable to the immediate increase in the minimum farm-level price for raw milk to be used for and sold as fluid milk in the six New England states when the NEDC’s price regulations became effective. For example, the NEDC minimum price of $1.46 per gallon was 18 cents higher than USDA’s June 1997 Class I price for Boston of $1.28 per gallon and 26 cents higher than USDA’s July 1997 Class I price of $1.20 per gallon. While this might appear to be a substantial increase, when compared with USDA’s average Class I price of $1.41 per gallon during the prior year, the NEDC price did not actually represent such a large increase. However, without a model of farm-to-retail price transmission that accounts for how quickly and how fully farm-level price changes are passed on to wholesale and retail levels, we cannot estimate how much of the retail price change was due to the Compact. Furthermore, while retail milk prices in Boston and other selected cities in the NEDC states remain at the higher levels experienced since the Compact took effect, national average retail prices have also increased, but at rates lower than in the NEDC states. Even so, it is not certain what portion of the retail price increase in the NEDC states is attributable to the Compact, given that both the retail and farm-level prices for milk have fluctuated since July 1997. Some portion of the price increase could also be due to other factors, such as changes in the costs for processing or retailing milk, marketing strategies, or consumer demand. 
In addition, it is difficult to estimate the extent to which fluid milk processors would have paid more than the minimum farm-level price for milk without the NEDC; that is, we do not know the extent to which the NEDC price substituted for market-driven over-order premiums. Several studies analyzing the NEDC’s impact on retail milk prices concluded that the NEDC has increased prices. However, the amount of the increase attributed to the NEDC varies from study to study, depending on assumptions made by the different researchers and the time periods that they examined. For example, estimates ranged from a low of 2.7 cents to as much as 20 cents per gallon. Data on farm income are limited, and while dairy farmers have received NEDC payments, it is unclear to what extent these payments replaced market-driven over-order premiums that farmers might have been paid in the absence of the Compact. We estimate that through calendar year 2000, the NEDC payments made to dairy farmers in the six NEDC states totaled about $99 million, assuming that all dairy farmers located in these states had their milk processed at fully regulated NEDC plants. The NEDC payments that an average dairy farmer in one of the six states would have received would have fluctuated widely from month to month and from year to year, however, depending on the difference between USDA’s Class I price and the Compact price of $16.94 per hundredweight and the percentage of milk used for fluid milk in the NEDC states. For example, in 1998 the average NEDC over-order producer price payment was 67 cents per hundredweight. This would have provided dairy farmers supplying raw milk used to produce fluid milk sold in the NEDC states about 25 cents per hundredweight, based on the percentage of raw milk used for fluid milk. 
We estimate that these payments provided the average dairy farmer in the six NEDC states about $3,892 above the minimum amount that the farmer would have received in 1998 had USDA’s Class I price of $16.78 been in effect. In 2000, the average NEDC over-order producer price payment was $2.14 per hundredweight. This amount would have provided a farmer supplying raw milk used to produce fluid milk sold in the NEDC states about 91 cents per hundredweight, based on the percentage of raw milk used for fluid milk. These payments provided an average dairy farmer in the six NEDC states about $15,301 above the minimum amount that the farmer would have received in 2000 had USDA’s Class I price of $14.80 been in effect. These estimates are comparable to data developed by the Compact commission, which indicate that dairy farmers in the six NEDC states and New York received over-order payments totaling about $146 million from July 1997 through June 2001. In particular, the NEDC data indicate that about 4,200 dairy farmers, including 1,300 in New York, received average annual payments of about $9,800. Whether these payments were sufficient to alter the financial health of dairy farmers supplying raw milk used to produce fluid milk sold in the NEDC states is difficult to determine, however. USDA data are inconclusive as to whether the Compact had a positive impact on NEDC dairy farmer income, while NEDC analyses conclude that the Compact stabilized and enhanced farmer income. A limited number of studies have been conducted on the Compact’s impact on farm income. In its 1998 report, the Office of Management and Budget (OMB) estimated a 6- to 8-percent increase in farm income from July through December 1997. In addition, an economist at the University of Vermont modeled the effect on Vermont dairy farmers of establishing a floor for Class I prices and concluded that stabilizing prices by having a price floor could have a positive impact on dairy farmer income. 
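The per-farmer payment figures above follow a simple chain: the over-order payment per hundredweight is scaled by the share of raw milk used for fluid purposes, then applied to a farmer's annual production. A sketch using the 2000 over-order payment of $2.14 per hundredweight; the fluid-use share and production volume below are hypothetical illustrations, not figures from this report:

```python
# How a Compact over-order payment reaches an individual farmer.
# The $2.14/cwt payment matches the 2000 figure cited above; the
# fluid-use share (42.5%) and annual production (16,800 cwt) are
# hypothetical values chosen only to illustrate the arithmetic.

def farmer_payment(over_order_per_cwt, fluid_use_share, annual_cwt):
    return over_order_per_cwt * fluid_use_share * annual_cwt

per_cwt = 2.14 * 0.425      # payment per cwt of the farmer's milk
annual = farmer_payment(2.14, 0.425, 16_800)
print(round(per_cwt, 2), round(annual))  # 0.91 15280
```

Because both the over-order payment and the fluid-use share fluctuate month to month, the resulting per-farmer payments vary widely from year to year.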
The NEDC’s impact on farm structure is unclear. The number of dairy farms decreased, and the average size of herds increased, both prior to and following the NEDC’s establishment in both the Compact states and the rest of the country. For example, the number of licensed dairy farms in the six NEDC states decreased by 32 percent between 1992 and 2000, from 4,079 to 2,772, while the number of licensed dairy farms in the rest of the country decreased by 37 percent during the same period, from 127,456 to 80,253. Regarding herd size, the average herd in the six NEDC states increased 36 percent, from 58 to 79 milk cows, between 1992 and 2000. In the rest of the United States, the average herd increased 57 percent during the same period, from 56 to 88 milk cows. According to USDA, this decline in the number of farms, along with the increase in herd size, most likely reflects fundamental changes in dairy farming caused by such factors as technological and genetic advances. Although economic reasoning suggests that higher farm-level milk prices would result in increased raw milk production, we have no basis on which to estimate the specific impact that the Compact has had on milk production in the six NEDC states. Data on milk production show an increase in total milk produced and milk produced per dairy cow, but these trends began prior to the Compact’s establishment, making it difficult to estimate the specific impact of the NEDC. Farmers in the NEDC states increased their total milk production by 2.9 percent, from 4.5 billion pounds in 1993 to 4.7 billion pounds in 2000, while farmers in the rest of the nation increased production by 10.3 percent, from 148 billion pounds to 163.3 billion pounds during the same period. The average amount of milk produced per cow in the NEDC states increased by about 11.6 percent during the same period, from 15,633 pounds to 17,440 pounds. 
Milk production per cow in the rest of the United States increased by about 15.9 percent during this period, from 15,726 pounds to 18,226 pounds. Studies of the NEDC’s impact on milk production, including OMB’s study and an analysis by researchers at the University of Vermont, estimated that the Compact has resulted in a slight increase in milk production in the NEDC states. We cannot estimate the specific impact that the Compact has had on fluid milk consumption in the six NEDC states, in part because we cannot estimate how much of the retail price change since July 1997 has been due to the Compact. Data on fluid milk consumption show a decrease in per capita milk consumption, which reflects trends both within the NEDC states and in the rest of the country that began prior to the NEDC’s establishment. Per capita consumption of fluid milk was higher in USDA’s New England Milk Marketing Order than in some other USDA marketing orders prior to the Compact. Even so, consumption of fluid milk had been slowly declining, both in that marketing order and in the rest of the country, as the consumption of other fluid beverages increased and as the population aged. For example, annual per capita milk consumption for the New England Milk Marketing Order declined by 4 percent from 1993 to 1999, or from about 233 to 223 pounds. Similarly, annual per capita milk consumption for all the other USDA marketing orders declined 6 percent from 1993 to 1999, or from about 214 to 202 pounds. In its 1998 study, OMB’s analysis of the NEDC’s impact on fluid milk consumption during the last half of calendar year 1997 showed a 0.5-percent decline; while in a July 2000 study, an economist at Pennsylvania State University estimated that the NEDC had no appreciable impact from mid-1997 through 1999. Additional details about the intraregional impacts of the NEDC are included in appendix III. 
The NEDC Has Not Increased Net Federal Costs for the Milk Price Support Program, but Its Impact on a Major Nutrition Assistance Program Is Less Certain

According to USDA, the NEDC has not resulted in a net increase in the federal government’s costs for its milk price support program, while it is not certain whether it has affected federal costs for one of its major nutrition assistance programs. The Compact commission must, by law, compensate USDA for any estimated increase in costs to its price support program that are caused by the Compact, and the NEDC commission has done so. Regarding its nutrition assistance programs, USDA estimates that federal costs to its largest nutrition assistance program—the Food Stamp Program—could have increased, but federal costs for its other nutrition assistance programs have likely not increased. As required by the 1996 farm bill, when the rate of increase in milk production in the NEDC states exceeds the rate of increase in national milk production, the Compact commission must compensate USDA for any additional costs to the milk price support program that result, and the commission has done so. According to USDA officials, the NEDC did not result in a rate of increase in production greater than the national rate of increase in 1997, during the first 6 months of the Compact. USDA calculated that in 1998, milk production in the NEDC states was 1.8 percent greater than the average of the prior 2 years, compared with a national increase of 1.3 percent. The NEDC compensated USDA $1.8 million for this higher rate of increase in production. USDA calculated that in 1999, milk production in the six states was 3.6 percent greater than the average of the prior 2 years, compared with a national increase of 3.2 percent. The NEDC compensated USDA $1.4 million for this higher rate of increase in production.
USDA calculated that milk production in the six NEDC states increased by 0.1 percent in 2000, compared with a national increase of 5.1 percent. Thus, compensation was not required. USDA is not certain whether the Compact has affected federal costs for the Food Stamp Program, which is USDA’s largest nutrition assistance program. According to USDA, if (1) retail milk prices in the NEDC states increased sufficiently to increase national average retail milk prices, and (2) the Compact was the cause of the full amount of the price increases in the NEDC states, then the Compact might have increased federal Food Stamp Program costs because program benefits are sensitive to the national average retail milk price. Benefit levels and federal Food Stamp Program funding have increased since July 1997, because of, among other things, increased national average retail milk prices. However, according to USDA, it is difficult to establish the Compact’s impact on retail milk prices in the six NEDC states, and thus it is difficult to establish the Compact's role in affecting national average retail milk prices. If the Compact caused benefit levels to increase to the next dollar, USDA estimates that it increased annual federal program costs by about $60 million. If the Compact did not cause benefit levels to increase to the next dollar, any increased retail milk prices caused by the Compact would have been absorbed by program participants in the NEDC states. Regarding USDA’s other major nutrition assistance programs, such as the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and the National School Breakfast and Lunch Programs, USDA has concluded that federal costs have not increased as a result of the NEDC. Federal WIC program costs have not increased because WIC is a discretionary grant program.
Federal school breakfast and lunch program costs have not increased because the level of federal reimbursements is based on the average price of a large variety of food items, which is relatively insensitive to changes in the retail price of milk. Because federal funding for WIC and school breakfast and lunch programs has not increased, the state or local agencies or organizations that provide program benefits, or program participants, have had to absorb any increase in retail milk prices caused by the NEDC. Although the 1996 farm bill does not require the Compact to do so, the commission compensates the six states for the increased milk costs incurred by the WIC and school programs that are estimated to be attributable to the NEDC. Through December 2000, the NEDC provided state WIC programs a total of $3.8 million, and the schools a total of $662,606. Should compacts be expanded to include additional states, smaller increases in retail milk prices within the compact states would be necessary to increase the national average retail milk price and, hence, the level of Food Stamp Program benefits and federal funding. For example, USDA estimates that should retail milk prices increase by about 20 cents per gallon—an amount similar to the immediate increase in the NEDC states when the commission established an NEDC price—within a compact of states that represents 50 percent of the nation’s fluid milk sales, monthly food stamp allotments would increase by, on average, $1 to $2. USDA estimates that these increases in monthly food stamp allotments would increase annual Food Stamp Program costs by $60 to $120 million. Nonfederal costs for USDA's other major nutrition assistance programs, including WIC and the school programs, could also increase should compacts be expanded.
While two studies analyzed the NEDC’s impact on USDA’s nutrition assistance programs, the studies relied on a limited amount of data on retail milk prices in the six NEDC states, and their results are inconclusive. Additional details about the NEDC’s impact on USDA’s milk price support and nutrition assistance programs are included in appendix IV.

Estimated 1999 Interregional Impacts of Various Compact Alternatives Increased as Compacts Grew in Size

Our estimates of the interregional impacts of dairy compacts in 1999 on such measures as farm-level prices, milk production, and farm revenue range from minimal to somewhat larger, depending on the size of the compact and the assumptions that we used to run the economic model. The NEDC states account for only about 3 percent of the milk produced in the nation, and we estimate that in 1999 the Compact had little to no impact in other regions of the country on farm-level prices or milk production and, hence, on farm revenue. An expanded NEDC would account for approximately 18 percent of the milk produced in the nation, and we estimate that in 1999 it would have had a larger but still relatively small impact on farm-level prices, milk production, and farm revenue in other regions of the country. An expanded NEDC, in conjunction with a southern compact, would account for approximately 27 percent of the quantity of milk produced in the nation, and we estimate that in 1999 it would have had a somewhat larger impact on farm-level prices, milk production, and revenue in other regions of the country. These estimates are comparable to other economists’ estimates of the interregional impacts of dairy compacts of different sizes. In general, if dairy farmers located within a compact region received higher farm-level milk prices than they would otherwise have received, they would respond by increasing their raw milk production.
Moreover, these higher farm-level milk prices would likely lead to higher fluid milk retail prices in the compact region—prices that would lower consumer purchases of fluid milk in that region. These two effects in the compact region—greater raw milk production and lower consumer fluid milk purchases—would increase the national supply of raw milk that was available for the manufacture of other dairy products, such as cheese, butter, and nonfat dry milk. In turn, this increase in the national supply of milk for manufacturing purposes would result in lower farm-level prices for raw milk to be used for manufacturing purposes. Because minimum Class I prices are based on the prices paid for raw milk to be used for manufacturing purposes, farmers in noncompact regions would receive lower farm-level prices for all classes of milk and, thus, lower blend prices. Other things being equal, dairy farmers in noncompact regions would respond to lower farm-level prices by reducing their milk production. These two effects in noncompact regions—lower farm-level prices and reduced production—would cause farm revenue there to fall. This impact would be particularly significant for dairy farmers in regions such as the Upper Midwest, where most milk is used for manufacturing purposes. Farmers in noncompact regions who ship their milk to compact regions may be eligible to receive the compact region’s farm-level price for that milk, which could offset the loss in revenue associated with lower farm-level prices for milk. If the compact price is sufficiently high, any increased transportation or shipping costs could be offset. To assess the likely interregional impacts of each compact alternative or scenario, we derived an initial set of estimates that represents the impact of that alternative in 1999, given an initial set of assumptions. We then changed key assumptions to analyze how sensitive our initial estimates were to such changes.
In general, these sensitivity analyses demonstrated that our initial estimates were not very sensitive to changes in the key assumptions. (For a more detailed description of our initial and subsequent sets of assumptions, see app. II.) Accordingly, we present our estimates of the impacts of the different compact scenarios as ranges that include our initial set of estimates and the results of our sensitivity analyses. In addition, we present these estimates as changes from our 1999 baseline estimates, which represent the estimated values of farm-level and wholesale-level dairy indicators in that year in the absence of any dairy compact—our “no-compact scenario.” Our estimates apply only to 1999, and they may not represent the interregional impacts of compacts in all years. In particular, these estimates are based on data for the period prior to USDA’s milk marketing order regulatory reforms in January 2000, which have affected some dairy sector indicators, such as farm-level milk prices. Furthermore, farm-level milk prices in 1999 were higher than they were in some other years, and we anticipate that, other things being equal, compacts have less of an impact in years when farm-level prices are relatively high. In addition, although our estimated impacts of compacts on noncompact regions for both 1999 and 2000 are relatively small, the impacts on some individual dairy farmers, such as small producers with marginal profitability, in noncompact regions could be significant. Finally, as in any modeling effort, there is some uncertainty about a model's structure and the data and assumptions used. In addition, the model that we used was limited in its ability to distinguish between shipments of bulk raw milk and packaged fluid milk into regions that import milk to meet their demand because the model is an annual model, and such shipments are frequently seasonal. (See app. II for a discussion of this as well as other modeling limitations.) 
Despite this uncertainty and limitation, we believe that the process for developing our estimates was rigorous and that the model is comprehensive and sound. Given these conditions, our estimates should be interpreted as indicative of the order of magnitude of changes in farm and wholesale economic values, rather than as precise estimates. Appendix V provides more detailed information about our estimates of the impacts of the three compact alternatives on 1999 farm-level prices, production, and revenue in noncompact regions, as well as on national average wholesale-level prices and national wholesale-level production and expenditures.

Interregional Impacts of the NEDC in 1999 Were Small

We estimate that the NEDC resulted in small economic impacts in noncompact regions in 1999. Specifically, we estimate that the largest reductions in farm-level revenue under the NEDC compared with the no-compact scenario occurred in California and the Upper Midwest region: from $4 million to $11 million and from $4 million to $9 million, respectively. Table 3 provides our estimates of the extent to which the NEDC reduced farm-level revenue—that is, the value of all milk sold by dairy farmers—in these two regions and in all noncompact regions combined. These estimated impacts on farm-level revenue were small because dairy farmers in the NEDC states produced only about 3 percent of the nation’s milk supply. As a result, any increased supply of milk that was available for manufacturing purposes in 1999 from NEDC farmers was small compared with the nation’s total milk supply for manufacturing purposes. Therefore, the impact on farm-level prices and milk production, and hence on farm revenue, for producers outside the compact region was also small.
For example, we estimate that, as a result of the NEDC, farm-level prices in all noncompact regions remained unchanged or fell by no more than 2 cents per hundredweight, or less than 0.20 percent, while milk production for all noncompact regions combined fell by less than 0.06 percent. We estimate that, in 1999, the NEDC’s impact on the national average wholesale prices of manufactured dairy products was also minimal. For example, we estimate that the wholesale prices per hundredweight for American cheese were 3 to 9 cents lower and for butter about 3 to 23 cents lower than they would have been under the no-compact scenario. These estimated differences, even at the upper ends of these ranges, represent about 0.06 and 0.19 percent, respectively, of our estimated 1999 wholesale American cheese and butter prices under the no-compact scenario.

Interregional Impacts of an Expanded NEDC in 1999 Would Also Have Been Relatively Small

We estimate that, in 1999, the interregional impacts of an expanded NEDC that included five additional states would have been a little larger than the impacts of the NEDC, but still small. Specifically, we estimate that, compared with our analyses using the no-compact scenario, dairy farm revenue in 1999 would have been reduced the most in the Upper Midwest region—by $13 million to $24 million. Table 4 provides our estimates of the extent to which an expanded Northeast Compact would have reduced farm-level revenue in the Upper Midwest and in all noncompact regions combined. As under the NEDC scenario, we estimate that the impact of an expanded NEDC would have been relatively small because dairy farmers in the 11 states included in the expanded Compact produced only about 18 percent of the nation’s milk supply.
As a result, any increased supply of milk that would have been available for manufacturing in 1999 from those farmers, although a little larger than with the NEDC, would have still been small compared with the nation’s total milk supply for manufacturing. Therefore, the impact on farm-level prices and milk production, and hence on farm revenues, for producers outside the expanded Compact region would have been small. For example, under the expanded NEDC scenario we estimate that, compared with our no-compact scenario, farm-level prices in noncompact regions would have fallen by no more than 6 cents per hundredweight or less than 0.5 percent, while milk production for all noncompact regions combined would have been lower by about 0.21 percent or less. We estimate that, in 1999, the impact of an expanded NEDC on the national average wholesale prices of manufactured dairy products would have been a little larger than the impact of the NEDC, but still relatively small. For example, we estimate that the wholesale prices per hundredweight for American cheese would have been about 18 to 41 cents lower and for butter 46 to 88 cents lower than under our no-compact scenario. These differences, even at the upper ends of these ranges, represent less than 0.3 and 0.7 percent, respectively, of our estimated 1999 wholesale American cheese and butter prices under the no-compact scenario. Interregional Impacts of an Expanded NEDC Combined With a Southern Compact in 1999 Would Have Been Somewhat Larger We estimate that, in 1999, the interregional impacts of an expanded NEDC in conjunction with a southern compact—a total of 23 states—would have been somewhat larger than the impact of our other compact scenarios. 
Specifically, under this scenario and using the same assumption about fluid milk trade between regions as used in the previous scenarios, compared with our no-compact scenario we estimate that dairy farm revenue in 1999 would have been reduced the most in California and in the Upper Midwest and Mideast regions: $26 million to $118 million, $26 million to $63 million, and $21 million to $43 million, respectively. Table 5 provides our estimates of the extent to which an expanded NEDC in conjunction with a southern compact would have reduced farm-level revenue for milk in these regions and for all noncompact regions combined, in 1999. The estimated impact of the expanded NEDC in conjunction with a southern compact is relatively larger because dairy farmers in the states included in these compacts produced about 27 percent of the nation’s milk supply. As a result, any increased supply of milk that would have been available for manufacturing purposes in 1999 from farmers in these states would have been somewhat larger than under the previous scenarios. Therefore, the impact on farm-level prices and milk production, and hence on farm revenues, for producers outside the compact regions would have been somewhat larger. For example, we estimate that farm-level prices in noncompact regions could have fallen by as much as 36 cents per hundredweight or about 2.6 percent compared with our no-compact scenario, while milk production for all noncompact regions combined could have fallen by as much as 0.75 percent. We estimate that, in 1999, the impacts of an expanded NEDC in conjunction with a southern compact on the national average wholesale prices of manufactured dairy products would have been somewhat larger than the impacts of our other scenarios.
For example, compared with the estimated wholesale prices under our no-compact scenario, we estimate that the prices per hundredweight would have been about 62 cents to $1.41 lower for American cheese and between 21 cents higher and $6.53 lower for butter. At the upper end of these ranges, these differences represent about 1.0 percent and 5.5 percent, respectively, of our estimated 1999 wholesale American cheese and butter prices under our no-compact scenario. For the expanded NEDC plus a southern compact scenario, we found that our estimates of interregional impacts were sensitive to our assumption about how much milk can be shipped between noncompact and compact regions. In particular, our estimated impacts for 1999 of an expanded NEDC in conjunction with a southern compact on noncompact regions would have been greater if we had used a more restrictive assumption that limits the amount of milk that can be shipped from noncompact into compact regions. Specifically, we estimate that using a more restrictive assumption increases our estimate of how much farm-level prices, milk production, and farm revenues in noncompact regions would have fallen in 1999 under this scenario compared with under our no-compact scenario. Table 6 shows our estimated reductions in farm revenues for raw milk in California and the Upper Midwest and Mideast regions, and all noncompact regions combined under our restricted fluid milk trade assumption compared with under our no-compact scenario. We also estimate that the impact in 1999 of an expanded NEDC in conjunction with a southern compact on the national average wholesale prices of some manufactured dairy products would have been greater under the more restrictive fluid milk trade assumption than without that restrictive trade assumption. For example, under the more restrictive trade assumption, we estimate that the price per hundredweight for American cheese would have been about $1.27 to $1.86 lower than under our no-compact scenario. 
However, for butter we estimate that the impact with the restrictive trade assumption would have been smaller than the estimated impact without the restrictive trade assumption. Under the restrictive trade assumption, we estimate that the price per hundredweight for butter would have changed from 7 cents higher to $2.80 lower than under our no-compact scenario. At the upper end of these ranges, these differences represent about 1.3 and 2.3 percent, respectively, of our estimated 1999 wholesale prices for American cheese and butter under our no-compact scenario. Using Farm-Level Prices for 2000 as Opposed to 1999 Has a Limited Influence on the Estimated Impacts of Compacts As noted previously, the farm-level prices that we used in our model can affect our estimates of the impacts of compacts on dairy sector indicators such as farm-level revenue. In 1999, the national average blend price was $14.09 per hundredweight of milk; in 2000, the national average blend price was $12.11 per hundredweight of milk. With lower farm-level prices in 2000 than in 1999, the difference between a compact price in our model and the federal milk marketing order Class I minimum price was larger in 2000 than in 1999. As a result, the increase in milk production and decrease in fluid milk purchases that would have likely occurred within a compact region in 2000 would be expected to be greater than when farm-level prices were higher, as they were in 1999. This situation, in turn, would imply a greater increase in the supply of milk available for manufacturing dairy products in 2000, which, other things being equal, would lead to lower farm-level prices, reduced milk production, and lower farm-level revenue in noncompact regions. However, on the basis of preliminary data for 2000, we estimate that the impacts of our three compact scenarios, which are based on our initial set of assumptions, are generally similar to our initial estimates for each scenario in 1999.
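The mechanism just described can be sketched numerically. Assume, for illustration only, a compact Class I price floor of $16.94 per hundredweight (the NEDC's floor was in this vicinity) and hypothetical federal order Class I minimum prices for the two years; the lower the federal price, the larger the gap the compact fills:

```python
compact_floor = 16.94  # $/cwt compact Class I price floor (illustrative)

# Hypothetical federal milk marketing order Class I minimum prices ($/cwt)
federal_class_i = {1999: 15.90, 2000: 13.90}

# The compact obligation is the amount by which the floor exceeds the
# federal minimum; lower market prices, as in 2000, widen this gap.
gap = {year: max(0.0, compact_floor - price)
       for year, price in federal_class_i.items()}
```

A wider gap means a stronger production incentive and a larger fluid-price increase inside the compact region, and hence more milk diverted to manufacturing uses, which is why lower 2000 prices would be expected to magnify the interregional impacts.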
Even though our estimates are generally similar, when we impose our more restrictive fluid milk trade assumption on our scenario of an expanded NEDC in conjunction with a southern compact, our estimates of the impact on the Upper Midwest are slightly greater for 2000 than for 1999. The similarities between our 2000 and 1999 estimates suggest that other factors may be affecting our estimates for 2000. In those years when noncompact farm-level prices are lower than compact farm-level prices, a factor offsetting the potentially larger interregional impacts of compacts is the ability to market noncompact region milk in compact regions for use as fluid milk. Farmers in noncompact regions whose milk is marketed in compact regions for use as fluid milk may be eligible to receive the compact regions’ farm-level blend price for that milk. When noncompact region farm-level blend prices are low, the gain to farmers from shipping milk to compact regions is greater than when noncompact region prices are high. This gain can partially offset the larger negative impact that compacts can have on revenue in noncompact regions when farm-level prices are low because of the increased supply of milk available for manufacturing purposes. Table 7 provides our estimates of the impacts of the compact scenarios on 2000 farm-level revenue in the Upper Midwest region and all noncompact regions. Appendix VI contains our estimates of the interregional impacts of compacts on 2000 farm-level and wholesale-level dairy sector indicators. Other Economic Analyses of Interregional Dairy Compacts Have Produced Similar Estimates We reviewed other studies of the interregional impacts of the NEDC and larger dairy compacts and found that the results are comparable with ours, even though they used different methodologies. In a 1999 analysis, USDA estimated that the impact of the NEDC on farm-level prices and dairy farm revenue in noncompact regions during the years 2000 through 2005 would have been minimal. 
For example, USDA estimated that in 2000 the NEDC would either have no impact on producer prices in noncompact regions or reduce producer prices by about 1 cent per hundredweight of milk, or by about 0.07 percent, depending on the noncompact region of the country. An analysis conducted by researchers at the University of California, Davis, also estimated that the NEDC reduced producer prices in noncompact regions by about 2 cents per hundredweight of milk, or by about 0.15 percent, on the basis of 1999 data. The researchers concluded that the NEDC had such a small impact because the NEDC states produced such a small portion of the nation’s milk supply. They also estimated that if the Compact had been expanded to include additional states that produced 9 percent of the nation’s milk supply, producer prices in noncompact regions would have fallen by about 5 cents per hundredweight, or by about 0.35 percent. An analysis conducted by a researcher at Pennsylvania State University of an expanded NEDC in conjunction with a southern compact that produced 27 percent of the U.S. milk supply also concluded that compacts have a relatively small impact. Using a range of assumptions about milk prices and data for 1997, the researcher projected that in 2000, the compact would decrease producer prices in noncompact regions by 4 to 14 cents per hundredweight, or by about 0.3 to 1 percent. Researchers at the University of Wisconsin-Madison, using a 1997 version of the Interregional Dairy Competition Model that we used in our analysis, also estimated that the NEDC had a small impact on producer prices. This analysis estimated that farm-level prices would fall from 5 to 10 cents per hundredweight under an expanded NEDC scenario; 13 to 15 cents per hundredweight under a southern compact scenario; and 14 to 28 cents per hundredweight under a combined expanded NEDC and southern compact scenario.
In an analysis of the impact of compacts prepared for the International Dairy Foods Association, one researcher estimated that the NEDC reduced farm-level revenue in noncompact regions in 2000 by about $29 million, while an expanded 29-state compact would reduce farm-level revenue in noncompact regions by about $374 million. A more detailed discussion of these studies is included in appendix VII. Concluding Observations By affecting the minimum prices that dairy farmers within the Compact region receive for their raw milk, the NEDC may have enhanced dairy farmer income in the six NEDC states, and other states such as New York, that supply raw milk used for and sold as fluid milk in the NEDC states. It is not certain, however, whether the NEDC will help ensure the continued vitality of dairy farming in the New England dairy region. Data indicate that the number of dairy farms in the six states continued to decrease following the NEDC's establishment in July 1997. With regard to retail prices, the NEDC contributed to increased retail fluid milk prices within the six states, although the extent of its contribution is uncertain. Even so, available evidence and analyses indicate that the NEDC has had little impact on dairy farmers or consumers in noncompact regions. Proposals are pending before the Congress for larger compacts. Our analysis shows that as the share of the U.S. milk supply covered by compacts increases, the estimated interregional impacts on farm-level prices and revenue increase as well. Furthermore, these estimated impacts could be different under new marketing conditions. Our estimates of the interregional impacts of compacts are based primarily on data from before January 2000, when USDA’s regulatory reforms took effect. Data since January 2000 indicate that the dairy industry is in the process of adjusting to these substantial changes. 
Equally important, our estimates of the interregional impacts are based on three compact scenarios, the largest of which includes fewer than the number of states currently being considered for inclusion in dairy compacts. A thorough understanding of the impacts of these other potential compacts on dairy sector indicators cannot be developed until sufficient data become available following the dairy industry’s adjustment to regulatory reform. Agency Comments and Our Response We provided USDA and the Executive Director of the NEDC with a draft of this report for review and comment. On September 5, 2001, we met with USDA's Chief Economist, Dairy Programs, Agricultural Marketing Service, and other officials from USDA's Agricultural Marketing Service, Economic Research Service, Farm Service Agency, Food and Nutrition Service, National Agricultural Statistics Service, and the Department's Office of the Chief Economist to obtain their oral comments. USDA officials stated that they recognized the difficulty of undertaking a study of this nature and said that our work represents a reasonable effort to estimate the intraregional and interregional impacts of dairy compacts. They provided a number of technical corrections and suggestions, which we incorporated as appropriate. We also discussed the draft report with the NEDC Executive Director, who stated that we had dealt with the issues in a constructive and comprehensive manner. The NEDC Executive Director also provided us with written comments. While concurring with our estimate of the interregional impacts of the NEDC, the Executive Director expressed concern that the University of Wisconsin-Madison dairy model did not measure the benefits that New York dairy farmers receive when they supply milk to the NEDC states. We concur that the model does not measure the impacts of compacts on noncompact states that are within the same region as compact states.
As the model is designed, New York and the NEDC states, as well as several other states, are included in the same (Northeast) region. The NEDC Executive Director's written comments and our detailed responses appear in appendix VIII. We performed our work between September 2000 and September 2001 in accordance with generally accepted government auditing standards. Appendix I contains a detailed description of our scope and methodology. We are sending copies of this report to the Senate Committee on Agriculture, Nutrition, and Forestry; the House Committee on Agriculture; other appropriate congressional committees; the Secretary of Agriculture; the Executive Director of the NEDC; the Director, OMB; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-3841 if you or your staff have any questions about this report. Another GAO contact and key contributors to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology In May 2000, Senator Herbert Kohl requested that we examine the economic impacts of the Northeast Interstate Dairy Compact (NEDC) and other potential compacts on a variety of dairy sector indicators. 
Specifically, because legislation authorizing the Compact is to expire on September 30, 2001, and the Congress is considering legislative alternatives for reauthorizing the NEDC and authorizing other states to enter into such compact arrangements, Senator Kohl asked us to provide information on the intraregional impacts of the NEDC (that is, within the six NEDC states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont) on dairy sector indicators such as (1) retail milk prices, (2) milk producer income, (3) dairy farm structure, (4) milk production, and (5) milk consumption; the impact of the NEDC on the costs to the federal government of its milk price support and nutrition assistance programs; and the interregional impacts of the NEDC, an expanded NEDC, and an expanded NEDC in conjunction with a southern compact (that is, the impacts on noncompact milk-producing regions) on selected farm-level and wholesale-level indicators such as prices, production, and revenue. NEDC’s Intraregional Impacts To determine the intraregional impacts of the NEDC, we sought, but did not find, a readily usable economic model that comprehensively estimates these impacts while holding constant other factors that also affect the selected dairy sector indicators. Further, due to time and resource constraints, we were not able to develop a model or series of models to estimate these impacts. As a result, we analyzed federal, state, and other data on these indicators, for a period of time before and after the NEDC’s minimum pricing regulations became effective, to determine any changes in historic trends in the NEDC states. In each case, we also obtained these data for the rest of the United States so that we could compare trends in New England with those in the rest of the country. We also reviewed available studies on the NEDC’s potential impacts on the indicators.
Specifically, to determine the impacts on retail milk prices, we obtained and analyzed retail milk price data from (1) A.C. Nielsen, a private data collection and analysis company, for the Boston market as well as for other major U.S. cities for November 1996 through September 2000; (2) the departments of agriculture in Connecticut, Maine, and New Hampshire for November 1996 through October 2000; and (3) the International Association of Milk Control Agencies for those states that have independent milk pricing agencies for January 1994 through November 2000. We also reviewed available economic analyses of the NEDC’s impact on retail milk prices and interviewed USDA’s Agricultural Marketing Service and NEDC officials to obtain their views on the NEDC’s impact on retail milk prices. To examine the intraregional impacts of the NEDC on milk producer income, we compared USDA’s Economic Research Service balance sheet and income statement data from 1991 through 1999 for a representative composite dairy farmer in the Service’s northeastern region with data for a farmer located outside the northeastern region. The Economic Research Service’s northeastern region includes Connecticut, Delaware, Massachusetts, Maryland, Maine, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont. States outside the northeastern region include all states with the exception of these 11 states and Alaska and Hawaii. The Service was not able to provide data for a representative composite farmer in the six NEDC states alone because the sample size was not sufficiently large to produce reliable balance sheet and income data. The Economic Research Service develops these data through surveys of sampled farm operations. It collects data on operating costs—such as feed, equipment purchases, and product distribution—as well as data on returns, such as income received from sales of field crops and livestock. 
The Service uses information obtained from sampled farms to estimate the average costs of milk production in the United States and in various regions in the country. Costs can vary significantly from farm to farm because of differences in farm location, size, and production practices. As a result, the costs and returns for an individual farm can vary considerably from the average. In addition to using Economic Research Service data to estimate the impact of the NEDC on farm income, we also estimated the average payment a licensed dairy farmer in one of the six NEDC states may have received between July 1997 and the end of calendar year 2000 as a result of the NEDC. To do this, we used (1) monthly NEDC balance sheets that reflect the total amount of milk eligible for the NEDC milk price and NEDC over-order producer price payment amounts available for dairy farmers, (2) USDA’s National Agricultural Statistics Service milk production data for the six NEDC states, and (3) American Farm Bureau Federation data on the number of licensed dairies in the six states. To determine the average payment, we estimated what proportion of the milk eligible for the NEDC milk price could be attributed to a licensed NEDC dairy farmer’s milk. We also reviewed available economic analyses of the potential impacts of the NEDC on dairy farmer income. Last, we obtained data developed by the NEDC commission on amounts distributed to farmers as a result of the Compact, and its assessment of the Compact's impact on farmer income. To determine the intraregional impacts of the NEDC on dairy farm structure, we obtained National Agricultural Statistics Service data on the total number of cows in the NEDC and the rest of the United States, as well as state-by-state data on the number of farms having at least one milk cow between 1992 and 2000. We obtained data from the American Farm Bureau Federation on the number of licensed dairies in the United States, by state, between 1992 and 2000. 
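The average-payment estimate described above divides total compact over-order payments across the milk eligible for the compact price and the licensed dairies in the six states. A minimal sketch of that calculation with entirely hypothetical inputs (these are not the report's figures):

```python
# Hypothetical annual figures, for illustration only
total_over_order_payments = 40_000_000    # $ distributed under the compact
state_milk_production_lb = 4_300_000_000  # lb produced in the six NEDC states
eligible_share = 0.80                     # share of that milk eligible for the compact price
licensed_dairies = 3_300                  # licensed dairy farms in the six states

# Convert pounds to hundredweight (cwt) and compute the two summary figures
eligible_milk_cwt = state_milk_production_lb * eligible_share / 100
payment_per_cwt = total_over_order_payments / eligible_milk_cwt
avg_payment_per_farm = total_over_order_payments / licensed_dairies
```

The real calculation uses the monthly NEDC balance sheets, National Agricultural Statistics Service production data, and American Farm Bureau Federation dairy counts cited above in place of these invented values.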
We also reviewed information on factors that affect the structure of dairy farms, and interviewed officials from the Agricultural Marketing Service, the Economic Research Service, the National Agricultural Statistics Service, and the NEDC commission to obtain their views of the Compact’s impacts on farm structure. To determine the intraregional impacts of the NEDC on milk production, we reviewed National Agricultural Statistics Service data on the average amount of milk produced by state and the average amount of milk produced per dairy cow between 1993 and 2000. We also reviewed available economic analyses of the impacts that the NEDC may have had on milk production. In addition, we interviewed officials from the Agricultural Marketing Service, the National Agricultural Statistics Service, and the NEDC to obtain their views on the NEDC’s impact on milk production. To examine the intraregional impacts of the NEDC on milk consumption, we reviewed Agricultural Marketing Service data on the total amount of sales of packaged fluid milk products in federal milk marketing orders and California between 1996 and 1999. Such sales are representative of the consumption of fluid milk products and account for about 93 percent of fluid milk sales in the United States. In addition, we reviewed data on factors affecting milk consumption and available economic studies of the NEDC’s impact on milk consumption. NEDC’s Impacts on Federal Programs To examine the impacts of the NEDC on the costs of the federal government’s milk price support program, we reviewed USDA Farm Service Agency analyses of estimated amounts of milk production in the six NEDC states compared with the rest of the United States. We also reviewed USDA and Compact data on payments made to USDA by the NEDC. In addition, we interviewed Farm Service Agency and NEDC officials to obtain information on payments made by the NEDC. 
To examine the intraregional impacts on nutrition assistance programs, we interviewed USDA Food and Nutrition Service officials and obtained that agency’s analyses of the potential impact of the NEDC on its programs. We also interviewed officials responsible for each of the six states’ Special Supplemental Nutrition Program for Women, Infants and Children and school nutrition programs. Finally, we reviewed available economic analyses of the estimated impact of the NEDC on nutrition assistance programs. Compacts’ Interregional Impacts To examine the interregional impacts—that is, the economic impacts in other regions of the country—of the NEDC, an expanded NEDC, and an expanded NEDC in conjunction with a southern compact, we conducted policy simulations using the University of Wisconsin-Madison’s Dairy Sector Interregional Competition Model calibrated to reflect the dairy industry in 1999 (IRCM99). We contracted with the University to have Dr. Thomas L. Cox, Professor of Agricultural and Applied Economics and a primary developer of the model, conduct the policy simulations. Working with Dr. Cox and consulting with other prominent dairy economists from different regions of the country, we developed a set of parameters for use in simulating different compacts’ impacts on dairy sector indicators. We modeled three different compacts—the NEDC, an expanded NEDC, and an expanded NEDC in conjunction with a southern compact— consisting of an increasing number of states. The states in the NEDC are Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. The states that we assumed to be included in the expanded NEDC are these six states and Delaware, Maryland, New Jersey, New York, and Pennsylvania. 
The states that we assumed to be included in the expanded NEDC in conjunction with a southern compact are the above 11 states and Alabama, Arkansas, Georgia, Kansas, Kentucky, Louisiana, Missouri, Mississippi, North Carolina, South Carolina, Tennessee, and Virginia. The above states included in an expanded NEDC and a southern compact were selected because they had enacted legislation, as of the end of February 2001, that authorized their entry into a dairy compact should the Congress establish one. While West Virginia had also enacted such legislation as of the end of February 2001, we did not include that state in a southern compact for the purposes of our analysis because of the difficulties associated with accounting for that state’s milk production in compact versus noncompact regions of the country. Furthermore, because West Virginia produces a relatively small amount of milk in comparison with other states included in compact regions, the effect of excluding West Virginia is negligible. The agricultural economists and other dairy experts with whom we consulted concerning model specifications and assumptions include the following: Kenneth W. Bailey, Associate Professor, Department of Agricultural Economics and Rural Sociology, the Pennsylvania State University; Joseph V. Balagtas, Research Assistant, Department of Agricultural and Resource Economics, University of California, Davis; Scott Brown, Research Assistant Professor, Food and Agricultural Policy Research Institute, the University of Missouri; Harold M. Harris, Jr., Professor, Department of Agricultural and Applied Economics, Clemson University; Harry Kaiser, Professor, Department of Applied Economics and Management, Cornell University; Richard L. 
Kilmer, Professor, Food and Resource Economics; Leigh Maynard, Assistant Professor, Department of Agricultural Economics, the University of Kentucky; Neil Pelsue, Extension Associate Professor, Department of Community Development and Applied Economics, the University of Vermont; William A. Schiek, Economist, Dairy Institute of California; Mark Stephenson, Senior Extension Associate, Department of Applied Economics and Management, Cornell University; Daniel Sumner, Professor, Agricultural and Resource Economics Department, University of California at Davis; Cameron S. Thraen, Associate Professor, Agricultural, Environmental, and Development Economics, the Ohio State University; and Christopher Wolf, Assistant Professor, Department of Agricultural Economics, Michigan State University. In addition, we consulted with several agricultural economists at USDA, including economists in the Office of the Chief Economist and the Economic Research Service. Our process for developing the assumptions that we used to model the interregional impacts of dairy compacts included reviewing economic literature to identify estimates of (1) regional supply elasticities, (2) wholesale demand elasticities, and (3) transportation costs. We also obtained data on market over-order premiums and compact over-order producer prices. Finally, we interviewed USDA and other officials to obtain information on regulations governing milk shipments among federal marketing orders and noncompact and compact regions. After identifying assumptions for modeling the three different compact scenarios, we developed an initial estimate of the economic impacts of the different compacts. We then conducted sensitivity analyses by varying the values of our key assumptions. We provided our preliminary estimates to several agricultural economists to obtain their views, and incorporated many of their comments in subsequent modeling before developing our final range of estimates.
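The sensitivity analysis described above amounts to re-running the model over a grid of values for the key assumptions and reporting the spread of results as a range. A sketch of that procedure, with a made-up reduced-form function standing in for the full IRCM (the real model is a nonlinear program, not this formula, and all values here are invented):

```python
import itertools

def estimated_revenue_impact(supply_elasticity, demand_elasticity, transport_cost):
    # Hypothetical reduced form standing in for a full model run;
    # returns an impact on noncompact farm revenue in $ millions.
    return -30.0 * supply_elasticity * abs(demand_elasticity) / transport_cost

# Candidate values for each key assumption (illustrative)
supply_elasticities = [0.2, 0.5, 1.0]
demand_elasticities = [-0.1, -0.3]
transport_costs = [0.35, 0.50]

impacts = [estimated_revenue_impact(s, d, t)
           for s, d, t in itertools.product(supply_elasticities,
                                            demand_elasticities,
                                            transport_costs)]
low, high = min(impacts), max(impacts)  # report the estimate as a range
```

Reporting the minimum and maximum over the grid is what produces ranges such as "$13 million to $24 million" rather than point estimates.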
Our final estimates of the impacts of the different compact scenarios are presented as ranges that include our initial estimates as well as estimates from our sensitivity analyses. (A detailed discussion of the model and assumptions, data, and data sources used is included in app. II.) In addition to modeling the interregional impacts of the different compact scenarios, we reviewed economic analyses that have been conducted on the potential interregional impacts of dairy compacts. We present these reviews in appendix VII. Appendix II: Methodology for Estimating the Interregional Impacts of Dairy Compacts This appendix describes our methodology for estimating the interregional impacts of three compact scenarios: the six-state NEDC, an expanded NEDC, and an expanded NEDC in conjunction with a southern compact. To estimate the interregional impacts, we contracted with the University of Wisconsin-Madison to use the Dairy Sector Interregional Competition Model (IRCM), which is an interregional spatial market equilibrium model of the U.S. dairy sector. This model is useful in estimating the impacts of different dairy policy options, such as dairy compacts. Dr. Thomas L. Cox, Professor of Agricultural and Applied Economics at the university and a primary developer of the model, conducted the policy simulations for us. This appendix describes (1) the structure of the IRCM and how it estimates the interregional impacts of dairy compacts; (2) the data and data sources used for conducting policy simulations of the different compact scenarios; (3) how we calibrated a baseline for 1999; (4) details of each scenario that we modeled; (5) parameter values for our baseline and initial estimates; (6) how we varied key assumptions to test the sensitivity of our initial estimates; and (7) the limitations of the model. The results of our different policy simulations and sensitivity analyses are presented in appendixes V and VI. IRCM Structure The IRCM is a hedonic spatial equilibrium model of the U.S.
dairy sector that can be used to estimate the impacts of policy or program changes, such as the establishment of compacts. The model allocates the production and consumption of raw milk and nine other different dairy commodities among 12 regions of the country and solves for the trade flows of these commodities among those regions to achieve a spatial equilibrium. Using nonlinear programming techniques, the model solves to ensure an efficient regional distribution of the different dairy commodity resources, given the demand for and supply of those resources at various prices. On a more technical basis, the model solution maximizes the sum of producer and consumer welfare minus processing, transportation, and U.S. Department of Treasury costs. The model defines aggregate wholesale dairy product demand and farm-level milk supply functions as follows:

(1a) $D_i = \sum_{k=1}^{K} \int_0^{z_{ik}} p_{ik}(z)\,dz$

(1b) $S_i = \int_0^{w_i} p_i(w)\,dw$

where $p_i(w)$ is the price-dependent supply function for milk in the $i$-th region, with $\partial p_i/\partial w > 0$, $i = 1, \ldots, J$, and $p_{ik}(z_{ik})$ is the price-dependent demand function for the $k$-th dairy product consumed in the $i$-th region, with $\partial p_{ik}/\partial z_{ik} < 0$, $i = 1, \ldots, J$, $k = 1, \ldots, K$. Equation (1a) is the sum of the areas under the K demand curves in the i-th region. This can be interpreted as a measure of consumer benefits generated by the K commodities in the i-th region. Equation (1b) is the area under the supply curve, a measure of milk production cost in the i-th region. The term $(D_i - S_i)$, consumer benefits minus total production costs in the i-th region, minus transportation costs, is a measure of net social benefits to farmers and consumers in each region. Federal government costs are then subtracted. Two steps are used to create an IRCM that models the impact of compacts in 1999: IRCM99. First, the model is calibrated to 1999 data so that baseline estimates of key dairy sector measures of prices, production, consumption, and trade flows can be obtained on a regional basis.
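The net-social-benefit maximization the model performs can be illustrated with a two-region, one-commodity toy version of a spatial equilibrium model. Everything here is invented for illustration (linear inverse supply and demand, a single transport cost); the actual IRCM99 covers 12 regions, multiple wholesale products, and component-based milk pricing:

```python
import numpy as np
from scipy.optimize import minimize

# Invented inverse supply p = a + b*w and inverse demand p = c - d*z
a = np.array([2.0, 4.0]);   b = np.array([0.10, 0.10])
c = np.array([20.0, 24.0]); d = np.array([0.20, 0.20])
t = 1.0  # transport cost per unit shipped between the two regions

# Variables v = [w1, w2, z1, z2, x12, x21]: production, consumption, shipments
def neg_welfare(v):
    w, z, x = v[0:2], v[2:4], v[4:6]
    consumer_benefit = np.sum(c * z - 0.5 * d * z**2)  # areas under demand curves
    production_cost = np.sum(a * w + 0.5 * b * w**2)   # areas under supply curves
    return -(consumer_benefit - production_cost - t * x.sum())

constraints = [  # consumption = production - outflow + inflow in each region
    {"type": "eq", "fun": lambda v: v[2] - (v[0] - v[4] + v[5])},
    {"type": "eq", "fun": lambda v: v[3] - (v[1] - v[5] + v[4])},
]
res = minimize(neg_welfare, x0=np.ones(6), bounds=[(0, None)] * 6,
               constraints=constraints, method="SLSQP")
z = res.x[2:4]
prices = c - d * z  # equilibrium regional prices read off the demand curves
```

At the optimum, the regional price difference equals the transport cost and product flows from the low-price to the high-price region, the same spatial-arbitrage condition the full model enforces across its 12 regions.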
Second, simulation analyses are performed to estimate the impacts of the different compact scenarios. In the IRCM99, milk production and dairy product consumption in the country are divided into 12 regions that are based on the current 11 USDA federal milk marketing orders and California. In addition, the IRCM99 accounts for net private stocks, net government stocks/removals, and U.S. imports and exports. Table 8 compares the IRCM99 regions, the corresponding USDA marketing orders or states before January 2000, the states included in the IRCM99 regions, and the states that had enacted legislation as of February 2001 authorizing entry into any congressionally authorized dairy compact. Figure 1 shows USDA’s marketing orders, the corresponding IRCM99 regions, the corresponding states included in the IRCM99 regions, and the states that had enacted legislation authorizing entry into a compact as of February 2001. With respect to modeling the impact of compacts, the states that have enacted legislation are part of four different federal milk marketing orders and their corresponding IRCM99 regions. Because of this, both compact and noncompact states are included in some IRCM99 regions when modeling some compact alternatives. This can influence the interpretation of modeled results. For example, some states that have enacted legislation authorizing entry into a dairy compact, such as West Virginia, which is part of the Mideast Marketing Order, produce only a small portion of the milk produced by dairy farmers in that marketing order; hence, estimating the effect of West Virginia’s participation in a compact would be difficult. IRCM99 solves for regional prices and production levels for farm-level raw milk on the basis of three milk components: milk fat, protein, and carbohydrates (primarily lactose).
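The spatial equilibrium core described above, in which the model maximizes consumer benefits minus production and transportation costs, can be illustrated with a stylized two-region, one-commodity version. All parameter values below are hypothetical and chosen only to make the mechanics visible; the actual IRCM99 solves over 12 regions, multiple commodities, and three milk components.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear inverse demand p = a - b*q and inverse supply p = c + d*q
a = np.array([10.0, 8.0]); b = np.array([1.0, 1.0])   # demand, regions 1 and 2
c = np.array([2.0, 1.0]);  d = np.array([1.0, 0.5])   # supply, regions 1 and 2
t = 1.0  # per-unit transportation cost between the two regions

def neg_welfare(v):
    qs = v[0:2]              # milk production in each region
    x12, x21 = v[2], v[3]    # shipments region 1 -> 2 and region 2 -> 1
    qd = np.array([qs[0] - x12 + x21, qs[1] - x21 + x12])  # consumption
    consumer_benefit = np.sum(a * qd - 0.5 * b * qd**2)  # area under demand curves
    production_cost = np.sum(c * qs + 0.5 * d * qs**2)   # area under supply curves
    return -(consumer_benefit - production_cost - t * (x12 + x21))

res = minimize(neg_welfare, x0=[1.0, 1.0, 0.0, 0.0], bounds=[(0, None)] * 4)
qs1, qs2, x12, x21 = res.x
```

With these illustrative parameters, the optimizer ships 2 units from the low-cost region (2) to the high-demand region (1), leaving regional prices of 5 and 4, a gap exactly equal to the transport cost.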
The model also solves for regional wholesale-level price, supply, demand, and trade flows for the following dairy products: (1) fluid milk, (2) soft dairy products, (3) American cheese, (4) Italian cheese, (5) other cheese, (6) butter, (7) frozen dairy products, (8) other manufactured dairy products (a residual product category), and (9) nonfat dry milk. The model uses a fixed component composition that converts farm milk into fluid milk and different types of manufactured dairy commodities, as shown in table 9. The regional supply of milk components (milk fat, protein, and carbohydrates) must be greater than or equal to the regional utilization of these milk components by the processing sector to ensure regional component supply/demand balance. The marginal value of this restriction (given by the corresponding Lagrange multiplier) measures the shadow value of each milk component in each region. The model subsequently generates empirical estimates of regional shadow prices for each milk component. The model also generates market prices that are consistent with milk component pricing for each commodity in each region. Modeling Classified Pricing The basic structure of the IRCM99 is consistent with a competitive market equilibrium, where, at the optimum, the market price equals the marginal cost of each commodity. However, USDA and California use a system of classified pricing for milk that influences pricing in ways that differ from the competitive outcome. Therefore, to incorporate the classified pricing system, price wedges are used in the model to represent the difference between the minimum price of milk in a particular class and the minimum price of milk in a reference class. In the model, the reference class for USDA’s milk marketing orders is Class III, and the reference class for California is 4b. 
For example, the price wedge for raw milk used for fluid milk is the difference between the price for milk used for Class I (fluid) and the price for milk used for Class III (cheese), plus potential over-order premiums. For USDA’s milk marketing orders in 1999, we calculated the price of raw milk used for manufacturing (Classes II, III, and IV) as the implicit price of milk used for Class II and Class III in the Upper Midwest Milk Marketing Order (portions of Illinois, Iowa, Michigan, Minnesota, North Dakota, South Dakota, and Wisconsin). USDA milk marketing order prices for nonfat dry milk and butterfat differentials are computed by using USDA formulas and wholesale commodity prices. California milk prices are calculated for milk used in fluid products (Class 1); milk used for heavy cream, cottage cheese, yogurt, and sterilized products (Class 2); milk used in ice cream and other frozen dairy products (Class 3); milk used in butter and nonfat dry milk (Class 4a); and milk used in cheese other than cottage cheese (Class 4b). We computed California price wedges by using administered formulas for the fat and solids-not-fat component prices by class. These component prices are computed from wholesale commodity prices for butter, nonfat dry milk, and cheese that are endogenous to the model. Because Class 4a prices were lower than Class 4b prices in 1999, this method implies a negative price wedge in California for Class 4a. Finally, because dairy farmers are paid blend prices based on USDA and California’s classified pricing systems, we incorporated price wedges that represent these blend prices in each of the respective modeled regions. Modeling Producer Settlement Pools We took into account pooling regulations for both USDA and the NEDC in modeling “producer settlement pools.” The first step in this calculation is to compute total revenues from regional milk production under the assumption that all raw milk is pooled in the region where it is produced. 
This is done in each region by multiplying the price wedge for each commodity by the quantity of raw milk used to produce that commodity. However, by shipping some of their milk to another order, producers can sometimes become eligible to receive the minimum prices in the destination order for their milk, which might be higher than the minimum prices in their "home" order. As a result, in estimating how the proceeds from raw milk sales are distributed to producers through producer settlement pools, the model can adjust its initial estimate to take into account that not all raw milk is pooled in the order where it is produced. When we estimated the impacts of the compact scenarios, no such adjustments were necessary because the model's solutions did not yield any pooling of raw milk outside any home order. As a result, before taking into account the further adjustments described below, the revenues received from milk sales by farmers in any order were the same as they would have been had no adjustment been made and all producers' receipts from milk sales had been based on the prices in their home order. We recognize that some milk is pooled outside the order in which it is produced. However, we did not have data on shipments of raw milk by individual farmers or shipments of fluid milk or other dairy commodities by individual processing plants, which limited the model's ability to estimate raw milk movements across orders. Instead, the model solution includes substantial movement of packaged fluid milk to balance supply and demand across orders. If enough packaged fluid milk moves out of an order, the model's structure allows for further adjustments to be made in the producer settlement pools, but no region was affected in this way. However, the producer settlement pools were adjusted for the movement of packaged fluid milk into compact regions under the various scenarios that we analyzed.
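A minimal sketch of this pool computation, using hypothetical wedges, quantities, and premium values (the report does not publish these inputs), might look like the following:

```python
# Step 1: pool revenue assuming all raw milk is pooled where produced:
# the price wedge for each commodity times the raw milk used to produce it.
# All figures below are hypothetical, for illustration only.
wedges = {"fluid": 2.50, "cheese": 0.00, "butter": 0.40}   # $/cwt over the reference class
raw_milk_use = {"fluid": 900_000, "cheese": 600_000, "butter": 200_000}  # cwt

pool_revenue = sum(wedges[c] * raw_milk_use[c] for c in wedges)

# Step 2: adjustment for packaged fluid milk shipped INTO a compact region.
# The shipping processor pays the compact premium on that milk and can return
# to its producers the premium times the compact region's Class I share.
compact_premium = 1.50   # $/cwt, hypothetical
imported_fluid = 50_000  # cwt of packaged fluid milk shipped into the region
class1_share = 0.45      # share of the compact region's raw milk sold for Class I use

returned_to_exporting_producers = compact_premium * imported_fluid * class1_share
```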
These adjustments were made because processors in exporting regions shipping packaged fluid milk into compact regions are required to pay a compact premium on this milk. In turn, these processors can return to their producers an amount equal to the compact premium multiplied by the percentage of the compact region's raw milk that is sold for Class I use. Data Used in the Model We obtained data for our analysis from several sources, including USDA, the state of California, academia, and research institutes. We obtained most of the price and production data from USDA and the California Department of Food and Agriculture. In particular, we obtained farm-level milk data from USDA’s Milk Production, Disposition and Income—1999 Summary, and commodity production data from USDA’s Dairy Products—1999 Summary. We obtained commodity price, stock, import, export, and government utilization data from monthly USDA Economic Research Service “Livestock, Dairy, and Poultry” reports. We obtained federal milk marketing order data from USDA’s Agricultural Marketing Service Federal Market Order Milk Statistics—1999 Annual Summary. In addition, we obtained data on price support levels from USDA’s Commodity Credit Corporation. We obtained regional projections of wholesale dairy product demand by using aggregate wholesale demand functions for the United States and regional population data. We obtained component yields—the amount of milk fat, protein, and carbohydrates per unit of milk and wholesale dairy product—from a component accounting exercise that fully allocates 1999 aggregate milk and dairy product production. We obtained regional farm-level milk supply elasticities from the Food and Agricultural Policy Research Institute and product demand elasticities from research conducted by Cox et al. and USDA’s Economic Research Service. These demand elasticities were estimated using USDA aggregate national time series data on commercial disappearance and wholesale prices.
The estimates are consistent with a complete demand system specification covering all major food groups and products. We obtained data on refrigerated and nonrefrigerated transportation and assembly costs for farm milk from dairy researchers at Cornell University. We used USDA estimates of dairy manufacturing costs to incorporate processing costs into our analysis. 1999 Baseline Calibration We calibrated the model to yield solutions that are close to the observed 1999 data for farm-level and wholesale-level measures of prices and production and to link key regional prices to commodity reference prices used by USDA and California, such as those reported by the Chicago Mercantile Exchange and USDA’s National Agricultural Statistics Service. Tables 10 through 13 compare the 1999 model solutions with actual 1999 data for farm-level prices, farm-level production, wholesale commodity prices, and wholesale commodity production, respectively. At the farm level, the simulated values calibrate closely with the actual data—all discrepancies are 0.5 percent or less, except for the farm-level price in California, for which the discrepancy is about 1 percent. At the wholesale level, the price discrepancies for major products (fluid milk, American and Italian cheeses, butter, and nonfat dry milk) are less than about 3 percent, while production discrepancies are 3.5 percent or less. Because our baseline was calibrated to actual 1999 data, our original baseline values represented estimates for a time when the Compact was in place. As a result, we used the IRCM99 to estimate the impact of the NEDC by simulating the year 1999 without the NEDC and comparing those results with those obtained in our original baseline. That is, the estimated impacts of removing the Compact from our original baseline are interpreted as the estimated impacts of adding the Compact to a no-compact baseline, with the signs reversed.
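Both the calibration check and the sign-reversal interpretation described above reduce to simple arithmetic; the figures below are hypothetical stand-ins:

```python
# Percent discrepancy between a model solution and observed 1999 data
def pct_discrepancy(simulated, actual):
    return 100.0 * (simulated - actual) / actual

# Hypothetical calibration check for one farm-level price ($/cwt)
farm_price_check = pct_discrepancy(14.16, 14.09)   # well under 1 percent

# Sign reversal: the impact of ADDING the Compact to a no-compact baseline
# is the negated impact of REMOVING it from the with-compact baseline.
impact_of_removal = {"farm_price_change": -0.15}   # hypothetical $/cwt
impact_of_compact = {k: -v for k, v in impact_of_removal.items()}
```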
For each subsequent compact scenario, we used the IRCM99 to estimate the interregional impact of compacts. That is, we compared our estimates of farm-level and wholesale-level prices, production, and revenue in 1999 under each compact scenario with our estimates of what they would have been in that year without any compact. In effect, the no-compact scenario became our new baseline for comparison. Compact Scenarios We developed three compact scenarios: the NEDC, which includes Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont; an expanded NEDC that also includes Delaware, Maryland, New Jersey, New York, and Pennsylvania; and an expanded NEDC in conjunction with a southern compact that includes Alabama, Arkansas, Georgia, Kansas, Kentucky, Louisiana, Missouri, Mississippi, North Carolina, South Carolina, Tennessee, and Virginia. These states were included in the expanded NEDC and the southern compact on the basis that they had enacted legislation as of the end of February 2001 that would allow them to enter into a dairy compact, should the Congress enact legislation allowing them to do so. While West Virginia had also enacted such legislation as of the end of February 2001, we did not include it in our analysis of an expanded NEDC in conjunction with a southern compact because of the difficulty of modeling West Virginia as part of a southern compact, given that it would have been the only state in USDA’s Mideast Marketing Order whose milk would fall under compact regulation. Since West Virginia produced less than 0.2 percent of the nation’s milk supply in 2000, we do not believe that its omission significantly affects our estimates of the impact of the southern compact. The IRCM99, as discussed previously, models different regions of the country based, in part, on USDA's milk marketing orders and California. Because of this, the states in the NEDC are incorporated as part of the IRCM99 Northeast region. 
This region encompasses the following states: Connecticut, Delaware, Maine, Massachusetts, Maryland, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont (and the District of Columbia). To account for the NEDC, we calculated the amount of milk produced by the six NEDC states as a percentage of the total amount of milk produced within the Northeast Milk Marketing Order. Therefore, while we are able to estimate the impact of the NEDC on noncompact regions of the country that do not contain any compact states, we could not estimate the NEDC's impacts on the noncompact states within the Northeast region. Further, because we defined the impacts of the NEDC scenario to be based on the amount of milk produced by the six NEDC states, the impacts of this scenario do not account for milk that noncompact states, such as New York, may ship into the NEDC states. Reductions in revenue for producers in these noncompact states are also not accounted for. Thus, our estimate of the NEDC's impact on noncompact regions is a little smaller than would be expected had we been able to isolate the effects of noncompact and compact states within the same region. Similarly, in our scenario of an expanded NEDC in conjunction with a southern compact, the IRCM99 has the compact region extending into four regions: the Appalachian, Central, Northeast, and Southeast. While this compact scenario fully includes the majority or all of the states encompassed by three of the four regions, one region—the Central—is not fully encompassed. The Central region includes the following seven states: Colorado, Illinois, Kansas, Missouri, Nebraska, Oklahoma, and South Dakota. However, only two of these states—Kansas and Missouri—are assumed to be part of a compact. As with our modeling of the NEDC, we calculated the amount of milk produced by these two states as a percentage of the total amount of milk produced within the Central region.
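The pro-rating described above, in both the Northeast and Central cases, reduces to a simple share computation; the production figures below are assumed for illustration only:

```python
# Hypothetical 1999 milk marketings, in million pounds, for the two
# compact states modeled within the Central region (figures are assumed).
state_production = {"Kansas": 1_400, "Missouri": 2_200}
central_region_total = 16_000  # assumed total for the Central region

# Share of the Central region's milk attributed to the compact states
compact_share = sum(state_production.values()) / central_region_total
```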
Therefore, while we are able to estimate the impact of the expanded NEDC in conjunction with a southern compact on noncompact regions of the country that do not contain any compact states, we could not estimate this compact scenario's impact on the Central region. Similarly, under this compact scenario, we do not take into account the amount of milk that may be shipped into the compact states by bordering noncompact states in the Central region. Further, reductions in producer revenue in these noncompact states were also not accounted for. Thus, our estimate of the expanded NEDC in conjunction with a southern compact's impact on all noncompact regions combined is a little smaller than would be expected had we been able to isolate the effects of noncompact and compact states within the Central region. Parameter Values for Our Baseline and Initial Estimates We used specific parameter values to arrive at our baseline, or no-compact, scenario and initial estimates of the impacts of the different compact scenarios. We also used key assumptions regarding the amount of market-driven over-order premiums, compact producer price payment levels, transportation costs, and the ability of milk to move from region to region. Specifically, we used a milk price support payment amount, or price floor, of $9.90 per hundredweight of milk, which equates to $1.10 per pound for American cheese, 65 cents per pound for butter, and $1.01 per pound for nonfat dry milk. In addition, we assumed medium-term, or 5-year, wholesale demand elasticities for fluid milk and other dairy products, as shown in table 14. We also assumed medium-term regional supply elasticities as shown in table 15. We also assumed that within a compact region, there was no market-driven over-order premium above the compact minimum price.
This initial assumption represents a lower bound; it assumes that compact over-order producer price payments may have replaced, or become a substitute for, much of the market-driven over-order premiums in the compact region. We also assumed that the Class I price in a compact region, or the compact price, was $16.94 per hundredweight of milk. To better model the effect of pooling and to account for milk shipments from region to region and from noncompact to compact regions, we assumed that no more than 40 percent of any one region’s packaged fluid milk could be shipped to another region without being pooled in the receiving region. We also assumed that a processor had to pay the compact over-order producer price into a compact pool for milk shipped into a compact region in order to receive a compact blend price for these fluid shipments. With these assumptions, we did not otherwise restrict milk flows from region to region or from noncompact to compact regions. We also used transportation costs as developed by researchers at Cornell University, with no adjustment. Sensitivity Analyses Following our initial analyses, we conducted additional analyses of the impacts of compacts by varying our initial parameter values or assumptions to determine how sensitive our initial estimates were to changes in key values or assumptions. Specifically, with respect to wholesale demand elasticities, we changed the medium-term elasticities used in our initial analyses to long-term, as shown in table 16, to determine what impact higher elasticities would have on our initial estimates. In a separate analysis, we changed our regional supply elasticities to long-term, as shown in table 17, to determine if changes in long-term supply elasticities would affect our initial estimates.
In a separate analysis, we changed our assumption regarding the lack of any market-driven over-order premium within compact regions and added a 50-cent-per-hundredweight over-order premium above the compact's minimum price to determine what impact this would have on our initial estimates. We then simultaneously used long-term supply and demand elasticities in conjunction with the 50-cent over-order premium in compact regions to determine what combined effect these three changes taken together would have on our initial estimates. In a separate analysis, we inflated transportation costs that we obtained from Cornell University by 25 percent within each of the IRCM99 regions to determine what effect increased transportation costs would have on our initial estimates. In addition, we conducted a separate sensitivity analysis specific to the expanded Northeast Compact in conjunction with a southern compact scenario. In this analysis, we varied the assumption regarding the Class I minimum price, or compact price, in the compact region. We increased the minimum price from $16.94 to $18.00 per hundredweight in the southern compact but retained the $16.94 minimum price in the expanded Northeast Compact. We conducted this analysis because data on cooperative pay prices in selected cities in USDA’s Appalachia, Southeast, and Central milk marketing orders were about a dollar higher in 1999 than in the Northeast Milk Marketing Order. We also analyzed how sensitive the model was to trade limitations across regions. Our initial estimates and previous sensitivity analyses assumed that milk flowed relatively freely between noncompact and compact regions. We revised this assumption by limiting the amount of milk that could flow into a compact region from a noncompact region to that amount of milk produced within 100 miles of a compact region’s border. 
Although we recognize that there are no regulations establishing such a limit, we performed this particular analysis to reflect the fact that the IRCM99 uses average transportation costs, on a regional basis, because of the lack of specific data on the location of processing plants for 1999. This limitation may lead to a model solution with unrealistically high interregional milk trade. We then compared the estimated impacts of the different compact scenarios with the impacts of the no-compact scenario using this revised trade assumption. We also conducted the same sensitivity analyses discussed above to determine how sensitive these new estimates were to changes in key values and assumptions. We also conducted an analysis of how sensitive our initial estimates were to the magnitude of the difference between USDA and compact prices. We derived our initial estimates and our sensitivity analyses using 1999 pricing data because 1999 was the most recent year for which complete data were available. During 1999, the national average blend price of milk, or the average weighted minimum farm-level price, was $14.09 per hundredweight, which was higher than the prices observed in some other years, including 2000. For example, the national average blend price of milk in 2000 was $12.11 per hundredweight. To determine how sensitive our 1999 estimates were to this difference, we used preliminary milk pricing data for 2000. This required a new baseline for assessing the magnitude of change that compacts made, given the relative magnitude of change in compact and USDA prices. We did not, however, conduct a full range of sensitivity analyses regarding this change in assumptions because of the preliminary nature of the 2000 pricing data. Further, the 2000 pricing data reflected regulatory reform measures implemented by USDA in January 2000, which posed potential data reliability concerns.
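The sensitivity runs described in this section form a small grid over alternative assumptions; the sketch below shows how such a grid can be enumerated, with the `simulate` stub standing in for a full IRCM99 policy solve:

```python
from itertools import product

# Alternative assumption sets varied in the sensitivity analyses
elasticity_horizons = ["medium", "long"]        # 5-year vs. long-term elasticities
compact_over_order_premiums = [0.00, 0.50]      # $/cwt above the compact minimum
transport_cost_multipliers = [1.00, 1.25]       # 1.25 = transport costs raised 25%

def simulate(horizon, premium, markup):
    # Stand-in for a full IRCM99 policy simulation; records the run only.
    return {"horizon": horizon, "premium": premium, "markup": markup}

runs = [simulate(h, p, m)
        for h, p, m in product(elasticity_horizons,
                               compact_over_order_premiums,
                               transport_cost_multipliers)]
# Each run would be compared against the corresponding no-compact baseline.
```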
We calibrated a second model, the IRCM00, using dairy sector data for 2000 to simulate the impact of compacts during that year. The method used to develop the IRCM00 was similar to the method used to develop the IRCM99, with two exceptions: We used data on 2000 regional farm production and milk prices; aggregate commodity supply/demand balance (commodity production, imports, exports, and stocks) and prices; and component balance (using 2000 disaggregate commodity production data) to develop the base model. We incorporated revised January 2000 USDA milk marketing order pricing regulations into the model. These revisions include a methodology for computing (1) classified prices based on USDA’s “Final Rule” multiple component pricing, (2) Class I prices based on the higher of the Class III or Class IV multiple component price, and (3) minimum Class II prices based on adding a 70-cent-per-hundredweight premium to Class IV prices. As in the IRCM99, all classified pricing commodity wedges are computed relative to Class III prices, so that if Class IV prices are higher than Class III prices, additional classified pricing premiums are added to Class I, II, and IV prices. Tables 18 and 19 summarize the additional classified pricing premiums for USDA and California. USDA’s revised pricing regulations result in larger Class I and II premiums over Class III milk than occur under California’s pricing regulations, as shown in table 20. USDA Class I and II premiums are also higher in 2000 than in 1999 because of lower Class III prices in 2000 relative to 1999 and the impact that Class IV prices had on Class I and II prices. Also, in modeling the impacts of compacts in 2000, we computed commodity premiums in producer price settlement pools on the basis of fat and skim-not-fat commodity components, using revised classified fat and skim-not-fat prices. IRCM Limitations As with any modeling exercise, the IRCM has certain limitations.
These limitations include the following: (1) the model cannot identify individual shipments of raw milk from one milk marketing order to another because we do not have data on shipments of raw milk and packaged fluid milk by individual farmers and processors, respectively; (2) the model is static and does not take into account dynamic adjustments; (3) the model ignores some institutional and/or historical rigidities and capacity constraints (other than the milk marketing order and component flow constraints); (4) the model assumes the absence of farmer, processor, and/or retailer market power; and (5) the model treats exports and imports as exogenous. Appendix III: NEDC’s Intraregional Impacts The intraregional impacts of the NEDC on (1) retail milk prices, (2) milk producer income, (3) dairy farm structure, (4) milk production, and (5) milk consumption are difficult to determine. Data indicate that retail milk prices increased when the NEDC’s alternative minimum pricing requirement took effect in July 1997, and prices continue to remain relatively high compared with retail milk prices in the rest of the country. However, because many factors affect retail milk prices, we were unable to determine what portion of the retail price increases in the NEDC states was due to the NEDC as opposed to other factors. With regard to milk producer income, when the NEDC price has been higher than the Class I price, dairy farmers have received payments that reflect the NEDC price—that is, the Class I price plus the difference between the Class I price and the NEDC price. (This difference is called an over-order payment.) However, it is likely that farmers would have received some portion of the difference even without the Compact in the form of market-driven over-order premiums.
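The over-order payment described above is simply the excess, if any, of the NEDC price over the federal order Class I price; a minimal sketch, using the July 1997 per-gallon prices that this appendix cites:

```python
def producer_payment(class1_price, nedc_price):
    # Farmers receive the Class I price plus the over-order payment,
    # i.e., the excess (if any) of the NEDC price over the Class I price.
    over_order = max(nedc_price - class1_price, 0.0)
    return class1_price + over_order

# July 1997 per-gallon prices: federal order Class I $1.20, NEDC $1.46
payment = producer_payment(1.20, 1.46)   # reflects the NEDC price of $1.46
```

When the Class I price exceeds the NEDC price, the over-order payment is zero and farmers simply receive the Class I price.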
Data on the structure of dairy farms, milk production, and fluid milk consumption in the six NEDC states and data for the rest of the United States show similar trends, suggesting little or no change in the NEDC states following the Compact’s establishment. Retail Milk Prices Increased in the NEDC States, but It Is Difficult to Determine the Amount Attributable to the Compact Retail milk prices increased in the NEDC states in July 1997, when the Compact’s alternative minimum price for raw milk to be used for and sold as fluid milk in the six states took effect, but not in the rest of the country. Data indicate that since July 1997, retail prices in Boston and other selected cities and towns in New England have remained relatively high, compared with prices in other major cities in the country. It is difficult to determine, however, what portion of the retail price increase can be attributed to the NEDC’s alternative minimum price as opposed to other factors that affect retail milk prices. Economic analyses of the NEDC’s impact on retail milk prices have concluded that retail milk prices increased following the Compact’s establishment, but the estimated amount attributable to the Compact’s higher alternative minimum price varies among the different studies. Retail Milk Prices Increased in July 1997 in New England but Not in the Rest of the Country The retail price of milk in the New England states increased sharply during July 1997, when the NEDC began setting minimum prices that processors must pay for raw milk used for and sold as fluid milk—milk used for drinking purposes—in the NEDC states, but comparable increases did not occur in most other locations in the United States. In that month, the NEDC price—$1.46 per gallon—was 26 cents higher than USDA’s milk marketing order Class I price of $1.20 per gallon. 
This 26-cent-per-gallon (or 22 percent) price increase appears to have been passed on to consumers in retail fluid milk prices during the same month. Table 21 shows the increase in retail prices according to data collected by the departments of agriculture in Connecticut, Maine, and New Hampshire; the International Association of Milk Control Agencies; and A.C. Nielsen. Data from the International Association of Milk Control Agencies indicate that between June and July 1997, the average retail price of a gallon of milk increased by an average of about 7 percent in nine cities in Maine, Massachusetts, and Vermont. A.C. Nielsen data for Boston indicate that retail milk prices increased by an average of about 8 percent during the same period. Our review of data for the rest of the United States indicates few increases in the retail price of milk in July 1997 that were comparable to what occurred in the New England states. For example, data from the International Association of Milk Control Agencies for 42 cities and regions outside the NEDC states indicate that only two of those cities or regions experienced retail price increases comparable to or larger than those observed in selected cities in Maine, Massachusetts, and Vermont in July 1997: Eastern Virginia had a 31-cent increase and Reno, Nevada, had a 10-cent increase between June and July 1997. Moreover, data from A.C. Nielsen for 13 major cities outside the NEDC states show an increase in retail milk prices in only one city between June and July 1997: in Seattle, the price of milk rose from $2.91 to $3.01 a gallon. Elsewhere, retail milk prices declined: in Cincinnati, for example, from $1.43 to $1.35 per gallon between June and July 1997, and in Washington, D.C., from $2.47 to $2.44 per gallon.
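The 22 percent figure follows directly from the two per-gallon prices; a quick check, applying the same percentage-change computation to the retail declines cited above:

```python
def pct_change(old, new):
    return 100.0 * (new - old) / old

# Wholesale: NEDC minimum vs. federal order Class I price, July 1997 ($/gal)
wholesale_jump = pct_change(1.20, 1.46)   # about 22 percent

# Retail declines cited for the same period ($/gal, A.C. Nielsen data)
cincinnati = pct_change(1.43, 1.35)       # negative: price fell
washington_dc = pct_change(2.47, 2.44)    # negative: price fell
```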
From July 1997 to September 2000, Retail Milk Prices in the Rest of the Country Have Increased Less Than in the NEDC In the longer term, between July 1997 and September 2000, retail milk prices in most of the United States did not increase as much as they did in the NEDC states, according to A.C. Nielsen data, as shown in figure 2. The average retail milk price in Boston increased by 18 percent between June 1997 and September 2000, from $2.40 to $2.84 per gallon. The national average retail price of milk increased by 8 percent during the same period, from $2.49 to $2.68 per gallon. A.C. Nielsen data for 13 major cities outside the NEDC indicate that retail milk prices increased between June 1997 and September 2000 in 10 of those cities as well. For example, retail prices increased by 3 percent in Phoenix, or from $2.21 to $2.27 per gallon; by 12 percent in Seattle, or from $2.91 to $3.26 per gallon; by 17 percent in New Orleans, from $2.58 to $3.03 per gallon. Data for one city—Cincinnati—show a greater price increase than in Boston: prices increased by 80 percent, from $1.43 to $2.57 per gallon, between June 1997 and September 2000. However, data for Cincinnati indicate that between March 1997 and May 1998, retail milk prices were considerably lower than those in other major cities. In contrast to retail milk prices in cities such as Boston and Cincinnati, prices fell in Dallas, Denver, and San Diego. Data from the International Association of Milk Control Agencies also indicate that retail milk prices increased from June 1997 to November 2000 in the NEDC states and the rest of the country. For example, retail prices increased by 17 percent, from $2.59 to $3.04 in Burlington, Vermont; and by 10 percent, from $2.18 to $2.39 in Augusta, Maine, from June 1997 to November 2000. In the rest of the country, of the 29 cities or regions for which the Association had June 1997 and November 2000 data, retail milk prices increased in 21 of them. 
For example, retail milk prices increased by 4 percent, from $2.76 to $2.88 per gallon in Philadelphia, Pennsylvania; and by 20 percent, from $2.55 to $3.05, in Salem, Oregon. Even though it is likely that the Compact caused some portion of the retail milk price increase in the NEDC states, it is difficult to determine the size of that portion. In part, this is because retail milk prices vary considerably in relation to the minimum farm-level price for raw milk. For example, after retail prices increased by about 20 cents per gallon in July 1997, they fell by about 5 to 7 cents per gallon for a period of several months. In addition, even though the NEDC price remained stable at $1.46 per gallon between July 1997 and August 1998, data on retail milk prices for Boston indicate that retail prices fluctuated during that period from a high of $2.60 to a low of $2.53 per gallon. Figure 3 shows the relationship between the Class I or NEDC price and the average retail price of milk sold in Boston. Between July 1997 and September 2000, the retail price in Boston varied between $2.53 and $2.90 per gallon. In September 1998, when the NEDC price increased by 9 cents, from $1.46 to $1.55 per gallon, the retail price increased by only 1 cent, from $2.55 to $2.56. As figure 3 shows, retail prices increased in early calendar year 1999 and again in late calendar year 1999. In November 1999, the retail price of milk reached a high of $2.90, when the NEDC price was $1.68 per gallon. One of the reasons that there is not a close relationship between the NEDC price for milk and the retail price is that many factors affect the retail price of milk, including wholesalers’ costs, state regulations, consumer demand, and retailers’ pricing strategies. More specifically, the retail price of milk is affected by wholesalers’ costs of acquiring and processing raw milk and packaging and distributing processed fluid milk to retail outlets.
The retail price can also be affected by state regulations that, for example, dictate how and where milk can be distributed. Another factor is consumers’ shrinking demand for milk products, as opposed to other beverages. This shrinking demand has created a need to advertise and improve products, which has increased retail costs. Finally, retail milk prices are affected by retail pricing strategies involving such factors as retail costs, competitor pricing, if and how milk prices are used to attract customers, shopping convenience, the image a store may want to project regarding quality or low prices, and the extent to which retailers exercise market power.

Other Studies Estimate That the NEDC Has Increased Retail Prices

Four studies have estimated the Compact’s impact on retail milk prices, and each has concluded that the NEDC has resulted in increased retail prices. Each study provides a different estimate of the amount that the NEDC has caused retail prices to increase, however, largely because of the different methodologies used and the time frames analyzed. The Office of Management and Budget (OMB) issued a study in 1998 that analyzed retail milk price data for the first 6 months that the NEDC was in effect. OMB estimated that the Compact could have had a small impact (an increase of 5 to 10 cents per gallon), a medium impact (an increase of 10 to 15 cents per gallon), or a large impact (an increase of 15 to 20 cents per gallon) on retail prices, depending on the extent to which costs were passed on from the farm to the retail level, and the extent to which wholesalers and retailers absorbed or passed on any increased costs. However, OMB cautioned that its study was completed too soon after the Compact began operating in July 1997 to determine its economic impacts and implications with confidence or precision. OMB further cautioned that U.S.
dairy industry economics are complex, and that producer, wholesale, and retail prices are affected by numerous proprietary, regional, and national factors. OMB concluded that retail price patterns have fluctuated in recent years and provide no definitive indication of the retail price levels that would have occurred without the Compact. In 2000, economists (Lass et al.) at the University of Massachusetts issued a study that analyzed the NEDC’s impact on retail milk prices during the first year of the Compact. For this study, which was conducted for the NEDC, Lass et al. used data from January 1982 through June 1996 to develop a model of farm-to-retail price behavior in two markets: Boston and Hartford. They then used data from an 18-month period—July 1996 through December 1997—to predict what the retail price for milk would have been without the Compact and to compare those predicted effects with the effects that actually occurred in New England. Lass et al. concluded that the NEDC caused an average retail milk price increase of about 7 cents per gallon in the Boston market and about 6 cents per gallon in the Hartford market. They also concluded that because this estimated retail price increase was less than the NEDC increase in costs to fluid milk processors (the over-order amount due to the NEDC), an amount less than the NEDC over-order amount was being passed on to consumers. The authors cautioned, however, that the model did not capture changes that may have occurred in the farm-to-retail price relationship from July through December 1997. A third study, issued in 2000 by an economist (Bailey) at Pennsylvania State University, analyzed the farm-to-retail markup for fluid milk over the period January 1996 through December 1999. 
Using a simple but direct markup model to evaluate the impact of the NEDC on retail fluid milk prices in Boston and Hartford, Bailey concluded that the retail price of milk increased after the Compact established its minimum price in July 1997. Specifically, Bailey concluded that the retail price of milk rose 24 cents per gallon from July 1997 to December 1999 over the average price in effect from January 1996 to June 1997. According to Bailey, the majority of this 24-cent per gallon increase—17 cents per gallon—was attributable to the Compact, while the rest was attributable to other factors. Finally, studies issued in 2001 by economists (Cotterill and Franklin) at the University of Connecticut examined specific factors that may have increased retail milk prices in four New England marketing areas—Boston, Providence, Hartford/Springfield, and Northern New England—following the Compact’s establishment in July 1997. Cotterill and Franklin compared retail price data from February 1996 through early July 1997 with retail price data from July 1997 through August 1998. The authors attributed retail price increases to four factors: (1) the increased farm price of milk caused by the NEDC; (2) the increased farm price of milk caused by strong raw milk markets when the farm price spiked above the NEDC price; (3) nonmilk costs, such as increased processing costs other than costs for purchasing raw milk and increased distribution costs; and (4) changes in pricing at the wholesale and retail levels. Using this methodology, Cotterill and Franklin concluded that of the 29-cent increase in the retail price of a gallon of milk, on average, 2.7 cents per gallon were caused by the NEDC and 6.5 cents per gallon were caused by strong milk markets.
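The Bailey and Cotterill-Franklin results can be cross-checked with a small helper that splits a total retail price increase into the reported components plus a residual for everything else. A sketch only: the component figures are as reported in the two studies, but the `decompose` helper is ours, not either study's model.

```python
def decompose(total_increase, components):
    """Split a total retail price increase (cents per gallon) into
    named components plus a residual attributed to other factors."""
    residual = total_increase - sum(components.values())
    return {**components, "other factors": round(residual, 1)}

# Bailey: 24-cent increase, 17 cents attributed to the Compact
bailey = decompose(24, {"Compact": 17})
# Cotterill and Franklin: 29-cent increase, split across two causes
cotterill = decompose(29, {"NEDC": 2.7, "strong milk markets": 6.5})
```

For Bailey the residual is the 7 cents he attributed to other factors; for Cotterill and Franklin it is the roughly 20 cents they assigned to nonmilk costs and wholesale/retail pricing changes.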
Limited Data Are Available to Estimate the Potential Impacts of the Compact on Producer Income

While it is likely that the NEDC has stabilized producer income, it is difficult to determine how large an impact it has had on producer income because of uncertainty about what dairy farmers in the six NEDC states would have been paid for their milk in the absence of the Compact. This uncertainty arises because NEDC payments made to dairy farmers may be, in part, substitutes for market-driven over-order premium payments. Even so, the NEDC commission has concluded that the Compact has had a positive effect on the financial status of dairy farmers in the six NEDC states and New York. USDA data on dairy farm income do not clearly indicate whether the improved financial status of an average dairy farmer in the NEDC states resulted from the NEDC. Finally, while two studies of the NEDC’s impact on farm income concluded that the NEDC could have a positive impact on farm income, they provide no definitive estimate of the size of this impact.

NEDC Payments May Be, in Part, Substitutes for Market-Driven Premium Payments

We cannot determine how much impact the NEDC has had on dairy farmer income because we do not know what dairy farmers’ incomes would have been from July 1997 to the present in the absence of the NEDC. While farmers would have received at least the minimum federal marketing order or state prices, in some cases it is likely that they would have received an amount greater than these prices even in the absence of the NEDC. According to USDA officials, dairy farmers in the NEDC states had received some market-driven over-order premiums prior to the NEDC’s establishment. For example, in 1996, the over-order Class I premium in the New England order averaged 76 cents per hundredweight.
Research conducted by economists at Cornell University indicates that after the Compact began, producers receiving NEDC payments received no over-order premiums above the amount needed to compensate them for cooperative services. In one survey conducted in August 2000, dairy farmers in New York who received NEDC payments were receiving no market-driven over-order premiums, while producers not shipping milk into the NEDC states received about 60 cents per hundredweight in market-driven over-order premiums. USDA officials concurred with this assessment. According to one USDA official, over-order charges decreased about 50 cents per hundredweight after the NEDC began setting its price for Class I milk, and were at levels indicative of the costs of services provided by cooperatives for handlers. We estimate, however, that dairy farmers supplying raw milk for use as fluid milk sold in the NEDC states have received additional revenue as a result of the Compact. We estimate that an average licensed dairy farmer located in one of the six NEDC states received annual payments of between $3,892 and $15,301 since the NEDC regulations took effect in July 1997 through December 2000, as shown in table 22. As table 22 shows, the dairy farm payment resulting from the difference between the USDA Class I price and the NEDC price (the over-order producer price payment) increased significantly in 2000, when the USDA Class I price was low—$14.80 per hundredweight, relative to the NEDC price of $16.94. Although we are unable to determine how much of an impact the NEDC has had on dairy farmer income in the six NEDC states, dairy farmers supplying milk and receiving over-order producer price payments have likely benefited from the NEDC.
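The over-order producer price payment described above is, at bottom, the gap between the NEDC price and the USDA Class I price applied to a farm's fluid milk marketings. A hedged sketch: the 2000 prices are those cited in the text, but the `over_order_payment` function and the 5,000-hundredweight annual volume are hypothetical illustrations, not figures from table 22.

```python
def over_order_payment(nedc_price, class1_price, cwt_fluid):
    """Per-farm payment: the per-hundredweight gap between the NEDC
    price and the USDA minimum Class I price, times the hundredweight
    of milk used as fluid milk. No payment accrues when the Class I
    price is at or above the NEDC price."""
    return max(nedc_price - class1_price, 0.0) * cwt_fluid

# 2000 prices cited in the text ($2.14/cwt gap); the volume is hypothetical
payment = over_order_payment(16.94, 14.80, 5_000)
```

The `max(..., 0.0)` guard reflects why the payment grew in 2000: the payment exists only in months when the federal minimum price falls below the $16.94 Compact price.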
At a minimum, the NEDC has had a stabilizing impact on the prices paid to farmers for milk, irrespective of the amount of additional income it may have generated, because the $16.94 NEDC price per hundredweight has protected dairy farmer income when the minimum federal marketing order price has fallen below the NEDC price. This occurred in 35 of the 46 months between July 1997 and March 2001.

NEDC Data Indicate a Positive Impact on Net Farm Income, but USDA Data Are Less Clear

NEDC commission data indicate that the difference between the Compact Class I price and the USDA minimum Class I price from July 1997 through June 2001, minus fees for administering the NEDC, totaled about $146 million. This amount was provided to 4,217 farms supplying the New England market, of which 1,300 are estimated to be located in New York. According to the NEDC, between July 1997 and June 2001, the NEDC resulted in annual payments of between $3,900 and $14,700 to these farms, depending on herd size. The average annual payment was $9,812 per farm. According to the NEDC commission, this additional income helped stabilize and enhance farm-level prices for farmers in the six NEDC states as well as New York, some of whom have historically been part of the New England milk shed. Similarly, NEDC data indicate that net farm earnings improved as a result of the Compact. For example, the NEDC commission estimated that in 1997, the Compact increased net farm earnings of those supplying the New England milk shed by about $6,800, from about $11,000 to about $17,800. The NEDC commission estimated that in 2000, the Compact increased net earnings by about $15,200, from about $8,100 to about $23,300.
The NEDC commission also estimated that in 2000, the percentage of dairy farms that experienced financial stress was 20 percentage points lower than it would have been in the absence of the Compact: About 50 percent of farms experienced some degree of financial stress, compared with 70 percent that would have experienced some degree of financial stress without the NEDC. These figures led the NEDC commission to conclude that the overall reduction in financial stress resulted in a significant reduction in the likely net loss of dairy operations in the Northeast. While USDA data indicate that the net farm income of an average dairy farmer in the northeastern region increased between 1991 and 1999, it increased at a slower rate than the net income of an average farmer in other regions, and net farm income in the northeastern region was lower than that of farmers in other regions during the same time period. Figure 4 shows that the net farm income of average dairy farmers in the northeastern region and other regions in the country has increased between 1991 and 1999, but that the trend increased at a lower rate in the northeastern region than in other regions in the country. Regarding the northeastern region, the data indicate that an average dairy farmer’s net farm income increased by 36 percent, from $34,064 in 1996 to $46,415 in 1999. In the rest of the United States, an average dairy farmer’s net farm income increased by 61 percent, from $45,650 in 1996 to $73,486 in 1999. It is difficult to determine, however, the specific impact of the NEDC, because many factors influence dairy farm income, and USDA’s northeastern region includes other states in addition to the six NEDC states. USDA data also indicate that between 1991 and 1999, the percentage of northeastern dairy farms having favorable solvency grew more than the percentage of dairy farms with favorable solvency in other regions of the country.
Figure 5 shows that the percentage of dairy farms having favorable solvency in other regions of the country remained relatively constant. About 61 percent of the dairy farms in the northeastern region had favorable solvency in 1991, in comparison with about 69 percent of the dairy farms in the rest of the U.S. regions. About 82 percent of the dairy farms in the northeastern region had favorable solvency in 1999, in comparison with about 73 percent of the dairy farms in the rest of the U.S. regions. Whether the NEDC caused the percentage of dairy farms in the northeastern region having favorable solvency to grow faster is difficult to determine because, as noted previously, the six NEDC states form only a portion of the northeastern region.

Two Studies Provide Limited Information on the NEDC’s Impact on Income

Two studies have analyzed the NEDC’s impact on dairy farmer income and concluded that the Compact has the potential to improve farmer finances. OMB’s 1998 analysis was limited by the fact that the Compact had been in effect for only about 6 months when OMB conducted its study. To estimate the NEDC’s impact on farm income, OMB developed two alternative scenarios of what milk prices would have been had the Compact not been established. Under the first scenario, OMB assumed that the Class I price would have averaged $15.92 per hundredweight in 1997; under the second scenario, OMB assumed that the Class I price would have been $16.10 per hundredweight, taking into account the decline in the market-driven over-order premium when the NEDC took effect. OMB then compared these prices with a minimum NEDC price of $16.94 per hundredweight. Using $16.94 as a basis for calculating a blend price, OMB estimated that in 1997, the NEDC generated an average increase in gross farm income of $5,650 under the first scenario and $4,770 under the second scenario.
Regarding the NEDC's overall impact on dairy farmers' income, OMB concluded that, if other factors affecting dairy farmers were held constant, higher milk prices would not be expected to greatly alter the long-term trend toward fewer, but larger and more efficient dairy operations in New England. In 1998, an economist (Wackernagel) at the University of Vermont issued a study on the potential impact of the dairy Compact that used computer models to simulate characteristics of Vermont dairy farms under different milk pricing policies. The models varied with respect to factors such as farm size, farm profitability, productivity growth rate, and milk prices. Although no one model specifically simulated the impact of the NEDC, Wackernagel concluded that, under his compact scenarios, stabilizing prices could increase farmer cash reserves and net worth, but he estimated that such gains would be more limited than those that would be achieved by having a higher farm-level price that varied from month to month. Wackernagel also concluded that gains associated with price stabilization would be limited unless a Compact’s policies recognized the impact of inflation as well as the variability and level of milk prices.

The NEDC’s Impacts on Farm Structure, Milk Production, and Milk Consumption Are Difficult to Determine

The Number of Farms Has Continued to Decline as Herd Size Has Increased

Data on the number of farms and herd size show similar trends in the NEDC and other states before and after the NEDC was established, suggesting little or no change in the NEDC states due to the Compact. Moreover, the data suggest that the trends are caused by other factors, such as major technological advancements in dairy farming. The number of dairy farms in the NEDC states and the rest of the United States has been decreasing, as measured by both the number of farms having at least one dairy cow and the number of licensed dairies.
In the NEDC states, the number of farms having at least one milk cow decreased by 33 percent between 1992 and 2000, from 5,050 to 3,370. In the rest of the United States, the number of farms having at least one milk cow decreased by 39 percent in that same period, from 166,510 to 101,880. Given that some of the farms with at least one milk cow may not produce milk for sale to dairy processors, we also looked at the number of farms licensed to sell milk to dairy processors. Between 1992 and 2000, the number of licensed dairy farms in the six NEDC states decreased by 32 percent, or from 4,079 to 2,772, while the number of licensed dairy farms in the rest of the country decreased by 37 percent during that same time period, or from 127,456 to 80,253 (see fig. 6). From 1992 to 1998, the annual percentage decrease in the number of licensed dairy farms was greater in the NEDC states than in the rest of the United States; beginning in 1998, the annual percentage decrease was greater in the rest of the United States. With respect to the change from 1997 to 2000, the number of licensed dairy farms decreased by 14 percent in the NEDC states, or from 3,237 to 2,772 farms; while the number of licensed dairy farms decreased by 17 percent in the rest of the country, or from 96,176 to 80,253 farms. It is not known whether this change in trends was caused by the NEDC or some other factors. Between 1992 and 2000, the average herd size in the NEDC states increased by 36 percent, from 58 to 79 milk cows, while in the rest of the United States, average herd size increased by 57 percent, from 56 to 88 milk cows. As shown in figure 7, although average herd size has increased in the NEDC states, this increase consistently lagged behind the increases in the rest of the United States.
One of the reasons for the decline in the number of farms is that over the past 50 years, technological developments have significantly altered both dairy farming itself and the way farm products are processed and distributed. Farming has changed from an operation that was historically dependent on human and animal labor to one in which most operations are mechanized. As a result, at every level, economies of scale (the lower cost of large-scale versus small-scale operations) have led to fewer and larger farms. The number of farms in general, and dairy farms in particular, has been shrinking since the Depression. While dairy farms with 100 cows were considered large in 1950, today dairy farms as large as 1,500 to 3,000 cows are emerging in the western, northwestern, and midwestern regions of the country. This trend, along with the pressure to convert land used for agriculture into land used for nonagricultural purposes, will likely result in a continued increase in the size of farms with a commensurate decline in the total number of farms in the Northeast.

Production Has Continued to Increase

Milk production increased in the six NEDC states and the rest of the country both before and after the NEDC was established. This trend toward greater milk production has occurred at the same time as the total number of milk cows has declined, reflecting a greater amount of milk produced per cow. These trends make it difficult to determine what impact, if any, the Compact has had on milk production. Two studies have analyzed the NEDC’s impact on milk production and concluded that while the NEDC may have caused an increase in production, any increase was small in relation to the total amount of milk produced in the NEDC states. Milk production has increased in the NEDC states and the rest of the country, as measured by both total milk produced and milk produced per cow.
From 1993 to 2000, total milk produced in the NEDC states increased 2.9 percent, from 4.545 billion pounds to 4.678 billion pounds, while production in the rest of the country increased 10.3 percent, from 146.091 to 163.274 billion pounds. With respect to the change from 1997 to 2000, production increased by 2.5 percent in the NEDC states, or from about 4.6 billion pounds to 4.7 billion pounds; while production increased 7.8 percent in the rest of the country, or from about 151.5 billion pounds to 163.3 billion pounds. Much of this growth reflected an increase in milk production per cow. Figure 8 shows that from 1993 to 2000 the average amount of milk produced per cow in the NEDC states increased by 11.6 percent, from 15,633 pounds to 17,440 pounds, while production per cow in the rest of the United States increased 15.9 percent, or from 15,726 pounds to 18,226 pounds. Whether the NEDC significantly influenced the increase in milk production per cow, however, is not clear. The data on milk production per cow do not show a large difference in upward trend between the NEDC states and states in the rest of the country between 1997 and 2000. Specifically, during this period production per cow increased by 6.6 percent in the NEDC states, or from 16,360 pounds to 17,440 pounds; while production per cow increased by 7.9 percent in the rest of the country, or from 16,887 pounds to 18,226 pounds. Increases in production per cow, and increased efficiency in all of the states, have more likely resulted from fundamental changes in the structure of dairy farming throughout the country, such as the increased cost of labor; new improved machinery; artificial breeding services; better feed and forage; adoption of different strains of livestock; and careful use of fertilizer, irrigation, and chemicals. Together, these changes drastically increased production per cow, reducing the number of cows needed to supply the market.
Both of the studies that analyzed the NEDC’s impact on milk production were based on a limited amount of data. OMB’s 1998 study found that from July through December 1997, New England milk production was up 3 percent from the same period in the previous year, while national milk production was up 2 percent during the same period. Given this increase in New England milk production and using USDA’s dairy economic model, OMB estimated that the Compact commission’s price of $16.94 per hundredweight for raw milk used for and sold as fluid milk would have caused milk marketings to increase from 5.38 billion pounds to 5.40 billion pounds, or by about 0.4 percent, between July and December 1998. OMB cautioned, however, that the analysis addressed only the first 6 months of the NEDC. At the request of the NEDC, economists (Nicholson et al.) at the University of Vermont also used a model to estimate the impact of higher Compact prices on milk production. Their study, issued in 2000, used state quarterly all-milk prices for each of the six Compact states as a basis to estimate what milk prices would have been without a Compact. In Vermont, these prices ranged from $12.85 to $15.09 per hundredweight. On the basis of their analysis of milk production data for the period July 1997 to June 1998, Nicholson et al. concluded that the NEDC caused milk production to increase by 45 million pounds, or about 1 percent. Nicholson et al. estimated that the Compact resulted in increased production in each of the six NEDC states, with the largest increase occurring in Vermont. Nicholson et al. postulated that without the NEDC, milk production in that state would have declined.
Milk Consumption Has Continued to Decrease

The impact of the NEDC on milk consumption in the six NEDC states is difficult to determine. Although the data indicate that per capita consumption of fluid milk was higher in the NEDC states than in much of the rest of the United States between 1993 and 1999, consumption slowly declined during that period, both within the NEDC states and throughout much of the rest of the country, suggesting little or no change as a result of the NEDC. On the other hand, one study that analyzed the impact of the NEDC on milk consumption concluded that the Compact could have caused a slight reduction in milk consumption in the NEDC states in the latter part of 1997. As figure 9 shows, from 1993 to 1999 annual per capita milk consumption in the New England Milk Marketing Order declined by about 4 percent, from about 233 pounds to about 223 pounds. This decline is equivalent to a reduction from about 27 gallons to 26 gallons. Similarly, annual per capita milk consumption in the rest of the milk marketing orders in the United States for the same time period declined by about 6 percent, from about 214 pounds to about 202 pounds. This is equivalent to a reduction from about 25 gallons to 23 gallons. This slow decline in milk consumption began in the late 1970s, probably as the result of such factors as increasing consumption of other beverages, such as fruit beverages, bottled water, and carbonated soft drinks; growing dietary concerns about fat and cholesterol; and an aging U.S. population. USDA estimated that without marketing efforts undertaken by the milk industry, such as advertising and product innovation, milk consumption would have been lower than it was by as much as 1.4 percent from 1996 to 1999. OMB’s 1998 study found that from July to December 1997, fluid milk sales in New England totaled 1.3 billion pounds, which was down 0.7 percent from the same period 1 year earlier.
Nationally, fluid milk sales increased by 0.2 percent in the last half of calendar year 1997. OMB noted that studies on the relationship between retail prices and consumption suggest that a 10-percent increase in retail fluid milk prices reduces consumption by 1 to 2 percent. Using USDA’s model, OMB estimated that the average retail price in the Compact states would have been $2.46 per gallon without the NEDC price regulations, and that with the NEDC price regulations retail prices increased by 3.7 percent to $2.55 per gallon. Given the retail price and consumption relationship found in other studies, and the NEDC’s impact on retail milk prices, OMB concluded that the NEDC’s price regulations reduced fluid milk consumption in the NEDC states by about 10 million pounds, or about 0.5 percent, between July and December 1997. Bailey’s July 2000 study also analyzed the NEDC’s impact on retail fluid milk consumption. Bailey examined Class I sales in the New England Milk Marketing Order from 1996 through 1999 and observed a retail milk price increase of 24 cents per gallon in the Hartford and Boston markets over that period. He attributed this increase to an average 10-cent-per-gallon Compact obligation and a general rise in the farm-to-retail markup. Even though retail prices increased, Bailey concluded that total fluid milk consumption did not change appreciably after introduction of the Compact: He estimated that the amount of milk consumed decreased by less than 0.3 percent in 1998 and 1999.

Appendix IV: NEDC’s Impacts on Federal Program Costs

According to USDA, the NEDC has not increased the net federal costs of the milk price support program, and the agency is not certain whether the Compact has increased costs of the nutrition assistance programs administered by USDA.
Net costs for USDA's milk price support program have not increased because the 1996 farm bill requires that the NEDC commission compensate USDA for any additional dairy commodity purchases to the extent that the percentage change in milk production in the NEDC states exceeds the national average. Regarding nutrition assistance programs, according to USDA, federal costs for its Food Stamp Program may have increased as a result of the NEDC, but federal costs for its other nutrition assistance programs have not increased because of how these programs are funded or because of how federal program benefits are calculated for them. According to USDA officials, regardless of whether federal costs for its other nutrition assistance programs have increased, increased retail milk prices caused by the Compact have been borne by agencies or organizations that provide program benefits or by program participants in the NEDC states. The NEDC commission directly reimburses the NEDC states for increased costs incurred by selected nutrition assistance programs. While two studies assessed the NEDC’s impacts on selected nutrition assistance programs, study results are inconclusive as to whether the NEDC has impacted these programs.

Federal Net Costs for the Milk Price Support Program Have Not Increased

Even though USDA estimates that the NEDC states’ rates of increase in milk production have exceeded the national rates of increase in 2 of the 4 years since the Compact began setting the price of milk used for and sold as fluid milk in the NEDC states in July 1997, net federal milk price support program costs have not increased. The price support program, created in 1949, supports farm-level prices by providing a standing offer from the government to purchase butter, cheese, and nonfat dry milk at specified prices.
The 1996 farm bill requires that the Compact commission compensate USDA when the estimated rate of increase in milk production in the six NEDC states is greater than the estimated rate of increase in national milk production. The NEDC commission is required to compensate USDA before the end of any fiscal year in which the NEDC states’ estimated rate of increase in production is greater than the national rate of increase for the preceding fiscal year. Neither the NEDC nor USDA must determine whether the greater rate of increase, or what portion of the greater rate of increase, is attributable to the NEDC. According to USDA officials, milk production in the NEDC states did not increase at a greater rate than the national rate of increase between July and September 1997. However, in 1998, USDA estimated that milk production in the NEDC states increased 1.8 percent, based on its analysis of the average amount of milk produced in the six states during 1997 and 1998, as compared to a national average increase in production of 1.3 percent. On the basis of these two rates, USDA estimated that it purchased about $1.8 million of nonfat dry milk that was attributable to the NEDC’s greater rate of increase in production. In 1999, USDA estimated that milk production in the NEDC states increased 3.6 percent, based on its analysis of the states’ average milk production in 1998 and 1999, compared with a national average rate of increase of 3.2 percent. On the basis of these two rates, USDA estimated that it purchased about $1.4 million of nonfat dry milk that was attributable to the NEDC’s greater rate of increase in production. The NEDC commission paid the federal government these amounts for the 2 fiscal years. The Compact commission was not required to compensate USDA for fiscal year 2000 because USDA concluded that the average rate of increase in milk production in the NEDC states (0.1 percent) was less than the average national rate (5.1 percent).
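The farm bill's compensation trigger reduces to a comparison of estimated growth rates. A minimal sketch, using the rates cited above (the helper function is ours, not USDA's actual accounting method):

```python
def compensation_owed(nedc_growth, national_growth):
    """True when the NEDC states' estimated rate of increase in milk
    production exceeds the national rate, obligating the Compact
    commission to reimburse USDA for the attributable commodity
    purchases (1996 farm bill requirement)."""
    return nedc_growth > national_growth

# Growth rates (percent) cited in the text
fy1998 = compensation_owed(1.8, 1.3)  # True: about $1.8 million paid
fy1999 = compensation_owed(3.6, 3.2)  # True: about $1.4 million paid
fy2000 = compensation_owed(0.1, 5.1)  # False: no payment required
```

Note that the trigger depends only on the rate comparison, not on whether the NEDC actually caused the extra growth; as the text notes, neither party must attribute the difference to the Compact.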
It Is Unclear Whether Federal Net Costs for Nutrition Assistance Programs Have Increased

USDA analyzed the Compact's impact on federal nutrition assistance program costs and concluded that it is uncertain whether the federal costs of one of its major programs—the Food Stamp Program—have increased, while federal costs of its other programs most likely have not. Food Stamp Program costs may have increased to the extent that the NEDC caused retail milk prices to increase nationally and to the extent that the agencies or organizations that provide or receive program benefits are not reimbursed. USDA's Food and Nutrition Service is responsible for nutrition assistance programs, which include the Food Stamp Program; the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); school programs such as the National School Breakfast and Lunch Programs and the Special Milk Program; the Child and Adult Care Food Program; and several small food distribution programs for Indian reservations, the elderly, pregnant women, and children. These programs are carried out at the state and local level by state and local agencies and organizations, such as day care centers or schools.

Regarding the Food Stamp Program, USDA officials said that federal costs may have increased as a result of the NEDC, but they do not know because it is difficult to determine (1) how much of the increased retail milk price in the NEDC states was caused by the Compact and (2) to what extent that portion of the increased retail milk price caused an increase in the national average cost of food items used to determine program benefits. The Food Stamp Program is operated by state and local welfare offices. Under the program, food stamp recipients spend their benefits (in the form of paper coupons or electronic benefits on debit cards) to buy eligible food in authorized retail food stores.
Benefits under this program are indexed to the cost of a selected group of foods, which is sensitive to changes in the national average retail price of milk. Benefit levels have increased since July 1997 because of, among other things, increased national average retail milk prices. USDA officials said that the retail milk price increases in the six NEDC states caused by the Compact had the potential to increase the index used to set benefit levels and, thus, federal costs, but it is not certain whether they did so. Further, it is difficult to determine what portion of the retail milk price increases was attributable to the Compact. USDA estimates that if the NEDC caused a 16-cent-per-gallon increase in the national average retail milk price, there is a 50 percent chance that Food Stamp Program benefits increased by $1 per program participant per month and annual federal Food Stamp Program costs increased by about $60 million. USDA officials also noted that even if the increased price of milk attributable to the NEDC was enough to increase national food stamp benefit levels, some portion of the increased retail milk costs has been borne by participants located in the six NEDC states.

Regarding WIC, USDA officials said that the Compact has not increased federal costs to the program because of the discretionary nature of its funding. Even so, because approximately 30 percent of the funds spent on WIC foods are used to buy fluid milk, any increase in retail milk prices can significantly affect the food package cost per participant, and these higher costs, unless offset, would reduce the potential number of participants. WIC is administered by state agencies, most of which provide WIC participants checks or food instruments to purchase specific foods each month at authorized retailers.
While the 1996 farm bill does not require the Compact commission to reimburse the WIC program, NEDC regulations specify that the Compact commission is to compensate New England state agencies that administer WIC for any increased costs due to the Compact. The amount of compensation equals the over-order producer price payment multiplied by the volume of milk used by program participants in a given month. For example, in January 1998, when the Vermont WIC program purchased 52,403 gallons of milk, the difference between the NEDC Class I milk price and the minimum USDA Class I price was 74 cents per hundredweight. Given that a hundredweight is equivalent to about 11.6 gallons of milk, the Compact commission compensated Vermont about $3,343 for that month. In total, the Compact commission compensated the six states' WIC programs $3.8 million from 1997 through 2000 for increased milk costs due to the NEDC. According to state WIC officials, the state WIC programs are being held harmless by the NEDC. These officials said that the NEDC has not resulted in increased WIC program costs or reduced program participation. One state WIC official said that the NEDC has done everything it can to ensure that the WIC programs in the six states remain unharmed by reimbursing the states the full difference between the NEDC's minimum Class I producer price and the milk marketing order minimum Class I price on all WIC milk purchases.

Regarding school programs, according to USDA officials, it is likely that federal costs to these programs have not increased as a result of the NEDC because program benefits are based on a broad index of food prices that is relatively insensitive to changes in the price of milk in the six NEDC states. Thus, any increases in milk prices caused by the Compact would either have to be absorbed by the schools or passed on to paying students.
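The WIC compensation arithmetic above—over-order producer price payment times volume, with gallons converted to hundredweight—can be sketched as follows. The function name and rounding are ours; the conversion factor of about 11.6 gallons per hundredweight is the approximation given in this report:

```python
def wic_reimbursement(gallons: float, premium_per_cwt: float,
                      gallons_per_cwt: float = 11.6) -> float:
    """Compensation = over-order producer price payment x volume of WIC milk,
    with volume expressed in hundredweight (cwt)."""
    return (gallons / gallons_per_cwt) * premium_per_cwt

# Vermont, January 1998: 52,403 gallons at a 74-cent-per-cwt premium
print(round(wic_reimbursement(52_403, 0.74)))  # about $3,343, as reported
```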
Programs such as the School Lunch Program are usually administered by state education agencies, which operate the program through agreements with local school districts. NEDC regulations specify that the Compact will reimburse schools for any Compact-related increased costs of fluid milk sold in 8-ounce containers by schools in the six NEDC states. This commitment applies to all milk served in 8-ounce containers by schools, including milk provided under such child nutrition programs as the school lunch and breakfast programs and the Special Milk Program. The NEDC commission requires that school food authorities submit claim forms at the end of each school year that identify the number of 8-ounce cartons of milk purchased during the school year. On these forms, school authorities document whether part of the price for the 8-ounce milk containers is attributable to the NEDC over-order premium and, if so, how much, on the basis of milk vendor submissions. Our review of school food authority claim data indicates that the portion of the contract price that milk vendors have attributed to the NEDC varies. For example, Connecticut school food authorities attributed from 0.2 cents to 1.37 cents per 8-ounce container to the NEDC over-order premium during the 1999-2000 school year. The NEDC commission then verifies that amounts claimed do not exceed the NEDC's average over-order premium for the school year and compensates school food authorities either the average over-order premium or the amount vendors attributed to the NEDC over-order premium, whichever is less. Thus, the amounts paid to school food authorities vary depending on the amount that they attributed to the NEDC and the amount of milk purchased.
For example, one school food authority in Massachusetts was compensated $308.35 for 146,835 cartons of milk purchased during the 1999-2000 school year, while another school food authority in that state was compensated $316.33 for 31,633 cartons of milk purchased for the same year. In total, the NEDC commission reimbursed the states $662,606 for the 1998-1999 and 1999-2000 school years. Although the amounts paid varied from school to school, officials in the six states' departments of education generally said that schools claiming and receiving compensation were not, in the end, spending additional funds for milk or having to charge higher prices for milk sold to students. However, these officials also said that many school food authorities have chosen not to seek compensation because they view the claim process as burdensome and not worth the effort given the relatively small amounts of money that they would receive.

As is the case with the school programs, according to USDA officials, it is likely that federal costs of the Child and Adult Care Food Program also have not increased as a result of the NEDC because program benefits are based on a broad index of retail food prices that is relatively insensitive to changes in the price of milk. State education or health departments administer the program, with independent centers and sponsoring organizations entering into agreements with these departments to operate the program. Under the program, USDA provides eligible centers and sponsoring organizations, such as family day care homes and child-care centers, reimbursements for meals served. According to USDA, it has no data on the amount of milk purchased under the program, and it would be prohibitively labor-intensive for the NEDC to establish a method for compensating thousands of individual homes and centers for any increased retail milk prices.
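The school milk claim process described above reduces to a lesser-of rule: the commission pays the smaller of the vendor-attributed amount or the NEDC's average over-order premium, per 8-ounce carton. A minimal sketch; the function and variable names are ours, and the claimed rate below is illustrative (it happens to be Connecticut's high end), chosen so that the payout matches one of the Massachusetts examples:

```python
def school_milk_reimbursement(cartons: int, claimed_per_carton: float,
                              avg_premium_per_carton: float) -> float:
    """Reimburse the lesser of the amount vendors attributed to the NEDC
    over-order premium or the NEDC's average over-order premium, per carton."""
    return cartons * min(claimed_per_carton, avg_premium_per_carton)

# 31,633 cartons; the claimed 1.37 cents per carton is capped at an assumed
# average premium of 1 cent per carton
print(round(school_milk_reimbursement(31_633, 0.0137, 0.01), 2))  # $316.33
```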
Regarding USDA’s food distribution programs, according to USDA officials, USDA has not estimated the potential NEDC-related increased costs to these programs. USDA officials said, however, that any increased costs would be relatively small, given the small size of these programs compared with programs such as WIC. Furthermore, because most of these programs are not entitlement programs and thus federal funding is not mandatory, any increased costs due to the NEDC would have to be borne by program providers and could result in fewer participants being served. If Compacts Are Expanded, Federal Food Stamp Program Costs Could Increase, as Could Nonfederal Costs for Other Nutrition Assistance Programs According to USDA, if the NEDC is expanded to include additional states or if a southern compact is also created, it is more likely that federal costs for the Food Stamp Program would increase than with the existing NEDC: The likely increase in retail milk prices in more states would have a more direct impact on the index used to set program benefits. USDA estimated that if retail milk prices in the states of an expanded NEDC and a southern compact increased by about 20 cents per gallon—an amount that USDA noted is possible given the NEDC experience—food stamp participants in compact states would spend about $93 million a year more to purchase milk. If this price increase did not cause the national price increase to rise sufficiently to increase program benefits, Food Stamp Program participants would have to absorb this cost. However, if the 20-cent-per-gallon price increase resulted in a sufficiently large national average retail milk price increase to cause a $1- to $2-per- participant increase in Food Stamp Program benefits, USDA estimated that federal Food Stamp Program costs could increase by as much as $60 to $120 million per year—an amount that would have to be federally funded. 
Moreover, if the NEDC is expanded and a southern compact is established, and if the NEDC does not provide reimbursements, increased retail milk costs could result in fewer participants being served by state WIC programs. Given these assumptions, it is also likely that costs to school programs would have to be absorbed by school food authorities and program participants because of the index used to establish benefits under these programs. Table 23 summarizes the additional costs that USDA estimated could be incurred under the above assumptions in fiscal year 2000 by food stamp, WIC, and school program providers or participants in compact states if the NEDC is expanded and a southern compact is established, and if retail milk prices increase in the compact states.

Two Studies of the NEDC's Impact on Nutrition Assistance Programs Provide Inconclusive Results

Two studies—one prepared by the Office of Management and Budget (OMB) and the other by University of Vermont researchers—offer inconclusive results on the NEDC's potential impacts on USDA's nutrition assistance programs. Both studies were conducted early in the Compact's existence and relied on limited retail milk price data. For example, OMB's 1998 study began before the NEDC commission and the states entered into agreements for compensating states' WIC and school programs and relied on only 6 months of retail price data—from July through December 1997. Because of this, the study projected either a low (5- to 10-cent), medium (10- to 15-cent), or high (15- to 20-cent) impact on retail milk prices, as reflected in an increased price for a gallon of milk. Assuming no NEDC reimbursements and a medium impact on retail milk prices, OMB estimated that a 15-cent increase in the retail price of milk in the first 6 months of the NEDC would increase state WIC program costs by about $721,300, which would require a reduction in program participation of about 3,000 people if the states did not spend the additional money.
OMB also estimated that school lunch and breakfast program costs would increase by $1.2 million during the first 6 months of the NEDC—an increase that would have to be absorbed by schools or passed on to families who pay for meals and snacks. For the same period, OMB also estimated that participants in USDA's Food Stamp Program who reside in the NEDC states would pay an additional estimated $2.4 million because of increased retail milk prices—an amount that the federal government would not be required to pay because it is likely that national average milk prices would not have increased sufficiently to warrant an increase in benefits. Even if the price increase were large enough to trigger an increase in the index used to establish program benefits, only a small portion of the additional program benefits would go to food stamp recipients in the NEDC states, because all recipients nationwide would receive the increase.

Researchers at the University of Vermont, who were asked by the NEDC commission to conduct a study, also relied on limited data. Wang et al. focused on the NEDC's potential impact on the WIC program and analyzed retail milk price and program participation data for the period between June 1997 and February 1998. They accounted for NEDC reimbursements to the states' WIC programs and examined retail milk prices in Boston and Hartford. Their analysis concluded that WIC program participation had not been significantly affected by the NEDC during the time frame analyzed. The study also concluded that retail milk prices in Hartford increased significantly more than in Boston, an increase that might be explained by differences in market concentration and competition. However, the authors concluded that their study results had two principal limitations: (1) their analysis was limited to Boston and Hartford and (2) the NEDC had been in effect only since July 1997, thus providing a small amount of data for the analysis.
Appendix V: Interregional Impacts of Three Compact Alternatives in 1999

This appendix provides our 1999 estimates of the interregional farm- and wholesale-level impacts of the NEDC, an expanded NEDC, and an expanded NEDC combined with a southern compact on various dairy sector indicators. To develop our estimates, we first estimated the impact of each compact scenario on the basis of certain assumptions, such as transportation costs and supply and demand elasticities. We then varied these assumptions to test the sensitivity of our initial estimates. (See app. II for a detailed discussion of our methodology; a description of the IRCM; a list of the states included in the different compact scenarios; values for parameters used, such as the responsiveness of consumers to changing commodity prices; and a summary of the data used and sources for these data.) We present the data in a series of tables that summarize (1) the range of estimates that we obtained using our initial and subsequent sets of assumptions across the various compact scenarios for each of the dairy sector indicators that we analyzed; (2) our initial estimates of farm-level and wholesale-level impacts of the NEDC scenario, and the results of our sensitivity analyses for that scenario; (3) our initial estimates of farm-level and wholesale-level impacts of the expanded NEDC scenario, and the results of our sensitivity analyses for that scenario; (4) our initial estimates of farm-level and wholesale-level impacts of the expanded NEDC scenario combined with a southern compact, and the results of our sensitivity analyses for that scenario; and (5) the results of our sensitivity analysis for the expanded NEDC scenario combined with a southern compact using a more restrictive trade assumption. In all instances, we present the estimated impacts of the various compact scenarios as changes to our no-compact baseline values for 1999.
The Economic Impacts of the NEDC in 1999

To obtain our initial estimates of the effects of the NEDC in 1999, we used the following assumptions (our no-compact baseline includes the first three assumptions):

- No more than 40 percent of any one region's milk may be shipped to another region without being subject to the receiving region's pricing requirements. This assumption is used to simulate USDA milk marketing order regulations regarding minimum pricing requirements for milk shipped between marketing orders. This threshold was chosen as a proxy for the requirement that an adjustment be made if a plurality of an order's packaged milk was sold in another region.
- Supply elasticities are medium-term (that is, 5 years).
- Demand elasticities are medium-term.
- The Class I minimum price in the Compact region is the higher of the Compact price of $16.94 per hundredweight of milk or the USDA milk marketing order price.
- Market-driven over-order premiums are zero in the Compact region. This initial assumption represents a lower bound and assumes that all market-driven over-order premiums in NEDC states are replaced by Compact over-order producer price payments.
- A handler must pay the Compact over-order producer price into the Compact pool for milk shipped into the Compact region in order to receive the Compact price.

We then performed a series of sensitivity analyses by varying key assumptions to test the "robustness" of these initial estimates—that is, whether, and if so by how much, our initial estimates would change when we used different assumptions. Tables 30 through 39 present our initial and subsequent estimates of the impacts of the NEDC in 1999 compared to a no-compact scenario on farm- and wholesale-level indicators.

Sensitivity Analyses for the NEDC Scenario

Tables 32 through 37 display the results of our sensitivity analyses for 1999 farm- and wholesale-level indicators.
In comparison with our initial estimates, we used (1) 10-year regional supply elasticities as opposed to 5-year; (2) long-term (i.e., higher), as opposed to medium-term, commodity demand elasticities; (3) higher market-driven over-order premiums as opposed to zero; (4) a combination of the previous three assumptions; and (5) an overall 25-percent increase in transportation costs.

Summary of the Estimated Impacts of the NEDC

Tables 38 and 39 summarize the results of our initial estimates and sensitivity analyses by presenting the range of estimates of the changes from our no-compact scenario that we obtained from our various analyses.

The Economic Impacts of an Expanded NEDC in 1999

To obtain our initial estimates of the impacts of an expanded NEDC in 1999, we used the same set of assumptions that we used under our no-compact scenario. We also used the same assumptions that we used in developing our initial estimates of the impacts of the NEDC. We then performed the same series of sensitivity analyses as under the NEDC scenario. Tables 40 through 49 present our initial and subsequent estimates of the impacts of an expanded NEDC in 1999 compared with a no-compact scenario.

Summary of the Estimated Impacts of the Expanded NEDC

Tables 48 and 49 summarize the results of our initial estimates and sensitivity analyses by presenting a range of estimates of the changes from our no-compact scenario that we obtained in our various analyses.

The Economic Impacts of an Expanded NEDC in Conjunction With a Southern Compact in 1999

As with the other compact scenarios, we developed initial estimates of the effects of an expanded NEDC in conjunction with a southern compact in 1999 by using a set of key assumptions and conducting subsequent sensitivity analyses. In addition to conducting the same sensitivity analyses that we conducted under the previous two compact scenarios, we also varied the assumption regarding the Class I minimum price, or compact price, in the compact region.
For that analysis, we increased the minimum price from $16.94 to $18.00 per hundredweight in the southern compact but retained the $16.94 minimum price in the expanded NEDC. We used this higher minimum southern compact price because cooperative pay prices in selected cities in USDA's Appalachian, Southern, and Central milk marketing orders averaged about a dollar higher than in the Northeast Marketing Order in 1999. Under this scenario we conducted an additional analysis in which we assumed that fluid trade movements into compact regions are limited to the amount of milk that is produced within a 100-mile radius surrounding a compact region. Because this analysis represents a variation of the model, we also conducted a separate set of sensitivity analyses. We modified the model for each of the three compact scenarios to account for this additional fluid milk trade restriction. The model results under the NEDC and the expanded NEDC scenarios did not change when this restriction was added, but results were different for the expanded NEDC in conjunction with a southern compact scenario. Therefore, we are including the results of this additional modeling effort only for the expanded NEDC in conjunction with a southern compact scenario. We performed this analysis because our data are at the regional level as opposed to the milk plant or dairy farm level. Therefore, our transportation cost data are average cost data and do not apply to individual shipments. As a result, our initial results may reflect more movement of milk between regions than would actually occur.

Sensitivity Analyses for the Expanded NEDC in Conjunction With a Southern Compact Scenario

Tables 52 through 57 present the results of our sensitivity analyses for 1999 farm- and wholesale-level economic variables.
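The sensitivity analyses described in this appendix follow a single pattern across all three compact scenarios: re-run the model once per alternative assumption and compare the results against the initial estimates. A structural sketch of that pattern; run_ircm is a placeholder for the actual IRCM, which is not reproduced here, and all names are ours:

```python
# Baseline assumptions used for the initial 1999 estimates
BASELINE = {
    "supply_elasticity_horizon": "5-year",
    "demand_elasticity": "medium-term",
    "over_order_premiums": "zero",
    "transport_cost_multiplier": 1.00,
}

# The five variations described in this appendix
ALTERNATIVES = [
    {"supply_elasticity_horizon": "10-year"},
    {"demand_elasticity": "long-term"},
    {"over_order_premiums": "positive"},
    {"supply_elasticity_horizon": "10-year",
     "demand_elasticity": "long-term",
     "over_order_premiums": "positive"},
    {"transport_cost_multiplier": 1.25},
]

def run_ircm(assumptions: dict) -> dict:
    """Placeholder for the spatial equilibrium model; returns indicator estimates."""
    return {"assumptions": assumptions}

def sensitivity_runs() -> list:
    """One initial run under baseline assumptions, then one run per variation."""
    results = [run_ircm(BASELINE)]
    for change in ALTERNATIVES:
        results.append(run_ircm({**BASELINE, **change}))
    return results

print(len(sensitivity_runs()))  # initial estimate plus five sensitivity runs
```

The model is judged robust when the estimates from the alternative runs stay close to the initial estimates, which is the conclusion the report later draws for the IRCM99.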
Summary of the Estimated Impacts of the Expanded NEDC in Conjunction With a Southern Compact

Tables 58 and 59 summarize the results of our initial estimates and sensitivity analyses by presenting the range of estimates of the changes from our no-compact scenario that we obtained from our various analyses.

The Economic Impacts of an Expanded NEDC in Conjunction With a Southern Compact Using a More Restrictive Fluid Milk Trade Assumption

Tables 60 and 61 provide our initial estimates obtained by modifying the IRCM to include a more restrictive trade assumption about fluid milk. Tables 62 through 67 present the results of our sensitivity analyses using a more restrictive fluid trade assumption. Tables 68 and 69 summarize the results of our initial estimates and sensitivity analyses by presenting the range of estimates of the changes from our no-compact scenario that we obtained from our various analyses using a more restrictive trade assumption.

Appendix VI: Interregional Impacts of Three Compact Alternatives in 2000

We present our estimates of the three compact scenarios' impacts on 2000 farm-level and wholesale-level indicators when compared with a no-compact scenario in the following tables. As with our 1999 analysis, we calibrated the Interregional Dairy Sector Competition Model (IRCM) using 2000 data to develop a baseline—an IRCM00. However, we did not conduct a series of sensitivity analyses for the 2000 estimates for several reasons: The data for 2000 were preliminary when we conducted these analyses in July 2001. The dairy industry was undergoing a period of adjustment following USDA's regulatory changes to its milk marketing order program in January 2000. Because the IRCM is a spatial equilibrium model, and the dairy markets appeared to be in disequilibrium in 2000, we questioned whether 2000 could be used to accurately estimate the impacts of dairy compacts.
The sensitivity analyses performed for the 1999 estimates indicated that the IRCM99 model was robust—that is, the estimates that we obtained when we used different assumptions were similar to the initial estimates that we obtained using our initial set of assumptions. As a result, we did not think that another series of sensitivity analyses would provide much additional information. The following tables provide baseline estimates of the dairy sector indicators under our no-compact scenario and the estimates of the impacts of the three different compact alternatives. As with our 1999 estimates, we present estimates under both less restrictive and more restrictive fluid trade assumptions for the expanded NEDC in conjunction with a southern compact scenario. Our baseline estimates for 2000 also include the effects that USDA's milk marketing order regulatory reforms had on farm-level and wholesale-level dairy sector indicators. As a result, the baseline estimates for 2000 are not comparable to those for 1999. We used the same set of assumptions to develop our 2000 estimates as we did to develop our 1999 baseline and initial estimates:

- No more than 40 percent of any one region's milk may be shipped to another region without being subject to the receiving region's pricing requirements. This assumption is used to simulate USDA milk marketing order regulations regarding minimum pricing requirements for milk shipped between marketing orders. This threshold was chosen as a proxy for the requirement that an adjustment be made if a plurality of an order's packaged milk was sold in another region.
- Supply elasticities are medium-term (that is, 5 years).
- Demand elasticities are medium-term.
- The Class I minimum price in the compact region, or the compact price, is $16.94 per hundredweight of milk.
- Market-driven over-order premiums are zero in the compact region.
This initial assumption represents a lower bound and assumes that all market-driven over-order premiums in NEDC states are replaced by Compact over-order producer price payments. A handler must pay the compact over-order producer price into the compact pool for milk shipped into the compact region in order to receive the compact price. Using these assumptions, tables 70 through 75 compare our no-compact baseline scenario with our estimated changes in farm-level and wholesale-level indicators across different compact scenarios for 2000.

Appendix VII: Studies of the Interregional Economic Impacts of Various Dairy Compact Alternatives

We reviewed and analyzed five studies that provide estimates of the interregional economic impacts of various compact alternatives. These studies used a variety of economic models, assumptions about model parameters such as demand elasticities, and data sets. The compact alternatives that they examined also varied. Despite these differences, the results of these studies on the impacts of relatively small compacts, such as the NEDC, on dairy farmers in noncompact regions were generally comparable to ours. In addition, these studies agree with ours that, as compacts expand in size, the economic impacts on dairy farmers in noncompact regions increase.

USDA (1999)—A Study Using an Annual, Time-Series Dairy Sector Model

A study issued in 1999 by USDA estimated the interregional impacts that the NEDC would have on noncompact regions. USDA used a model derived, in part, from a national dairy sector model developed by USDA's Economic Research Service and dairy sector data from 1999 to forecast the impact of the NEDC from 2000 through 2005. The parameters used in the analysis were not directly estimated but instead were drawn from the Economic Research Service model.
The Service's national dairy sector model is an annual, time-series dairy model that is estimated as a system of equations using three-stage least squares regression analysis. USDA modified the Service's model to allow for a multiregional analysis. The modified model used to estimate the interregional impacts of the NEDC consisted of five sections: (1) milk supply, (2) dairy product supply, (3) dairy product demand, (4) market equilibrium conditions, and (5) regional market utilization and pricing. The major features of USDA's analysis and model included the following:

- The model used an iterative process to solve a system of simultaneous dairy demand and supply equations.
- The model used 36 regions, including the former 32 federal milk marketing orders; California; and three other nonfederally regulated regions.
- The model did not provide for milk movements between regions.
- USDA adjusted parameters developed for the Economic Research Service model to reflect regional differences in the dairy industry.
- USDA used 1999 data to project the NEDC's impacts in each of 6 years—2000 through 2005—and the average annual impact over the same period.
- USDA used two different scenarios to estimate the NEDC's impacts: (1) the NEDC price of $16.94 per hundredweight of milk would remain constant during the years 2000 through 2005 and (2) the NEDC price of $16.94 per hundredweight would remain in effect only in 2000, after which time the NEDC price would change to the USDA federal milk marketing order Class I price for Boston plus $1.30 per hundredweight.

Table 76 summarizes USDA's estimates of the 6-year average annual impact of the NEDC in (1) noncompact regions of the country affected the most by the NEDC, (2) all 32 marketing orders combined, (3) California, and (4) the country as a whole. These estimated impacts are expressed as changes to average production levels, farm-level prices, and farm-level revenue from levels that would be expected in the absence of the NEDC.
USDA also reported on the interregional impacts in each of its then-existing 32 milk marketing orders and in all noncompact regions combined. For example, under the first scenario, for which it used a $16.94 NEDC price for each of the 6 years, USDA estimated that the average all-milk price in all noncompact regions would decline by less than 1 cent per hundredweight during the years 2000 through 2005. With regard to its individual marketing orders, such as the Upper Midwest Marketing Order, USDA estimated that the all-milk price would decline by 1 cent per hundredweight in 2000, 2001, 2004, and 2005.

Cox et al. (1999)—A Study Using a Spatial Market Equilibrium Model

A study issued by Cox, Cropp, and Hughes in 1999 estimated the impacts of an expanded NEDC and an expanded NEDC in conjunction with a southern compact on noncompact regions. Cox et al. used a spatial market equilibrium model and dairy sector data for 1997 to estimate the potential impacts of compacts in that year. For purposes of the analysis, the expanded NEDC included the six New England states, New Jersey, New York, Maryland, and Pennsylvania. The southern compact consisted of 10 states. The spatial market equilibrium model also incorporated two additional modeling features: It simulated the processing of dairy commodities in a vertical marketing sector and used price wedges, or mark-ups, for each dairy commodity to simulate USDA's classified pricing system and compacts. To generate a competitive spatial market equilibrium, the model maximized producer and consumer surpluses in each region, minus transportation costs, for the different commodities, subject to certain trade-flow and other constraints. In addition, the model allowed for classified pricing so that raw milk used for fluid milk attracted a higher price than raw milk used for manufactured dairy commodities.
Using an iterative technique, the model solved for regional farm-level milk prices and production, wholesale-level dairy prices and production, and interregional trade flows. This model—the IRCM97—is an earlier version of the model that we used for our analysis of interregional impacts. The model assumed intermediate-run (3- to 5-year) supply and demand functions for 12 geographic regions of the country (the current 11 USDA milk marketing orders and California), representing different milk and dairy product supply and demand regions. In addition, Cox et al. used component yield data from other researchers. Features of their analysis follow:

- The model was an interregional, spatial equilibrium model.
- The model prohibited fluid milk trade between compact and noncompact regions.
- Twelve demand relationships for dairy products were developed for the 12 regions in the model; these demand relationships were based on consumer demand for nine distinct dairy products using national estimates of per capita wholesale demand.
- To link prices among the 12 regions, the model used 1995 transportation cost estimates provided by researchers at Cornell University.
- The model used 1997 price and production data obtained from USDA for developing a 1997 base year.
- The model added a price wedge of $2 per hundredweight to the 1997 Class I differentials in each compact region.
- All scenarios assumed no Commodity Credit Corporation milk price supports because these were set to expire in 2000.

Under the model assumptions used, the researchers estimated the impacts of dairy compacts on farm-level prices and revenue, and commodity prices and expenditures. Tables 77 and 78 provide the study’s results.

Bailey (2000)–A Study Using a Regional Economic Simulation Model

In 2000, Kenneth Bailey, an agricultural economist at Pennsylvania State University, issued a study that estimated the interregional economic impacts of a large compact on noncompact states.
Bailey used a static equilibrium model, similar in structure to the constant elasticity functional form policy models developed by Gardner. In his 2000 study, Bailey relied on 1997 data to forecast the impact of compacts in 2000. The compact simulation included those states in USDA’s Appalachian, Florida, Northeast, and Southeast milk marketing orders. The model was multiregional to reflect milk supply, allocation, and class prices in federal milk marketing orders. However, overall supply and demand for dairy products were modeled at the national level, as opposed to on a marketing order basis. The model also relied on medium-run supply and demand elasticities as reported in the agricultural economics literature. The data sources for the model included USDA’s Agricultural Marketing Service, federal milk marketing order administrators, and the California Department of Agriculture. The model incorporated several significant aspects of USDA’s milk marketing order reform that were adopted in January 2000, including component pricing. As specified in milk marketing order reform, the model based the Class I price on the higher of the Class III or IV price. The model posited 13 regions: 11 federal marketing orders, California, and an “unregulated” region representing all areas of the country that did not fall under federal marketing orders or California’s milk pricing plan. Bailey analyzed the impacts of a large multiregional dairy compact accounting for about 27 percent of all milk marketed in the model. This compact scenario was evaluated relative to a no-compact baseline. Bailey also conducted additional analyses by varying retail fluid milk demand elasticities (from –0.32 to –0.23) and by using a fixed-percentage farm-to-retail milk markup instead of a fixed-dollar markup. The major features of Bailey’s analysis and model follow:

- The model did not allow for trade between regions.
- The model estimated supply, price, and demand for fluid milk and three dairy commodities: butter, cheese, and nonfat dry milk.
- Both a $1 and a $2 fixed amount per hundredweight were used to model an effective compact over-order producer price payment.
- The assumption about the amount of the market-driven over-order premium was varied to reflect either the full amount of that premium in the no-compact baseline or half the amount in the no-compact baseline.
- The model estimated demand for fluid milk at the retail level and demand for manufactured dairy products at the wholesale level.
- In addition, the model used various farm-to-retail markup assumptions.

Table 79 summarizes Bailey’s estimates of the impacts of the multiregion compact on farm-level milk prices, revenue, and production in 2000. In addition to estimating the multiregional compact’s impact on farm-level milk prices, revenue, and production, Bailey also estimated changes within the compact region, and the impact that these changes would have on all noncompact regions. Bailey estimated that milk production within the compact region would increase by 0.4 to 1.4 percent, causing lower wholesale prices for butter (by 0.3 to 1.0 percent), cheese (by 0.5 to 1.7 percent), and nonfat dry milk (by 0.3 to 0.8 percent) in all federal milk marketing orders.

Balagtas and Sumner (2001)–A Study Using a Price Discrimination Framework

A study conducted in 2001 by Balagtas and Sumner estimated the interregional effects of the NEDC and an expanded NEDC on noncompact regions. Balagtas and Sumner used an annual, national-level supply and demand simulation model to estimate the effects of the Compact on the U.S. dairy sector, based on 1999 data. For purposes of this analysis, the expanded NEDC included the NEDC states and New Jersey and New York. The model simulated class and blend prices that would be announced by USDA’s milk marketing orders in the absence of any compact.
The model’s parameters were established by using milk marketing order and Compact commission data and intermediate-run (3- to 6-year) supply and demand elasticities. The model also used national-level, as opposed to regional-level, supply elasticities. The major features of the model follow:

- The model included two aggregate milk categories—fluid milk and manufactured dairy products—as opposed to four milk classes.
- The model estimated the interregional impacts of the two different compacts in four noncompact regions: California, Wisconsin, Minnesota, and a combined rest of the United States that excluded the NEDC states.
- The model calculated the elasticity of demand for manufactured dairy products in the New England region.

Table 80 summarizes the results of Balagtas and Sumner’s estimates of the interregional impacts of the NEDC and an expanded NEDC on farm-level milk prices and production in noncompact regions. Balagtas and Sumner also estimated the interregional impacts on the price of milk used for manufactured dairy commodities and farm-level revenue. They estimated that in 1999, the price of milk used for manufactured dairy commodities would fall in noncompact regions by about 2 cents per hundredweight, which would have translated into producer surplus losses for noncompact producers of about $34 million in 1999.

Rosenfeld (2001)–A Study Using an Economic Model of Classified Pricing

A study issued by Allen Rosenfeld in 2001 estimated the interregional impacts of the NEDC and a larger compact on noncompact states. Rosenfeld used a classified pricing model and dairy sector data from 2000 to predict potential impacts in that year. For purposes of the analysis, the larger compact included a total of 29 states located in the Northeast and the South. These 29 states accounted for about 62 percent of Class I milk consumption in 2000. The model used supply and demand analysis within the context of traditional modeling of classified pricing.
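Classified pricing, which several of these models simulate, sets the producer "blend" price as a utilization-weighted average of the class prices. The sketch below uses hypothetical class prices and utilization shares; only the structure of the calculation reflects the classified pricing system described here.

```python
def blend_price(class_prices, utilization_shares):
    """Producer blend price under classified pricing: the average of the
    class prices, weighted by each class's share of the milk pool."""
    assert abs(sum(utilization_shares) - 1.0) < 1e-9
    return sum(p * s for p, s in zip(class_prices, utilization_shares))

# Hypothetical pool: Class I (fluid) milk at $16.94/cwt with 40 percent
# utilization, manufacturing-class milk at $12.50/cwt with 60 percent.
blend = blend_price([16.94, 12.50], [0.40, 0.60])   # about $14.28/cwt
```

Raising the Class I price, as a compact does, raises the blend price in proportion to the region's fluid utilization, which is why the estimated compact effects in these studies hinge on regional Class I shares.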
Rosenfeld used supply and demand elasticities from the dairy economics literature to estimate the increase in milk production and the decrease in milk consumption in the compact regions caused by a higher compact minimum price for milk used for and sold as fluid milk. Rosenfeld then used these elasticities to estimate the decrease in the price of milk used for manufacturing purposes, which he then used to estimate the decline in dairy farm revenue in noncompact states. The major features of Rosenfeld’s analysis and model follow:

- The model used a compact Class I over-order premium of 18.5 cents per gallon of milk for both compact scenarios.
- The model used a supply elasticity of 0.227 to estimate the impact of a higher blend price within the compact region on milk production. The model subsequently used a higher elasticity (0.35) in a sensitivity analysis.
- The model used a wholesale demand elasticity of –0.144 for Class I milk and a wholesale demand elasticity of –0.261 for all other classes of milk to estimate the impact of price changes on consumption.
- The model did not allow for decreased production by farmers in noncompact states in response to lower prices.
- The study did not discuss interregional trade.

Table 81 summarizes Rosenfeld’s estimates of the interregional impacts of the NEDC and a larger compact on dairy farm revenue in noncompact states for 2000. Rosenfeld noted that his use of 2000 data may have overstated the estimates of the impact of the NEDC and an expanded compact because, in 2000, the difference between the Class I milk marketing order price and the NEDC price of $16.94 was larger than in other years since the NEDC has been in effect. To account for this, he performed a sensitivity analysis by adjusting the compact Class I price to simulate the average amount of the NEDC over-order payment during the first 44 months of NEDC operations.
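The elasticity arithmetic underlying Rosenfeld's approach reduces to multiplying a percentage price change by an elasticity. In the sketch below, the elasticities are those the study reports, but the 5 percent Class I price increase is a hypothetical illustration, not a figure from the study.

```python
def pct_quantity_change(elasticity, pct_price_change):
    """First-order approximation: percent change in quantity is roughly
    the elasticity times the percent change in price."""
    return elasticity * pct_price_change

SUPPLY_ELASTICITY = 0.227          # blend-price supply response (Rosenfeld)
FLUID_DEMAND_ELASTICITY = -0.144   # Class I wholesale demand (Rosenfeld)

price_increase = 5.0  # hypothetical percent rise in the compact Class I price
extra_production = pct_quantity_change(SUPPLY_ELASTICITY, price_increase)
lost_consumption = pct_quantity_change(FLUID_DEMAND_ELASTICITY, price_increase)
# extra_production is about +1.1 percent; lost_consumption about -0.7 percent
```

The surplus milk implied by these two opposing changes is what depresses the manufacturing-class price, and hence noncompact farm revenue, in Rosenfeld's model.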
The result was that the 29-state compact had a smaller impact on dairy farm revenue in noncompact states ($228 million as opposed to $374 million).

Comparison of Studies Reviewed

The five studies we reviewed and analyzed used a variety of models, assumptions, and data sets. They also varied in terms of the dairy compact alternatives they examined. To allow a comparison, table 82 summarizes the key features of these studies.

Appendix VIII: Comments From the Executive Director, NEDC

The following are GAO's comments on the NEDC's written response dated September 7, 2001, to our draft report.

GAO Comments

1. We recognize that the National Agricultural Statistics Service revised its estimates of New England's milk production for 2000. However, our estimates of the impacts of the NEDC are based primarily on 1999 data. While we also used preliminary data for 2000 to estimate the impacts for that year, we note that these data are preliminary and, as a result, we did not conduct sensitivity analyses for that year. Further, a 2-percent adjustment in New England's milk production would be unlikely to affect our estimates of the NEDC's impact on noncompact regions in the country.

2. We recognize that before USDA's milk marketing order regulatory reforms took effect in January 2000, New York was in a different milk marketing order than were the New England states. However, we do not model New York as being in a different milk marketing order. Rather, the dairy model that we used to estimate the impact of compacts aggregates states on a regional basis. As discussed in appendix II, the states included in the model's Northeast region include the six NEDC states, Delaware, the District of Columbia, Maryland, New Jersey, New York, and Pennsylvania. Because the model is a regional model, the NEDC is represented as being part of the Northeast region. As a result, the model is unable to estimate the impact of compact states on noncompact states within the same region.

3.
We disagree that the report misrepresents the regulation's design and function. We state that all raw milk used for and sold as fluid milk in the six NEDC states is subject to the NEDC's regulations, not just milk produced by dairy farmers in the six NEDC states. We have revised the report to explicitly state that farmers in New York may receive NEDC payments. However, the impact of the NEDC on dairy farmers in New York was beyond the scope of work that we agreed to perform for Senator Kohl.

4. We concur that New York dairy farmers have received NEDC payments, based on data developed by the NEDC commission. The report, however, does not state that $50 million has been provided to New York dairy farmers. We have revised the report to include data published by the NEDC that indicate that about 1,300 New York dairy farmers have received NEDC payments that have averaged, on an annual basis, $9,812 since the NEDC began in July 1997.

5. We disagree that the model omits from consideration the regulatory treatment of plants located outside New England that sell packaged milk in New England—partially regulated plants. The model provides for shipments of packaged milk between compact and noncompact regions. A discussion of how this is modeled is included in appendix II.

6. USDA officials and other dairy economists who have analyzed the NEDC's impact on the premium structure in New England told us that the NEDC has had the effect of eroding much of the market-driven over-order premium that processors had been paying prior to the NEDC's establishment, as opposed to premiums that cover services provided by cooperatives or handlers. This effect would vary, however, depending upon market forces. Such market forces could fluctuate from month to month and year to year. Much of the data needed to determine the specific impact that the NEDC has had on the premium structure in New England are proprietary in nature, and thus we do not have access to these data.

7.
The report states that the trends regarding farm attrition rates and production in the NEDC states and the rest of the country were similar, both before and after the compact. This is not to say that the percentage change in the NEDC states was identical to the percentage change in the rest of the country. Regarding attrition, the number of farms has been steadily declining both within the NEDC states and in the rest of the country, but at a slightly different rate. 8. We disagree that the report concludes that the NEDC had no impact on farm attrition. We note that the percentage reduction declined slightly following the NEDC's establishment but conclude that it is difficult to determine the extent to which the NEDC, relative to other factors, may have changed farm attrition. 9. We disagree that the herd size discussion is less helpful. We believe that providing a longer-term perspective that includes national trends provides a useful context for discussing dairy policy options. 10. We agree that the NEDC's impact on retail milk prices is a complex question. Our report states that many factors affect retail prices, in addition to the farm-level price of milk. We revised the report to recognize that retail milk prices, after increasing by about 20 cents per gallon in July 1997, subsequently fell by as much as 5 to 7 cents per gallon for several months. 11. We relied on USDA's Food and Nutrition Service's analysis of the NEDC's impact on the Food Stamp Program. On the basis of its analysis, USDA was unable to determine if the NEDC increased program benefit levels. According to USDA, if the NEDC's impact on retail milk prices in the NEDC states had caused a $1 increase in national benefit levels, this would have resulted in an additional $60 million in federal funding per year. Aside from the issue of whether the NEDC has increased federal costs, USDA indicated that the NEDC has increased Food Stamp Program participant costs. 
Appendix IX: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to those named above, M. Shawn Arbogast, Venkareddy Chennareddy, Jay R. Cherlow, Nancy L. Crothers, Oliver H. Easterwood, Barbara J. El Osta, and Marcia B. McWreath made key contributions to this report.

Related GAO Products

Dairy Industry: Information on Milk Prices and Changing Market Structure (GAO-01-561, June 15, 2001).

Fluid Milk: Farm and Retail Prices and the Factors That Influence Them (GAO-01-730T, May 14, 2001).

Dairy Products: Imports, Domestic Production, and Regulation of Ultrafiltered Milk (GAO-01-326, Mar. 5, 2001).

Dairy Industry: Information on Prices for Fluid Milk and the Factors That Influence Them (GAO/RCED-99-4, Oct. 8, 1998).

Dairy Industry: Information on Marketing Channels and Prices for Fluid Milk (GAO/RCED-98-70, Mar. 16, 1998).
U.S. dairy farmers produced 167.7 billion pounds of unprocessed, raw milk in 2000. Federal and state dairy programs influence the minimum prices paid to farmers for raw milk. These prices are based on how the raw milk is to be used. Minimum prices set for raw milk to be used for making drinking milk (fluid milk) are higher than those for milk used for manufacturing cheese, butter, and other dairy products. About 70 percent of the raw milk produced in the United States is regulated under the U.S. Department of Agriculture's (USDA) federal milk marketing order program. The 1996 farm bill established another pricing program, the Northeast Interstate Dairy Compact (NEDC), which is run by a commission that sets a minimum price for raw milk sold as fluid milk in six New England states. The NEDC works in conjunction with federal and state dairy programs to establish an alternative minimum price for raw milk in the Compact states. When the monthly NEDC minimum price exceeds the federal marketing order or state minimum price, the NEDC price becomes the minimum price. Congress is now considering legislation that would reauthorize and expand the NEDC and establish additional interstate dairy compacts. This report reviews the potential economic impacts of different compact alternatives.
NOAA Corps History and Current Status

The organization that became NOAA was established in 1807, and in 1836 it officially became known as the Coast Survey. The Survey dispatched technical and scientific teams to survey the uncharted U.S. coastline and relied on the Army and the Navy to supply personnel to augment the organization’s civilian employees. After the Civil War, the Army withdrew from the Survey’s work; the Navy withdrew during the Spanish-American War, leaving the work to be done solely by the employees of the newly named Coast and Geodetic Survey. After the Army and the Navy withdrew their personnel, many of the Coast and Geodetic Survey’s civilian employees working in the field continued (1) maintaining a military-like operation with distinct lines of authority, (2) wearing Navy uniforms, and (3) giving and taking orders. At the outbreak of World War I, ships and men qualified to operate them were needed immediately to augment the military forces. The Coast and Geodetic Survey was the only federal civilian agency that could respond to these requirements. Accordingly, in 1917, Congress passed legislation authorizing the President to transfer the Survey’s ships and men to the Navy and War Departments for the duration of the war and officially giving military rank to Coast and Geodetic Survey field officers when serving in the Army or Navy. The Joint Service Pay Act of 1920 extended the Navy’s pay, allowances, and retirement system to the members of the Coast and Geodetic Survey who held ranks equivalent to Navy officers. In World War II, about half of the commissioned officers and ships of the Coast and Geodetic Survey were temporarily transferred to the armed services. Officers’ duties included training amphibious troops in seamanship and navigation, serving as battalion observation officers, and executing hydrographic surveys in advance of fleet operations in the Aleutian Islands and the Western Pacific.
At the end of the war, all Survey ships and officers were returned to the Coast and Geodetic Survey and to civilian duties. However, the Corps continued to exist, and its officers retained their military ranks and compensation. In 1965, the Coast and Geodetic Survey became the Environmental Science Services Administration (ESSA), and in 1970, ESSA became NOAA. NOAA is composed of five line offices—(1) the National Marine Fisheries Service; (2) the Office of Oceanic and Atmospheric Research; (3) the National Weather Service; (4) the National Ocean Service; and (5) the National Environmental Satellite, Data, and Information Service—and the Office of the Administrator. Corps officers are assigned to work in all component offices of NOAA. Table 1 provides Corps officers’ assignments to NOAA’s component offices in April 1995. Corps officials said officers can expect to serve one-third of their careers in each of the following work categories: (1) sea duty; (2) shore duty that involves responsibilities in marine centers, vessel support, geodetic surveys, or aircraft operations; and (3) shore duty that involves management and technical support throughout NOAA. In October 1994, the Corps had approximately 400 commissioned officers. As a result of general downsizing in the Department of Commerce, the Corps was reduced to 332 officers as of July 1, 1996. According to a Corps official, the ultimate downsizing goal was to reduce the number of officers to 285 by the year 2000. NOAA has since expressed an interest in eliminating the Corps and using civilian employees to carry out the Corps’ functions. In January 1996, NOAA’s Administrator announced that the NOAA Corps would begin a transition to civilian status on October 1, 1996, and directed that the transition be completed within 6 months. He asked the Director of NOAA Corps operations to develop an implementation plan for civilianizing the Corps. NOAA officials said that plan was being reviewed by the Secretary of Commerce. 
NOAA Corps’ Similarity to and Differences From the Military

Corps members’ entitlement to military ranks and military-like compensation, including eligibility for retirement at any age after 20 years of service, was an outgrowth of their temporary service with the armed forces during World Wars I and II. The NOAA Corps has not been incorporated into the armed forces since World War II, and DOD’s war mobilization plans envision no role for the Corps in the future. Corps officers continue to receive virtually the same pay and benefits (including retirement) as the military. A 1984 DOD report provided a detailed discussion of the criteria and principles used to justify the military compensation system. According to the report, the main purpose of the military compensation system is to ensure the readiness and sustainability of the armed forces. Military personnel can be assigned at any time to any locations the services see fit, regardless of members’ personal preferences or risks. In other words, the military compensation system is based on the premise that individual aspirations and preferences are subordinated to the good of the service. The NOAA Corps is not considered an armed service, and Corps officers are not subject to the Uniform Code of Military Justice, which underlies how military personnel are managed. Accordingly, NOAA cannot press criminal charges or pass sentence against an officer who disobeys orders, and Corps officers can quit the Corps without legal sanctions. Corps officials said the essential functions of the uniformed Corps are to serve as deck officers aboard NOAA ships and to be a mobile cadre of professionals who can be assigned with little notice to any location and function where their services are necessary, often in hazardous or harsh conditions. We found that some Corps assignments are of this nature, but civilian employees in other agencies are often assigned to duties similar to those of the Corps.
For example, the Environmental Protection Agency (EPA), the National Transportation Safety Board, and the Federal Emergency Management Agency use civilian employees to respond quickly to disasters and other emergency situations. Moreover, EPA and the Navy use ships operated by civilian employees or contractors in conducting their oceanic research. Officials from these agencies said they have experienced no problems in using civilian deck officers on the vessels. Also, NOAA ships have been operated on occasion by Wage Marine (civilian) deck officers, and NOAA officials termed this approach successful.

Potential Cost Reduction Resulting From Civilianizing the NOAA Corps

NOAA contracted with Arthur Andersen LLP to determine the comparative costs of using civilian employees rather than Corps officers to carry out the Corps’ functions. The contractor’s report was issued August 30, 1995. We examined the contractor’s approach and methodology and generally found them to be similar to those we would have used. Thus, other than making an adjustment we believed was necessary for a more complete comparison, we accepted the contractor’s estimates of the comparative costs of using Corps officers and civilian employees. On the basis of the contractor’s report and the adjustment we made, we estimated that the cost to the government would have been about $661,000 lower during the year July 1994 through June 1995 if civilian employees had been used. If the Corps is downsized as intended, the estimated cost savings would be smaller in subsequent years.

Arthur Andersen LLP Cost Comparison Study

The Arthur Andersen LLP report concluded that civilianization of the Corps would increase government costs by $573,000 a year. This estimate was based on actual costs incurred during the year ending June 30, 1995, and used a Corps strength of 384 officers. Table 2 shows the Arthur Andersen LLP estimates.
Our Adjustment to the Contractor’s Estimates

Arthur Andersen LLP did not include in its comparison the federal income tax advantage Corps officers receive from their housing and subsistence allowances. Like members of the military, NOAA Corps officers pay no federal income taxes on these allowances. As DOD explained, the “cost” to the government arising from this tax advantage comes in the form of a loss to the U.S. Treasury of the federal income taxes that would otherwise have been paid if the allowances were taxable. Federal civilian employees receive no such tax advantages; they must pay their living expenses from their fully taxable salaries. A DOD publication pointed out that the actual federal tax benefit that an individual member realizes is governed by many considerations. These considerations include (1) the aggregate amount of a member’s (and his or her spouse’s) income, both earned and unearned; (2) the amount of the member’s housing and subsistence allowances; (3) the member’s marital status and number of dependents; (4) whether the member takes the standard deduction or itemizes deductions for federal income tax purposes; and (5) whether the member is entitled to other types of tax exclusions. DOD developed a series of numerical estimates of the tax advantages to members using certain assumptions related to these factors. The publication noted that members do not actually receive the tax advantage in cash or in kind. Accordingly, it is not a cost item in DOD’s budget, nor is it in NOAA’s budget. According to its report, Arthur Andersen LLP did not include Corps members’ tax advantage as a cost of maintaining the Corps because it did not represent “costs incurred by the Federal Government.” However, because the tax advantage represents a revenue loss to the government and is of considerable monetary value to Corps members, we believe it should be included in any cost comparison.
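The adjustment works out as simple arithmetic. The sketch below uses the Arthur Andersen LLP figures and the DOD-derived tax-advantage estimate reported in this section (amounts in dollars, for the year ending June 30, 1995); no new figures are introduced.

```python
# Figures from the cost comparison (year ending June 30, 1995).
civilian_cost = 30_281_000   # estimated cost of an all-civilian workforce
aa_difference = 573_000      # Arthur Andersen LLP: civilianization would
                             # increase costs by this amount
corps_budget_cost = civilian_cost - aa_difference   # Corps cost as AA measured it
tax_advantage = 1_234_000    # forgone federal income tax on Corps officers'
                             # nontaxable housing and subsistence allowances

corps_total_cost = corps_budget_cost + tax_advantage      # 30,942,000
savings_from_civilianizing = corps_total_cost - civilian_cost  # 661,000
```

Because the tax advantage ($1,234,000) exceeds the contractor's estimated cost increase ($573,000), counting forgone tax revenue flips the sign of the comparison from a cost increase to a net reduction.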
Since NOAA Corps officers receive the same base pay and housing and subsistence allowances as military officers at the same ranks, we used DOD’s tax advantage estimates to estimate the tax advantage afforded to Corps members. We estimated that the annual tax advantage associated with the housing and subsistence allowance amounts used in the Arthur Andersen LLP study would be $1,234,000 a year. Adjusting the Arthur Andersen LLP study results by the estimated tax advantage amount results in a total government cost for the Corps of $30,942,000 for the year, compared with the estimated $30,281,000 cost of using civilian employees—a difference of $661,000. If a decision were made to civilianize the NOAA Corps, whether there would be any actual cost reductions would depend, in large part, on the manner in which a transition to civilian employment would be carried out, including the period of time over which the transition would occur. Any decision to replace Corps officers with civilian employees could be implemented in a number of ways. The possibilities range from requiring all officers to immediately convert to civilian employment, to longer-range measures such as allowing all current officers to remain in place until retirement or other separation and requiring all new entrants to be civilian employees. Or, perhaps all officers with a specific number of years in the Corps could be allowed to continue in the Corps until retirement or other separation. The amount of transition costs would also depend on how considerations such as the following were resolved. (1) What retirement benefits or credits are given to officers for the time they spend in the Corps before converting to civilian employment and the civilian employee retirement system. (2) What resources would be required to recruit, train, and retain civilian employees that might be needed to replace Corps officers who opt to leave federal service. 
(3) The amount of additional resources, if any, that would be required to administer the civilian workforce at NOAA after civilianizing the Corps and its administrative personnel.

A plan of action that addresses each of the above factors and other possible considerations would be needed before estimates of the transition costs involved could be determined.

Agency Comments

The Department of Commerce provided written comments on a draft of this report. The comments and our responses are discussed below. The Department’s comments are provided in their entirety in appendix II. Although the Department expressed concerns about certain information in the report, it acknowledged that a legislative proposal (prepared by the Department) to “disestablish” the Corps was pending clearance within the administration. The Department questioned the appropriateness of our applying DOD’s criteria for military compensation to the NOAA Corps. It said the criteria focused exclusively on the military services, rather than on uniformed services in general. In our opinion, the criteria we used were appropriate. The Corps’ compensation system, generally the same as the military compensation system, was legislatively established after some Corps officers were temporarily assigned to the military during World War I. Thus, in evaluating whether the Corps should continue to receive military-like compensation, we believe the application of the criteria DOD used to justify the military compensation system is reasonable. The Department noted that its goal for downsizing the Corps (if the proposal to “disestablish” the Corps is not accepted) is to have a Corps strength of 285 officers by the year 2000, rather than 280 as stated in the draft report. We changed the report to reflect this updated estimate.
According to the Department, the report’s discussion of the history of the Corps and how Corps officers came to receive ranks and compensation similar to the military should have included more detailed information. We included additional historical information consistent with the Department’s suggestions. Similarly, the Department suggested that, to be more complete, the report should acknowledge that Corps officers are subject to the Uniform Code of Military Justice when serving with or assigned to the armed forces. We agree that this is the case. However, the report section cited by the Department already pointed out this exception to the general rule in a footnote. Accordingly, we did not believe a change was needed to address this comment. The Department expressed an opinion that the report did not sufficiently address the ways in which service with the Corps is similar to military service. We disagree. The report discussed areas of similarity between Corps and military service mentioned by Corps officials during our review, but it also pointed out that civilian employees in other agencies were often subject to the same conditions of employment. Moreover, many of the similarities discussed in the Department’s comments exist because the Corps is compensated under a military-like system, not because the Corps has responsibilities like the military. It should also be noted that the criteria for military compensation articulated in the DOD report are based on the need for inducements and incentives to maintain a force necessary “to insure successful accomplishment of the United States national security objectives.” Corps officers have not been involved in meeting national security objectives since World War II. We also provided a draft of the report to DOD. We were advised that, after reviewing the draft, DOD had no comments. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this report. At that time, we will send copies to the Secretaries of Commerce and Defense and other interested parties. We will also make copies available to others upon request. If you have questions concerning this report, please telephone me or Timothy P. Bowling, Associate Director, at (202) 512-8676. Major contributors to this report are listed in appendix III. Objective, Scope, and Methodology The objective of this report is to provide information on the operations of the National Oceanic and Atmospheric Administration (NOAA) Commissioned Corps. We were asked to address why the NOAA Corps exists; what the Corps officers’ duties are; how the Corps is similar to and different from the military; and what savings, if any, might result from not using uniformed personnel to carry out current Corps functions. To gather the information on the continuing need for the Corps, we reviewed NOAA Corps historical material and interviewed and obtained documentation from officials of NOAA, including the Office of NOAA Corps Operations; the Department of Defense (DOD), including the Departments of the Army and the Navy; the Woods Hole Oceanographic Institution; the National Science Foundation; the National Transportation Safety Board; the Federal Emergency Management Agency; and the Environmental Protection Agency. To compare the costs of using uniformed personnel or civilian employees to carry out Corps duties, we reviewed the findings in an August 1995 report prepared by Arthur Andersen LLP under a contract with NOAA. We examined the contractor’s approach and methodology and generally found them to be similar to those we would have used.
Other than making an adjustment we believed was appropriate to reflect the estimated tax advantages Corps officers receive through their nontaxable housing and subsistence allowances, we accepted the contractor’s findings as valid estimates of the comparative costs of using Corps officers and civilian employees. It should be noted that we did not examine whether the Corps’ functions or the number of persons used to accomplish those functions were necessary or could be changed as the result of civilianization. Thus, the report does not address issues such as whether civilianization of the Corps could present opportunities for possible savings through restructuring or consolidating NOAA operations. Neither did we examine the possibility of contracting with private companies, rather than using civilian employees, to carry out the Corps’ current functions. We did our work in Washington, D.C.; Narragansett, Rhode Island; and Woods Hole, Massachusetts, between November 1994 and January 1996. Our work was done in accordance with generally accepted government auditing standards. The Department of Commerce provided written comments on a draft of this report. A copy of the letter is included as appendix II. The Department of Defense also reviewed a draft of the report and had no comments. Comments From the Department of Commerce Major Contributors to This Report General Government Division, Washington, D.C. Robert E. Shelton, Assistant Director, Federal Management and Workforce Issues Nancy A. Patterson, Assignment Manager Philip Kagan, Technical Advisor Steven J. Berke, Evaluator-in-Charge Marlene M. Zacharias, Evaluator Assistant
Pursuant to a congressional request, GAO reviewed the operations of the National Oceanic and Atmospheric Administration's (NOAA) Commissioned Corps, focusing on: (1) whether there continues to be a need for a commissioned corps with military-like pay, allowances, and benefits; and (2) what the costs would be if federal civilian employees performed the Corps' functions. GAO found that: (1) the NOAA Corps carries out civilian, rather than military, functions; (2) Corps officers operate and manage NOAA research and survey ships that collect the data needed to support fishery management plans, oceanographic and climate research, and hydrographic surveys; (3) Corps officers' entitlement to military ranks and military-like compensation was an outgrowth of their temporary assignments to the armed forces during World Wars I and II; (4) the Department of Defense's war mobilization plans envision no role for the Corps in the future; (5) Corps officers are not subject to the Uniform Code of Military Justice; (6) the government would realize estimated net savings of $661,000 by converting the Corps to civilian status; and (7) a general downsizing in the Department of Commerce reduced the number of Corps officers to 332 as of July 1996, with a goal of 285 officers by 2000.
CMS Has Improved Key Strategies for Preventing and Recouping Improper Payments, but More Can Be Done CMS has made progress strengthening provider enrollment procedures and prepayment controls in the Medicare program to help ensure that payments are made correctly the first time, but the agency could further improve upon its efforts by implementing additional enrollment procedures and prepayment strategies. Likewise, additional improvements to CMS’s postpayment claims review activities could increase their efficiency and effectiveness. CMS Has Implemented Certain Enrollment Procedures to Better Screen Providers, but Has Not Completed Others CMS has implemented certain provider enrollment screening procedures authorized by the Patient Protection and Affordable Care Act (PPACA) and put in place other measures intended to strengthen existing procedures. The changes to provider screening procedures are intended to address past weaknesses identified by GAO and HHS’s Office of Inspector General (OIG) that allowed entities intent on committing fraud to enroll in Medicare. Blocking the enrollment of such providers helps to prevent Medicare from making improper payments. Specifically, CMS added screenings of categories of provider enrollment applications by risk level and contracted with new national enrollment screening and site visit contractors. Screening Provider Enrollment Applications by Risk Level: CMS and the OIG issued a final rule in February 2011 to implement many of the new screening procedures required by PPACA. CMS designated three levels of risk—high, moderate, and limited—with different screening procedures for categories of Medicare providers at each level. Providers in the high-risk level are subject to the most rigorous screening.
Based in part on our work and that of the OIG, CMS designated newly enrolling home health agencies and suppliers of durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) as high risk, and designated other providers as lower risk. Providers at all risk levels are screened to verify that they meet specific requirements established by Medicare, such as having current licenses or accreditation and valid Social Security numbers. High- and moderate-risk providers are also subject to unannounced site visits. Further, PPACA authorizes CMS to require fingerprint-based criminal background checks of providers and suppliers depending on the risks presented. In March 2014, CMS awarded a contract to a Federal Bureau of Investigation-approved contractor that will enable the agency to access criminal history information to help conduct those checks of high-risk providers and suppliers. CMS has indicated that the agency will continue to review the criteria for its screening levels and will publish changes if the agency decides to update the assignment of screening levels for categories of Medicare providers. Doing so could become important because the Department of Justice and HHS reported multiple convictions, judgments, settlements, or exclusions against types of providers not currently at the high-risk level, including community mental health centers and ambulance providers. National Enrollment Screening and Site Visit Contractors: CMS contracted with two new contractors at the end of 2011 to assume centralized responsibility for two functions that had been the responsibility of multiple contractors. One of the new contractors is conducting automated screenings to check that existing and newly enrolling providers and suppliers have valid licensure, accreditation, and a National Provider Identifier, and are not on the OIG list of providers and suppliers excluded from participating in federal health care programs. 
The second contractor conducts site visits of providers to determine whether sites are legitimate and the providers meet certain Medicare standards. A CMS official reported that, since the implementation of the PPACA screening requirements, the agency had revoked over 17,000 suspect providers’ privileges to bill the Medicare program. Although CMS has taken actions to strengthen the provider enrollment process, we and the OIG have found that CMS has not taken other actions authorized by PPACA that could improve screening and ultimately reduce improper payments. They include issuing a rule to require surety bonds for certain providers and suppliers as well as a rule on provider and supplier disclosure requirements. Surety Bonds: PPACA authorized CMS to require a surety bond for certain types of at-risk providers and suppliers. Surety bonds may serve as a source for recoupment of erroneous payments. DMEPOS suppliers are currently required to post a surety bond at the time of enrollment. CMS told us in April 2014 that the agency collected about $1.6 million in DMEPOS supplier overpayments between February 2012 and March 2013. However, also in April 2014, CMS reported that it had not scheduled for publication a proposed rule to impose a surety bond requirement as authorized by PPACA for other types of at-risk providers and suppliers—such as home health agencies and independent diagnostic testing facilities. Providers and Suppliers Disclosure: CMS has not yet scheduled the publication of a proposed rule for increased disclosures of prior actions taken against providers and suppliers enrolling or revalidating enrollment in Medicare, such as whether the provider or supplier has been subject to a payment suspension from a federal health care program.
As we reported in April 2012, agency officials indicated that developing the additional disclosure requirements has been complicated by provider and supplier concerns about what types of information will be collected, what CMS will do with it, and how the privacy and security of this information will be maintained. We are currently examining the ability of CMS’s provider enrollment system to prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in Medicare. Specifically, we are assessing the process used to enroll and verify the eligibility of Medicare providers in Medicare’s Provider Enrollment, Chain, and Ownership System (PECOS) and the extent to which CMS’s controls are designed to prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in PECOS. CMS Has Improved Prepayment Controls, but More Could Be Done to Prevent Improper Payments CMS has enhanced its efforts to reduce improper payments by improving prepayment controls, particularly prepayment edits to deny claims that should not be paid. CMS has stated that one of its key goals is to pay claims properly the first time—that is, to ensure that payments go to legitimate providers in the right amount for reasonable and necessary services covered by the program for eligible beneficiaries. To do so, among other things, CMS uses prepayment controls such as prepayment edits—instructions that CMS’s contractors, including Medicare Administrative Contractors (MAC), program into claims processing systems that compare claim information to Medicare requirements in order to approve or deny claims or to flag them for additional review. 
For example, some prepayment edits are related to service coverage and payment, while others are implemented to verify that the claim is properly filled out, that providers are enrolled in Medicare, or that patients are eligible Medicare beneficiaries. Most of the edits implemented by CMS and its contractors are automated and applied to all claims; if a claim does not meet the criteria of the edits, it is automatically denied. Other prepayment edits are manual; they flag a claim for individual review by trained staff to determine whether it should be paid. We previously evaluated CMS’s implementation of prepayment edits and found that while use of prepayment edits saved Medicare at least $1.76 billion in fiscal year 2010, the savings could have been greater had prepayment edits been used more widely. For example, based on our analysis of a limited number of national policies and local coverage determinations in 2012, we identified $14.7 million and $100 million in payments, respectively, that were inconsistent with policies and determinations and were therefore improper. Such inconsistencies could have been identified using automated edits. As we recommended, CMS has taken steps to improve the development of certain prepayment edits that are implemented nationwide. For example, the agency has centralized the development and implementation of automated edits based on a type of national policy called national coverage determinations. In addition, CMS has modified its processes for identifying provider billing of services that are medically unlikely, in order to prevent circumvention of automated edits designed to identify an unusually large quantity of services provided to the same patient. However, as of April 2014, CMS had not fully implemented several of the recommendations we made in 2013 that we believe would promote greater use of prepayment edits and better ensure proper payment.
For example, the agency did not include, in its written guidance to agency staff on procedures for ensuring consideration of automated edits, time frames for making decisions on whether an edit would be developed, nor did it include requirements for assessing the effects of corrective actions taken. In addition, although CMS has taken initial steps to improve the data it collects about local prepayment edits implemented by its contractors, it had not yet determined a final process for how it would obtain and disseminate information about these edits across contractors. Nor does CMS require contractors to share information with each other about the underlying policies and savings related to their most effective edits, as the agency currently lacks a database to collect such information. Having information about the most effective local edits would enable contractors to determine the most appropriate approach for implementing Medicare payment policy effectively, which could help reduce improper payments. To help prevent improper payments, CMS also implemented a specific type of national edit, called a Medically Unlikely Edit (MUE), which limits the amount of a service that is paid when billed by a provider for a beneficiary on the same day. The limits for certain services that have been fraudulently or abusively billed are unpublished to deter providers from billing up to the maximum allowable limit. In 2013, we reported that CMS may be missing opportunities to prevent improper payments because it has not systematically evaluated MUE limits to determine whether national edits should be revised to reflect more restrictive local limits. In addition, we found that CMS and its contractors did not have a system in place for examining claims to determine the extent to which providers may be exceeding unpublished MUE limits and whether payments for such services were proper.
As a result, we recommended that CMS examine contractor edits to determine whether any national unpublished MUE limits should be revised, consider reviewing claims to identify providers that exceed the unpublished MUE limits, and determine whether the provider’s billing was proper. HHS agreed with these recommendations, but as of April 2014, CMS had not implemented them. Postpayment Claims Reviews Have Increased in Recent Years, but More Could Be Done to Increase Consistency across Contractors Medicare uses four types of contractors to conduct postpayment claims reviews to identify and recoup overpayments; all four types of contractors apply the same Medicare coverage and payment guidelines. MACs, in addition to conducting prepayment claims reviews, conduct postpayment claims reviews to help ensure accurate payment and specifically to identify payment errors. This includes identifying ways to address future payment errors—for example, through automated controls that can be added on a prepayment basis and by educating providers with a history of a sustained or high level of billing errors to ensure that they comply with Medicare billing requirements. Zone Program Integrity Contractors (ZPIC), the CMS contractors responsible for detecting and investigating fraud, perform postpayment claims reviews as a part of their investigations. Therefore, ZPIC reviews generally focus on providers whose billing patterns are unusual or aberrant in relation to those of similar providers in order to identify potential fraud. The Comprehensive Error Rate Testing (CERT) contractor estimates the Medicare FFS improper payment rate by using the results of postpayment claims reviews conducted on a sample of claims processed by the MACs. CERT reviews may also help identify program vulnerabilities by measuring the payment accuracy of each MAC, and the Medicare FFS improper payment rate by type of claim and service. Recovery audit contractors (RAC) conduct postpayment claims reviews to identify improper payments.
Use of RACs was designed to be in addition to MACs’ existing claims review processes, since the number of postpayment reviews conducted by MACs and other contractors was small relative to the number of claims paid and the amount of improper payments. Whereas RACs are paid on a contingency fee basis based on the amount of improper payments they recoup, the other three contractors are paid under the terms of their contract using appropriated funds. In February 2014, CMS announced a “pause” in the RAC program as the agency makes changes to the program and starts a new procurement process for the next round of recovery audit contracts. CMS said it anticipates awarding all five of these new Medicare FFS recovery audit contracts by the end of summer 2014. All four types of contractors conduct complex reviews of claims. Complex reviews involve manual examinations of each claim and any related documentation requested and received from the provider, including paper files, to determine whether the service was billed properly, and was covered, reasonable, and necessary. Licensed clinical professionals, such as licensed practical nurses, and certified coders typically perform the reviews. Contractors have physician medical directors on staff who provide guidance about making payment determinations on the basis of medical records and other documentation and who may discuss such determinations with providers. In addition to conducting complex reviews, RACs also conduct automated and semiautomated postpayment claims reviews. Automated reviews use computer programming logic to check claims for evidence of improper coding or other mistakes. 
Automated postpayment reviews analyze paid claims and identify those that can be determined to be improper without examining any additional documentation, such as when a durable medical equipment supplier bills for items that should have been included as part of a bundled payment for a skilled nursing facility stay. Semiautomated reviews use computer programming logic to check for possible improper payments, but allow providers to send in information to rebut the claim denial before it is implemented. If providers send in information, RAC staff review it before making a final determination. Our prior work has found that the overall number of postpayment claims reviews has been increasing in recent years, but remains a very small percentage of total Medicare claims submitted. In 2012, the most recent year for which we have data, the four types of Medicare postpayment review contractors conducted about 2.3 million claims reviews, a 55 percent increase from 2011. RACs conducted about 2.1 million, or 90 percent, of these reviews in 2012. All four types of contractors except the CERT contractor increased the number of claims they reviewed in 2012, as shown in table 1. (Notes to table 1: Reviews completed by MACs do not include the reviews performed by the three legacy contractors that were continuing to provide claims administration services as of June 2013. RAC data are reported for fiscal years 2011 and 2012, rather than calendar years. Automated reviews use computer programming logic to check claims for evidence of improper coding or other mistakes; only the RACs conducted automated postpayment reviews. RAC complex review counts are based on the number of additional documentation requests received and also include semiautomated reviews. See GAO, Medicare Program Integrity: Increasing Consistency of Contractor Requirements May Improve Administrative Efficiency, GAO-13-522 (Washington, D.C.: July 23, 2013).)
While the number of postpayment reviews has increased significantly, the percentage of Medicare claims reviewed after payment remains small. The 2.3 million reviews performed by these four types of contractors accounted for less than 1 percent of the more than 1 billion FFS claims paid annually. About 1.4 million of the reviews were complex reviews, which required the submission of documentation for review. Notably, the increase in postpayment claims reviews is one factor causing backlogs and delays at the third level of the Medicare appeals process. Medicare providers and suppliers can appeal prepayment and postpayment claims determinations through the Medicare appeals process, which offers four levels of administrative review followed by judicial review. The first two levels of appeals for FFS claims are managed by two CMS contractors—the MAC that processed the original claim and a Qualified Independent Contractor, in that order. The third level of appeal is to an Administrative Law Judge (ALJ) at the Office of Medicare Hearings and Appeals (OMHA), a separate staff division within HHS. A Part A or Part B appeal filed with OMHA should generally be decided within 90 days of the appeal being filed. However, due to a backlog of cases, OMHA currently reports that the average time for appeals to be decided in fiscal year 2014 is 346 days. The number of appeals filed at the ALJ level increased from 59,601 in fiscal year 2011 to 384,651 in fiscal year 2013, according to OMHA. OMHA’s website currently says that new appeals will take about 28 months before they are put on an ALJ’s hearing docket. OMHA has reported that part of the reason for the backlog in Medicare appeals is the increase in postpayment contractor activities. We have been asked to examine the Medicare appeals process, including the reasons for the appeals backlog and what HHS is doing to address it.
We have made recommendations to CMS in the past to improve the postpayment claims review process, and we continue to do work in this area. In October 2012, we reported on CMS’s Fraud Prevention System (FPS), which uses predictive analytics to analyze Medicare FFS claims. FPS is intended to detect aberrant billing practices as quickly as possible so they can be investigated to determine whether the payments are proper. At the time, we recommended that CMS integrate FPS with Medicare’s payment-processing system to allow for the prevention of payments until suspect claims could be investigated by ZPICs. Although CMS reported in April 2014 that it had integrated the systems, the system still does not have the ability to suspend payment until suspect claims can be investigated. CMS has begun to implement prepayment edits in FPS that automatically deny claims; these edits review each claim against historical claims across all lines of business. In July 2013, we reported that the differences in CMS’s postpayment claims review requirements for the four types of contractors may reduce the efficiency and effectiveness of claims reviews by complicating providers’ compliance with the requirements. For instance, while RACs have to obtain approval from CMS for the billing issues they choose to review on a widespread basis and notify providers and suppliers of those issues on their websites, the other contractors do not. In addition, the minimum number of days that CMS requires a contractor to give a provider to submit additional documentation for a complex review before the claim can be found improper for lack of documentation varies among the contractors, from 30 days for ZPICs to 75 days for the CERT contractor. Staffing requirements and quality assurance requirements also vary among the four types of contractors.
We recommended that CMS examine all postpayment review requirements for contractors to determine whether they could be made more consistent without negative effects on program integrity. We also recommended that CMS reduce differences in those requirements where it can be done without impeding the efficiency of its efforts to reduce improper payments. In commenting on that report, CMS agreed with our recommendations and stated that the agency was beginning to review its requirements for postpayment claims reviews. We are following up on this work with a study reviewing, among other things, whether CMS has strategies for coordinating postpayment review contractors’ claims review activities. In conclusion, given the amount of estimated improper payments in the Medicare program, the imperative for CMS to use all available authorities to prevent and recoup improper payments is clear. Although CMS has taken important steps to strengthen key strategies for identifying and preventing improper payments, the agency must continue to improve upon these efforts. Identifying the nature, extent, and underlying causes of improper payments and developing adequate corrective action processes to address vulnerabilities are essential prerequisites to reducing them. As CMS continues its implementation of PPACA, additional evaluation and oversight will help determine whether implementation of relevant provisions has been effective in reducing improper payments. We are continuing to conduct a body of work that assesses CMS’s efforts to refine and improve its ability to prevent, identify, and recoup improper payments. Notably, we are currently assessing the extent to which CMS’s information system can help prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in Medicare. 
Additionally, we are examining CMS’s oversight of some of the contractors that conduct postpayment reviews of claims including whether CMS has a strategy for coordinating these contractors’ claims review activities. Separately, we have also been asked to examine the Medicare appeals process, including the reasons for the appeals backlog and how it is being addressed. Through this work, we hope to develop further recommendations for CMS to help the agency continue to refine its efforts to reduce improper Medicare payments. Chairman Lankford, Ranking Member Speier, and Members of the Subcommittee, this concludes my prepared remarks. I would be pleased to respond to any questions you may have at this time. GAO Contact and Staff Acknowledgments For further information about this statement, please contact Kathleen M. King at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Lori Achman, Assistant Director; Rebecca Abela; Jennel Lockley; and Jennifer Whitworth were key contributors to this statement. Appendix I: Abbreviations Related GAO Products Medicare Fraud: Progress Made, but More Action Needed to Address Medicare Fraud, Waste, and Abuse. GAO-14-560T. Washington, D.C.: April 30, 2014. Medicare Program Integrity: Contractors Reported Generating Savings, but CMS Could Improve Its Oversight. GAO-14-111. Washington, D.C.: October 25, 2013. Health Care Fraud and Abuse Control Program: Indicators Provide Information on Program Accomplishments, but Assessing Program Effectiveness Is Difficult. GAO-13-746. Washington, D.C.: September 30, 2013. Medicare Program Integrity: Increasing Consistency of Contractor Requirements May Improve Administrative Efficiency. GAO-13-522. Washington, D.C.: July 23, 2013. Medicare Program Integrity: Few Payments in 2011 Exceeded Limits under One Kind of Prepayment Control, but Reassessing Limits Could Be Helpful. GAO-13-430. 
Washington, D.C.: May 9, 2013. Medicare Fraud Prevention: CMS Has Implemented a Predictive Analytics System, but Needs to Define Measures to Determine Its Effectiveness. GAO-13-104. Washington, D.C.: October 15, 2012. Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012. Health Care Fraud: Types of Providers Involved in Medicare, Medicaid, and the Children’s Health Insurance Program Cases. GAO-12-820. Washington, D.C.: September 7, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Medicare: Important Steps Have Been Taken, but More Could Be Done to Deter Fraud. GAO-12-671T. Washington, D.C.: April 24, 2012. Medicare Program Integrity: CMS Continues Efforts to Strengthen the Screening of Providers and Suppliers. GAO-12-351. Washington, D.C.: April 10, 2012. Improper Payments: Remaining Challenges and Strategies for Governmentwide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Expand Efforts to Support Program Integrity Initiatives. GAO-12-292T. Washington, D.C.: December 7, 2011. Medicare Integrity Program: CMS Used Increased Funding for New Activities but Could Improve Measurement of Program Effectiveness. GAO-11-592. Washington, D.C.: July 29, 2011. Improper Payments: Reported Medicare Estimates and Key Remediation Strategies. GAO-11-842T. Washington, D.C.: July 28, 2011. Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. 
Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008. Improper Payments: Status of Agencies’ Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-08-438T. Washington, D.C.: January 31, 2008. Improper Payments: Federal Executive Branch Agencies’ Fiscal Year 2007 Improper Payment Estimate Reporting. GAO-08-377R. Washington, D.C.: January 23, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Due to its size, complexity, and susceptibility to mismanagement and improper payments, GAO has designated Medicare as a high-risk program. In 2013, Medicare financed health care services for approximately 51 million individuals at a cost of about $604 billion, and reported an estimated $50 billion in improper payments—payments that either were made in an incorrect amount or should not have been made at all. Most of these improper payments were made through the Medicare FFS program, which pays providers based on claims and uses contractors to pay the claims and ensure program integrity. This statement focuses on the progress made and steps still to be taken by CMS to improve improper payment prevention and recoupment efforts in the Medicare FFS program. This statement is based on relevant GAO products and recommendations issued from 2007 through 2014 using a variety of methodologies. GAO also updated information by examining public documents and, in April 2014, GAO received updated information from CMS on its actions related to laws and regulations discussed in this statement. The Centers for Medicare & Medicaid Services (CMS), the agency within the Department of Health and Human Services (HHS) that oversees Medicare, has made progress improving improper payment prevention and recoupment efforts in the Medicare fee-for-service (FFS) program, but further actions are needed.

Provider enrollment. CMS has implemented certain provider enrollment screening procedures authorized by the Patient Protection and Affordable Care Act (PPACA) that address past weaknesses identified by GAO and others. The agency has also put in place other measures intended to strengthen existing procedures, but could do more to improve provider enrollment screening and ultimately reduce improper payments. For example, CMS has hired contractors to determine whether providers and suppliers have valid licenses, meet certain Medicare standards, and are at legitimate locations.
CMS also recently contracted for fingerprint-based criminal history checks of providers and suppliers it has identified as high-risk. However, CMS has not implemented other screening actions authorized by PPACA that could further strengthen provider enrollment.

Prepayment controls. In response to GAO's prior recommendations, CMS has taken steps to improve the development of certain prepayment edits—prepayment controls used to deny Medicare claims that should not be paid; however, important actions that could further prevent improper payments have not yet been implemented. For example, CMS has implemented an automated edit to identify services billed in medically unlikely amounts, but has not implemented a GAO recommendation to examine certain edits to determine whether they should be revised to reflect more restrictive payment limits. GAO has found that wider use of prepayment edits could help prevent improper payments and generate savings for Medicare.

Postpayment claims reviews. Postpayment claims reviews help CMS identify and recoup improper payments. Medicare uses a variety of contractors to conduct such reviews, which generally involve reviewing a provider's documentation to ensure that the service was billed properly and was covered, reasonable, and necessary. GAO has found that differing requirements for the various contractors may reduce the efficiency and effectiveness of such reviews. To improve these reviews, GAO has previously recommended CMS examine ways to make the contractor requirements more consistent. CMS reported that it has begun to address these recommendations. Although the percentage of Medicare claims that undergo postpayment review remains very small, GAO has found that the overall number of postpayment claims reviews has been increasing in recent years. HHS has reported that the increase in claims reviews is one factor causing backlogs in the Medicare appeals process.
GAO has ongoing work focused on how CMS could continue its efforts to reduce improper Medicare payments. For instance, GAO is examining the extent to which CMS's provider enrollment system can help prevent and detect the continued enrollment of ineligible providers in Medicare. GAO also has work underway to examine whether CMS has strategies for coordinating postpayment review contractors' claims review activities.
Background

Congress provides the military services with O&M funds for certain expenses, such as pay and benefits for most of DOD’s civilians; operations at military bases; training, education, and medical care for individual service members; and fuel and spare parts for DOD equipment, among other expenses. When developing annual O&M funding requests, the military services report that estimates of their fuel consumption are based on planned activity levels, which can vary by service. For example, the Air Force, Navy, and Marine Corps estimate their fuel consumption based on planned operational and training flying hours. According to Navy and Marine Corps officials, the Navy estimates its fuel consumption based on a planned number of steaming days for ship operations (i.e., the number of days a ship is not in port), and the Marine Corps estimates its fuel consumption for its ground units based on the number of days for planned training exercises. The Army estimates its fuel consumption based on historical fuel usage rates for vehicle miles during training events and operational fuel requirements as determined by Army major commands and Army operations and plans. In general, the military services follow a similar process for estimating fuel consumption requirements, with some differences in the extent to which they use actual fuel consumption data to estimate future fuel consumption. For example, officials from both the Air Force and Navy reported using historical averages to calculate fuel consumption estimates for their flying hour program and ship operations. However, the Air Force uses five years of data for its flying hour program, while the Navy uses three years of data to estimate fuel consumption for ship operations and data from the previous year to calculate fuel consumption estimates for its air operations.
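The historical-averaging approach these officials describe can be illustrated as a simple moving average over recent years of actual consumption, where the only parameter that differs by service is how many years of history are used. This is an illustrative sketch, not the services' actual estimating model, and all figures are hypothetical:

```python
def estimate_consumption(history_mbbl: list[float], years: int) -> float:
    """Estimate next-year fuel consumption as the average of the most
    recent `years` of actual consumption (in millions of barrels)."""
    recent = history_mbbl[-years:]
    return sum(recent) / len(recent)

# Hypothetical actuals, oldest first, in millions of barrels
actuals = [26.1, 25.4, 24.8, 23.9, 23.2]

air_force_style = estimate_consumption(actuals, years=5)  # five-year average
navy_ship_style = estimate_consumption(actuals, years=3)  # three-year average
```

A longer window smooths out one-off swings (such as a canceled exercise) but reacts more slowly to genuine changes in activity levels, which is the basic trade-off in choosing five years of data versus three.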
Officials with two military services—the Army and the Marine Corps—stated that they used other data points to approximate actual fuel consumption in order to calculate a fuel consumption estimate. For example, according to Army officials, the Army uses three to five years of sample test data from ground vehicles and equipment, fuel efficiency rates in technical manuals, and manufacturer’s data for equipment to approximate fuel efficiency for each type of equipment item. Taken together, the Army uses these data points to approximate its actual fuel consumption. According to Marine Corps officials, the Marine Corps bases its fuel consumption requirements on the previous year’s sales data from DLA and adjusts its fuel consumption estimate to reflect changes in operational and training requirements for the budget request year. The military services and other fuel customers use O&M funding to reimburse DOD for the costs of purchasing bulk fuel on the world market to support their operations. The military services calculate their total O&M funding needs for fuel in a given fiscal year by using their planned volume of fuel consumption expressed in millions of barrels of fuel and the standard price per barrel that DOD will charge its fuel customers for fuel. The OUSD Comptroller, in coordination with DLA, estimates and sets a standard price for its fuel and other fuel-related commodities for each budget request. For its fiscal years 2016 and 2017 budget estimates, DOD established the standard price based on two components: the projected cost of refined fuel and operating costs, which cover various overhead and transportation costs. According to DOD officials, in setting the standard price, DOD endeavors to closely approximate the actual per barrel price that will be paid during budget execution, which occurs almost a year later. 
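The funding calculation described above reduces to planned fuel volume multiplied by the standard price per barrel. A minimal sketch; the function name and all figures below are hypothetical, not DOD's:

```python
def om_fuel_funding(planned_barrels: int, standard_price: float) -> float:
    """O&M fuel funding request = planned fuel volume x DOD standard price per barrel."""
    return planned_barrels * standard_price

# Hypothetical service request: 25 million barrels at a $140.00-per-barrel standard price
request = om_fuel_funding(25_000_000, 140.00)
print(f"${request:,.0f}")  # $3,500,000,000
```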
If the actual market price of fuel is higher than the price DOD is charging its customers, DOD will have to pay more for fuel than it is being reimbursed from its customers. If the actual price is lower than the standard price, DOD will be reimbursed with more cash than it anticipated. DOD and military service financial management officials prepare budget justification materials for their O&M funding requests on an annual basis. Beginning in fiscal year 2010, the military services have prepared separate budget justification materials for O&M base and O&M OCO funding requests. O&M base funding is used to pay for enduring day-to-day programs and activities—including fuel for training activities. O&M OCO funding is used to support activities associated with overseas contingency operations. DOD’s Financial Management Regulation governs how the military services formulate these budget requests and communicate them to Congress. Specifically, the Regulation directs statutory and regulatory financial management requirements, systems, and functions for all appropriated and non-appropriated, working capital, revolving and trust fund activities. For fuel consumption estimates, the military services prepare two principal budget exhibits:

Petroleum, Oil, and Lubricants Consumption and Costs budget exhibit (the “OP-26”): Contains information on direct consumption by type of petroleum product. The military services prepare and submit to the OUSD Comptroller three separate exhibits for each budget submission: (1) OP-26A for flying hours; (2) OP-26B for unit fuel costs; and (3) OP-26C for sources of purchases for petroleum, oil, and lubricants consumption. According to DOD’s Financial Management Regulation, the OP-26 is not provided to Congress with the budget justification materials accompanying the President’s annual budget request.
Summary of Price and Program Changes budget exhibit (the “OP-32”): Contains information by specific line items detailing, among other items, Defense-wide Working Capital Fund supplies and materials purchases related to fuel consumption, such as fuel purchases from the DLA’s Defense Fuel Supply Center and locally-purchased fuel. According to DOD’s Financial Management Regulation, the OP-32 is provided to Congress with the budget justification materials accompanying the President’s annual budget request.

DLA, as the department-wide executive agent for bulk petroleum, is tasked with executing supply chain management for all bulk fuel owned by DOD. DLA utilizes the Defense-wide Working Capital Fund to purchase bulk fuel for customers. DOD prepares Defense-wide Working Capital Fund operating and capital budget materials. These budget materials describe DLA’s budget requests, provide justifications for any changes in the budget request from previous years, and report changes in the standard price of fuel across fiscal years. Generally, DOD’s O&M budget justification materials for fuel consumption present data for three years, including actual total obligations for fuel consumption spending for the previous fiscal year, estimated obligations for fuel consumption spending for the current fiscal year, and estimated obligations for fuel consumption spending for the budget request fiscal year. The Defense-wide Working Capital Fund covers DLA’s costs for purchasing bulk fuel and is reimbursed through its sale of fuel to the military services and other customers at a standard price. The standard price is intended to remain unchanged until the next budget year. This helps to shield the military services from market price volatility by allowing the cash balance in the fund to absorb minor fuel price fluctuations. For example, from fiscal years 2010 through 2015, the military services purchased an average of approximately 102 million barrels per year from DOD.
Therefore, a standard price increase of even $1 per barrel would result in a $102 million difference from the military services’ budget requests. According to DOD’s Financial Management Regulation, working capital funds were established to satisfy recurring DOD requirements using a businesslike buyer-and-seller approach, and the goal for the Defense-wide Working Capital Fund is to remain revenue neutral, allowing the fund to break even over time—that is, to neither make a gain nor incur a loss. During the year the budget is executed, the actual price for a barrel of fuel on the world market may be higher or lower than DOD’s standard price. If the actual price is higher, the cash balance in the Defense-wide Working Capital Fund will go down. If the actual price is lower, the cash balance in the fund will go up. To correct for these fluctuations, DOD may adjust the standard price for the following year. For example, DOD may increase the standard price to make up for losses in the previous year and bolster the cash balance in the fund. Alternatively, DOD may decrease the standard price to reimburse the military services, which had paid a higher price the previous year. DOD can also cover fund losses during the execution year by obtaining an appropriation from Congress, transferring funds from another DOD account into the fund, or adjusting the standard price out of cycle. Figure 1 illustrates the process and the main organizations involved in budgeting for fuel. 
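The break-even mechanics described above come down to one line of arithmetic: the fund's cash balance moves by the volume sold times the gap between the standard price charged and the actual price paid. The sketch below is our illustration using the report's 102-million-barrel average; the prices are hypothetical:

```python
def fund_cash_change(barrels_sold: int, standard_price: float, actual_price: float) -> float:
    """Yearly change in Defense-wide Working Capital Fund cash:
    revenue at the standard price minus cost at the actual market price.
    Positive means a gain (actual price below standard); negative, a loss."""
    return barrels_sold * (standard_price - actual_price)

# A $1-per-barrel gap on roughly 102 million barrels moves the fund by about $102 million
gain = fund_cash_change(102_000_000, 140.00, 139.00)  # +$102,000,000
loss = fund_cash_change(102_000_000, 140.00, 141.50)  # -$153,000,000
```

As the text notes, DOD can offset a loss by raising the next year's standard price, transferring funds from another account, or seeking an appropriation.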
Military Services’ Reported Actual Spending on Fuel Consumption Has Differed from Budget Estimates since Fiscal Year 2012, Which Officials Attributed Largely to Changes in Operations and Training

During fiscal years 2012 through 2015, the military services reported a decrease in total obligations for fuel consumption spending, but their reported actual obligations differed from budget estimates during these years, which officials attributed to changes in operations and training that affected the level of fuel consumption. Specifically, each of the military services either over- or underestimated actual obligations for fuel consumption spending when compared to their budget estimates. DOD officials identified changes in operations and training levels during these years as the primary reasons for the differences between actual and estimated spending on fuel consumption, although other factors, such as changes in the standard price DOD charges its fuel customers, have contributed to differences in prior years.

Military Services’ Reported Actual Spending on Fuel Consumption Decreased from Fiscal Years 2012 through 2015 but Differed from Budget Estimates

In fiscal years 2012 through 2015, the military services reported a decrease in total obligations for fuel consumption spending from a high of about $13 billion in fiscal year 2012 to a low of about $10.1 billion in fiscal year 2015. The Army reported the greatest overall decrease in total obligations for fuel consumption spending during these years, from a high of about $3.4 billion in fiscal year 2012 to a low of about $1.3 billion in fiscal year 2015. Decreases reported in total obligations for fuel consumption spending for these fiscal years varied by military service, as shown in figure 2.
Our analysis of DOD’s budget justification materials comparing the military services’ reported actual obligations for fuel consumption spending against their budget estimates found that each of the military services over- or underestimated fuel consumption spending in each fiscal year from 2012 through 2015. For example, the Army underestimated its fuel consumption spending by about $840 million in fiscal year 2012, while the Navy overestimated its spending by about $2.4 billion in fiscal year 2014. The differences in actual obligations and estimated spending reported for each military service are shown in figure 3.

Military Services Identified Various Factors that Contributed to Differences between Actual and Estimated Spending on Fuel Consumption

According to military service officials, differences between actual obligations and estimated spending on fuel consumption are mainly attributable to changes in planned operations and training. For example: Army budget officials told us that fiscal year 2015 marked a change in its mission in Afghanistan, from the end of Operation Enduring Freedom to the beginning of Operation Freedom’s Sentinel. According to these officials, changes in operational missions were the main driver of the difference between its actual and estimated fuel consumption spending for that fiscal year. Air Force financial management officials identified changes in fighter and tanker support during overseas missions as factors that contributed to differences between its actual and estimated fuel consumption spending. One Navy budget official told us that delays in the delivery of ships and equipment can lead to differences between actual and estimated fuel consumption spending.
For example, Navy budget officials cited the delay in deployment of the Littoral Combat Ship in fiscal year 2015, noting that they included fuel consumption spending estimates for these ships in annual budget requests for that year, but the ships were not yet ready to deploy, and thus the fuel consumption spending estimates were overstated. Marine Corps budget officials told us that it is difficult to identify an accurate budget estimate for fuel consumption spending up to 18 months in advance of the year of budget execution, because factors like a change in operational tempo or a sudden need to deploy or redeploy forces can have a significant effect on actual fuel consumption spending. Officials told us that other factors can result in differences between actual and estimated spending on fuel consumption, such as inclement weather or maintenance issues. For example, Air Force and Navy officials stated that inclement weather can affect fuel efficiency for air and ship operations or result in delays or the cancellation of training activities. Army officials stated, for instance, that entire training schedules have been canceled as a result of inclement weather. Unforeseen maintenance issues during the year of budget execution can also have an effect on fuel consumption spending. For example, Army officials stated that funding budgeted for fuel can be used for spare parts and other costs related to operation and maintenance instead of fuel, which has contributed to differences between actual and estimated spending on fuel consumption. Budgetary actions that affect O&M funding levels for fuel have also affected actual consumption spending, according to service officials. In fiscal year 2013, for example, officials reported that actual fuel consumption spending was lower than estimated spending as a result of actions DOD took to address sequestration.
In our prior work, we highlighted several actions identified by DOD officials that DOD and the military services took to address these budgetary reductions. For instance, all four of the military services cancelled or reduced participation in training exercises in fiscal year 2013. Additionally, the Air Force stood down 17 of 62 operational squadrons for 3 months during fiscal year 2013 and reduced flying hours for 10 other squadrons for a period of 1 to 3 months. Military service officials also described changes in the standard price that DOD charges its fuel customers that can result in differences between actual and estimated fuel consumption spending. The military services use the standard price as a key component when developing their O&M budget requests. If DOD changes the standard price actually charged to fuel customers during the year of budget execution, the military services’ O&M budgets can be affected as a result. For instance, in 2014, we reported that from fiscal years 2009 through 2013, the differences between the price DOD paid for fuel and the standard price it charged its fuel customers accounted for, on average, 74 percent of the difference between DOD’s actual and estimated fuel costs. DOD officials told us that they try to avoid changes to the standard price when possible to avoid the negative effect on the military services’ O&M budgets. We found that for fiscal years 2012 through 2015, DOD generally kept the standard price it charged fuel customers the same throughout the year or decreased it. For example, DOD decreased the standard price three times in fiscal year 2012 (from $165.90 to $97.02 per barrel) and left it unchanged for fiscal years 2013 and 2014. In fiscal year 2015, DOD decreased the standard price from $155.40 to $136.92 per barrel. As a result, changes in the standard prices charged to fuel customers had a limited effect on the differences between actual and estimated fuel consumption spending for these years. 
DOD Does Not Fully Reconcile Differences in Reported Fuel Consumption Spending and Does Not Include Certain Fuel Consumption Data in Annual Budget Requests

DOD takes some steps to report fuel consumption data in annual budget estimates, but it does not fully reconcile differences between the military services’ reported actual fuel consumption spending and DLA’s reported fuel sales and does not include certain data that the Congress could use to evaluate the military services’ funding requests for fuel. For each budget request, DOD validates the military services’ fuel consumption data by reviewing the military services’ fuel consumption estimates to ensure that the estimates align with DOD’s overall funding priorities, among other steps. However, DOD does not reconcile differences between the military services’ actual obligations for fuel consumption spending reported in O&M budget requests and DLA’s reported fuel sales to the military services that could potentially improve the accuracy of the military services’ annual budget estimates. Further, DOD’s annual O&M budget requests for fuel contain some actual and estimated fuel consumption spending data, but the requests did not include fuel volume data or separate the military services’ actual O&M base obligations for day-to-day activities, such as training, from its actual O&M OCO obligations for fuel consumption spending.

DOD and the Military Services Perform Steps to Validate and Report Fuel Consumption Data When Developing Annual Budget Requests

Each military service develops an annual O&M funding estimate for fuel consumption based on planned activity levels, such as flying hours, steaming days, tank miles, and base operations, among other factors, and the standard price provided by the OUSD Comptroller.
Consistent with requirements established in DOD’s Financial Management Regulation, the military services each prepare an OP-26 (“Petroleum, Oil, and Lubricants Consumption and Costs”) and OP-32 (“Summary of Price and Program Changes”) budget exhibit to justify their O&M funding requests for fuel consumption. More specifically, the military services prepare the OP-26 budget exhibit for planned fuel consumption that describes estimates for a total of both O&M base and O&M OCO fuel volume requirements (i.e., millions of barrels of fuel) and dollars, which are used by the military services and the OUSD Comptroller to gauge the effect of any fuel price changes on DOD’s O&M funding requests during the budgeting process. For example, during the budget development process, the services prepare the OP-26 budget exhibit showing fuel volume requirements and the standard price to develop their O&M funding estimates. An OUSD Comptroller official explained that the department would use the OP-26 data to assess any effect on the military services’ O&M estimates and funding needs if it were to adjust the standard price for the President’s budget request submission, but it does not submit the OP-26 to Congress with its annual O&M budget justification materials. Separately, the military services prepare individual OP-32 budget exhibits for their O&M base and O&M OCO funding requests. The OP-32 exhibits summarize the total price and program changes in dollars from the previous fiscal year to the current fiscal year and from the current fiscal year to the budget request year. Unlike the OP-26, DOD submits the OP-32 to Congress with the budget justification materials accompanying the President’s annual budget request. 
According to an OUSD Comptroller official who oversees the bulk fuel program, the OUSD Comptroller evaluates the military services’ fuel consumption estimates contained in these budget exhibits to ensure that they align with overall DOD funding priorities to support the President’s budget request and that the data are consistent among all exhibits. The official stated these budget exhibits are also reviewed to ensure that the military services’ fuel consumption estimates are in line with historical fuel consumption. The official stated that the OUSD Comptroller reviews DLA data on fuel sales to the military services as one point of comparison when evaluating the military services’ fuel consumption budget estimates, but the official noted that differences between DLA and the military services’ fuel sales data can exist. Specifically, DLA reports its actual and estimated fuel sales in the Defense-wide Working Capital Fund budget exhibit provided annually to Congress. DLA also publishes a fact book each fiscal year, which contains information regarding DLA’s business operations that includes data on fuel sales to the military services, among other information. Following the requirements established in DOD’s Financial Management Regulation, the OP-32 budget exhibits are then incorporated into the overall O&M budget request for each service.

DOD Does Not Reconcile Differences between the Military Services’ Fuel Consumption Budget Data and DLA Fuel Sales

Based on our review of military service and DLA data for fiscal years 2012 through 2015, we found significant differences between the military services’ actual obligations for fuel consumption spending reported in annual O&M budget requests and DLA fuel sales data.
For example, in the President’s budget request for fiscal year 2016 that was submitted to Congress in February 2015, DOD reported that the Navy’s actual obligations for fuel consumption spending in fiscal year 2014 were about $2.7 billion less than what DLA’s fuel sales data show was sold to the Navy in that year. In addition, in this same budget request, DOD reported that the Army’s actual obligations for fuel consumption spending in fiscal year 2014 were about $1.2 billion more than what DLA’s fuel sales data show was sold to the Army for the same fiscal year. Figure 4 shows differences between military services’ actual obligations for fuel consumption spending that were reported in the President’s annual budget requests and fuel sales to the military services reported by DLA for fiscal years 2012 through 2015. DLA and military service officials provided some explanations for why differences may exist between the military services’ actual obligations for fuel consumption spending reported in annual budget requests and DLA fuel sales data, but neither DLA nor the military services could fully account for these differences, even though they compared the two data sets during the budget review process. For example, according to DOD officials, DLA fuel sales for an individual service may include sales to DOD’s combatant commands, such as U.S. Transportation Command. Officials explained that the large differences between the Air Force’s reported obligations for fuel consumption spending and DLA’s fuel sales to the Air Force could be attributable to the inclusion of fuel sales to U.S. Transportation Command in DLA’s data sets if Air Force aircraft flew missions for the command. However, the obligations for fuel consumption spending would not necessarily be accounted for in the Air Force’s data. Additionally, according to an official from the OUSD Comptroller, the Air Force might purchase fuel from DLA in order to support an Army mission.
The Army would then be responsible for reimbursing the Air Force for this fuel consumption. Another reason for these discrepancies, officials explained, is the result of the military services’ accounting practices for fuel consumption spending. For example, when an Air Force aircraft is used to support a U.S. Transportation Command or other DOD component mission, the fuel purchased from DLA for that aircraft is initially charged to an Air Force account. Officials stated that through a monthly review of accounting records, the Air Force’s fuel charges for that particular aircraft would eventually be charged to the appropriate DOD component organization. Yet, DLA would not be informed of the final consumer of the fuel, and would thus record the sale of the fuel to the Air Force. DOD and military service officials stated these reasons would not account for all discrepancies between DLA’s data and military services’ actual obligations for fuel consumption spending. However, despite these significant differences in military service and DLA data, DOD officials were unable to provide an analysis or other documentation that explained the differences between the military services’ actual obligations for fuel consumption spending reported in annual budget requests and DLA fuel sales data. DOD has not established an approach to reconcile data on fuel consumption reported by the military services and DLA fuel sales to the military services, although DLA’s Strategic Plan for 2015-2022 emphasizes DLA’s commitment to collaborating with the military services to increase transparency. The plan highlights the need to have an ongoing, open dialogue with the military services about DLA’s costs. On an annual basis, DLA coordinates with the military services to define estimated fuel requirements, which DLA uses to purchase fuel worldwide for eventual sale to the military services. 
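A reconciliation of the kind discussed here is, at bottom, a comparison of two data sets with an investigation threshold, as the internal control standards suggest. The sketch below is our illustration, not a DOD process, and every figure in it is invented:

```python
# Hypothetical data: each service's reported O&M fuel obligations versus
# DLA's recorded fuel sales to that service, in dollars
service_obligations = {"Army": 1.3e9, "Navy": 2.9e9, "Air Force": 4.1e9}
dla_fuel_sales = {"Army": 1.5e9, "Navy": 2.95e9, "Air Force": 5.2e9}

def unreconciled(obligations: dict, sales: dict, tolerance: float = 100e6) -> dict:
    """Return the services whose obligations/sales gap exceeds the tolerance."""
    return {svc: sales[svc] - obligations[svc]
            for svc in obligations
            if abs(sales[svc] - obligations[svc]) > tolerance}

for svc, gap in sorted(unreconciled(service_obligations, dla_fuel_sales).items()):
    print(f"{svc}: ${gap:,.0f} to investigate")
```

Known, explainable causes (such as sales later charged to a combatant command) could be netted out before applying the threshold, leaving only genuinely unexplained differences for follow-up.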
DOD officials told us that during the annual budget development process, the OUSD Comptroller uses DLA’s fuel sales data to validate the military services’ O&M fuel consumption estimates; however, neither the OUSD Comptroller, DLA, nor the military services act to reconcile any differences between data on DLA’s fuel sales and the military services’ actual obligations for fuel consumption spending. These officials noted that the directive establishing DLA as the department’s executive agent for bulk fuel does not require DLA to record or report fuel sales data to the military services or other fuel customers or reconcile any differences with the military services’ data, and there is no department-wide policy that requires consistency between DLA fuel sales data and military service actual obligations for fuel consumption spending. Standards for Internal Control in the Federal Government states that appropriate control activities include the establishment of activities to monitor performance measures and indicators, which may include comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. The OUSD Comptroller official who is responsible for the bulk fuel program told us that, based on issues we raised during the course of our work, the department is taking some initial steps to understand the differences between the military services’ and DLA’s fuel consumption data. Specifically, the department held a series of initial working group meetings in April and May 2016 to discuss the military services’ fuel consumption O&M budget exhibits and DLA’s process for reporting its fuel sales, including how DLA fuel sales are recorded and reported and how DLA’s data relate to the information the military services report in their O&M budget justification materials.
For example, in May 2016, the OUSD Comptroller, DLA, and the military services discussed possible adjustments to DOD’s accounting practices for fuel consumption spending to more accurately record DLA fuel sales to fuel customers. However, while DOD held an initial set of working groups at the time of our review, it did not have specific plans or milestones to address the limitations and inconsistencies between the military services’ and DLA’s data, although an OUSD Comptroller official agreed that a more formal process to reconcile differences would help validate the military services’ annual O&M budget estimates for planned fuel consumption spending. Given the significant differences between the military services’ and DLA’s fuel data, having an approach to reconcile differences would provide DOD with a means to understand any discrepancies in its fuel consumption data and better assess the accuracy of the military services’ actual fuel consumption spending that is reported in annual budget requests.

DOD’s Annual Budget Requests Provided to Congress Do Not Include Certain Fuel Consumption Data

DOD’s annual O&M budget requests and the accompanying budget justification materials provide Congress with certain information on the military services’ actual and estimated fuel consumption spending that can help it make appropriations decisions, conduct oversight, and provide control over funds. However, the Senate Armed Services Committee has expressed its concern with DOD’s fuel consumption estimates, noting that as DOD transitions from large-scale contingency operations in Afghanistan, the military services' fuel consumption estimates should be more consistent as full-spectrum training resumes. The committee also stated that given recent fuel price fluctuations due to changes in the global oil market, accurate fuel consumption estimates become even more important in trying to adequately determine budget requests, particularly in times of fiscal constraints.
Standards for Internal Control in the Federal Government emphasizes using quality and complete information to make decisions and communicate such information externally. Moreover, the Handbook of Federal Accounting Standards states that agencies should provide reliable and timely information on the full costs of their federal programs in order to assist congressional and executive decision makers in allocating federal resources and making decisions to improve economy and efficiency.

DOD Provides Some Fuel Consumption Spending Data in Its Annual Budget Requests

Our analysis of DOD’s annual O&M budget requests to Congress found that they contain some actual and estimated fuel consumption spending data. For example, the OP-32 budget exhibits included in DOD’s annual O&M budget materials provide the military services’ fuel consumption spending estimates for both their O&M base and O&M OCO funding needs for the current fiscal year and budget request year. In addition, the military services’ budget exhibits report data on actual obligations for fuel consumption spending for the total of both O&M base and O&M OCO spending combined for the prior fiscal year. The military services also report actual obligations for O&M OCO-only fuel consumption spending for the prior fiscal year in the OP-32 budget exhibit accompanying the O&M OCO budget request for each service. Separately, DLA reports information on its energy management activities in the Defense-wide Working Capital Fund budget justification materials provided to Congress on an annual basis. These budget materials include DLA’s estimated fuel sales to the military services (for both O&M base and O&M OCO) for the current fiscal year and budget request year and actual fuel sales to the military services (for O&M OCO only) for the prior fiscal year. Also included are details on DLA’s overhead costs and the standard price the military services will be charged for fuel.
Fuel Volume Data and O&M Base Obligations Are Not Included in Budget Requests

We also found, however, that DOD’s annual budget requests do not provide information in two areas that could be used by Congress to evaluate the military services’ funding requests for fuel. Specifically, DOD’s budget requests did not (1) provide fuel volume data and (2) separate the military services’ actual O&M base obligations for fuel consumption spending for day-to-day activities from its actual O&M OCO obligations for war-related fuel consumption spending. The military services do provide the OUSD Comptroller with actual and estimated fuel volume data in the OP-26 budget exhibits during the budget development process. These budget exhibits describe the volume of fuel (i.e., millions of barrels of fuel) that the military services estimate they will use for the total of their base and OCO needs when developing annual budget estimates. However, although DOD collects and evaluates fuel volume data from the military services, it does not include the OP-26 budget exhibits in the O&M budget justification materials it provides annually to Congress. According to an OUSD Comptroller official who oversees the bulk fuel program, DOD’s historical practice has been to use the fuel volume requirements data included on the OP-26 during the budget development process. Although the DOD Financial Management Regulation states that the OP-26 will not be included with the military services’ budget justification materials submitted to Congress, it does not specifically preclude DOD from providing fuel volume information. The official could not explain the reasoning behind the Financial Management Regulation direction to exclude the OP-26 from DOD’s budget request.
Because the military services’ O&M funding estimates for fuel can be affected by market price fluctuations from one year to the next, fuel volume data would provide another measure of estimated or actual fuel consumption to justify DOD’s funding requests for fuel. Additionally, the military services’ O&M budget materials submitted to Congress do not report actual O&M base obligations for fuel consumption spending separately from actual O&M OCO obligations for the prior fiscal year. For example, as noted above, the OP-32 budget exhibits accompanying the military services’ O&M budget requests provide data on (1) actual obligations for the total of O&M base and O&M OCO fuel consumption spending combined and (2) O&M OCO-only spending. According to the OUSD Comptroller official who oversees the bulk fuel program, DOD and the military services collect and track O&M base obligations and O&M OCO obligations for fuel consumption spending separately, but DOD’s Financial Management Regulation does not require O&M base obligations to be reported separately from O&M OCO obligations in its budget justification materials and does not specifically preclude DOD from doing so. This official also stated that Congress has not asked the department to report O&M base obligations for fuel consumption spending separately from its O&M OCO obligations. Table 1 shows the extent to which DOD’s various O&M budget documents contain fuel consumption information and are reported to Congress. Additional data on the military services’ actual fuel consumption could assist Congress in determining funding levels that are needed for their activities, including full-spectrum training. In the absence of such data, congressional decision makers may not have the data they need to assess any trends in actual O&M base obligations for non-war-related purposes when evaluating the military services’ budget requests for fuel.
For example, although DOD does not report actual O&M base obligations for fuel consumption spending separately from actual O&M OCO obligations for prior fiscal years, we conducted an analysis to separate the military services’ actual O&M base and actual O&M OCO obligations for fuel consumption spending. In conducting this analysis, we calculated O&M base obligations, because DOD does not report this information in its budget justification materials, as noted above. To do this, we first compiled and summed data on actual O&M OCO obligations for fuel consumption spending reported in the OP-32 budget exhibits accompanying the military services’ O&M OCO requests for fiscal years 2012 through 2015. We then subtracted this amount from the total O&M obligations for fuel consumption spending for these same years that are reported in the OP-32 budget exhibits accompanying the military services’ O&M base budget requests which, as we also noted above, included actual obligations for fuel consumption spending in the prior fiscal year for the total of both O&M base and O&M OCO obligations combined. We then compared this amount to the estimates for fuel consumption spending included in the military services’ O&M base budget request for each fiscal year. Our analysis found that the military services generally overestimated the amount of actual O&M base fuel consumption spending for fiscal years 2012 through 2015, with one exception, as figure 5 shows. For example, the Army, Navy, and Marine Corps each overestimated O&M base fuel consumption spending each year during this time period. 
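The subtraction described above is simple arithmetic; the Python sketch below restates it with invented single-service figures (the report’s actual results appear in figure 5, not here). All dollar amounts and names are illustrative assumptions.

```python
def base_obligations(total_base_plus_oco, oco_only):
    """Derive actual O&M base obligations, which DOD does not report
    separately, from the two figures the OP-32 exhibits do report."""
    return total_base_plus_oco - oco_only

def estimate_error(actual_base, base_estimate):
    """Return the over- (+) or under- (-) estimate in dollars and as a
    percent of actual base obligations."""
    diff = base_estimate - actual_base
    return diff, 100.0 * diff / actual_base

# Illustrative figures in billions of dollars for one service, one year.
actual = base_obligations(total_base_plus_oco=4.2, oco_only=1.5)
amount, pct = estimate_error(actual_base=actual, base_estimate=3.1)
print(f"Derived actual base: ${actual:.1f}B; "
      f"overestimate: ${amount:.1f}B ({pct:.0f}%)")
```

The derived base figure is only as good as the two reported inputs, which is why the data inconsistencies between DLA and the military services discussed earlier also matter here.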
The amount and percent difference of the overestimate for these services, comparing our estimate of actual O&M base obligations for fuel consumption spending with the original estimates, varied each year from a high of about $2.5 billion for the Navy in fiscal year 2014, or about a 280 percent difference from its original estimate, to a low of about $17 million for the Marine Corps in fiscal year 2013, or about a 17 percent difference from its original estimate. Our analysis also showed that the Air Force overestimated its O&M base fuel consumption in three of the four years during this time frame. The exception was fiscal year 2012, when the Air Force underestimated O&M base fuel consumption spending by about $477 million, or about a 13 percent difference from its original estimate. In fiscal year 2015, by contrast, the Air Force overestimated its O&M base fuel consumption spending by about $895 million, or about a 24 percent difference from its original estimate. Navy officials noted that congressional budget actions can affect the amount of O&M base fuel consumption spending in a particular fiscal year. Navy officials stated that, in fiscal year 2014, Congress realigned funds from base funding to OCO funding, which resulted in a difference of $800 million from the Navy’s original O&M base budget request for fuel. This realignment then affected the base activities the Navy was able to execute for that fiscal year. A Navy official noted that this realignment was one explanation for the differences between actual and estimated fuel consumption spending in the Navy’s O&M base spending for fuel. DOD also produces additional sources of information that contain data that could be used by decision makers to measure the military services’ fuel consumption, but these sources lack details in these same areas and are not provided to Congress.
For example, DLA annually publishes a fact book in which, among other things, it reports the total dollar amount of fuel it recorded having sold to the military services for that fiscal year, but the fact book does not report the fuel volume associated with these sales. Further, according to a DLA official, DLA does not submit the fact book to Congress with its annual budget request. As our analysis shows, there is no single document or set of documents that provides Congress with information on actual and estimated fuel volume and fuel consumption spending that it could use to evaluate the military services’ budget requests for fuel. Unless DOD reports more complete information on its actual and estimated fuel consumption, Congress will not have full visibility over the amount of fuel volume the military services require on an annual basis for their activities, or trends in the military services’ spending for non-war-related fuel consumption, which has varied considerably from budget estimates.

DOD’s Approach for Determining the Fiscal Year 2017 Standard Price Is Consistent with Federal Budget Guidance and Leading Practices for a Credible Cost Estimate, but DOD Has Not Fully Documented Its Rationale for Estimating the Price

DOD’s approach for determining the fiscal year 2017 standard price of fuel is consistent with federal budget guidance and leading practices for a credible cost estimate, but DOD has not fully documented its rationale for estimating the standard price. In 2014 and 2015, we found weaknesses with DOD’s methodology for developing its standard price. DOD adjusted its methodology for establishing the fiscal year 2017 standard price so that it aligns with federal budget guidance and leading cost-estimating practices: DOD used valid and reliable data, and it assessed the relative risks and limitations of various pricing options.
However, DOD has not fully documented its process for establishing the standard fuel price as we have previously recommended.

Prior Weaknesses Found with DOD’s Standard Price Methodology

In July 2014, we found that DOD had not updated its approach to establishing the standard price for fuel to reflect current market conditions since 2007, nor had it documented its rationale for the assumptions it uses in estimating the standard price. We recommended that DOD reevaluate its approach for establishing the standard price to allow DOD to develop more informed estimates and be better positioned to minimize risks and uncertainty resulting from changing market conditions. We also recommended that DOD document its assumptions, including providing detailed rationale for how it establishes the standard price. In November 2015, we found that, consistent with our recommendation, DOD had evaluated a range of options to establish the standard price for the President’s fiscal year 2016 budget request and developed a new methodology. However, we found that the new methodology did not reflect actual market conditions or fully account for risks to the reliability of DOD’s fuel cost estimates. More specifically, we found that DOD had not used valid and reliable data on market conditions when evaluating options for adjusting its fuel pricing methodology because it used OMB’s Gas and Oil price index as a dollar value rather than applying it in its analyses as a percentage to measure the change in prices from one year to the next. Our analysis showed that applying the Gas and Oil price index as a measure of a change in fuel prices from one year to the next produced results that differed from what DOD found. For example, DOD applied the Gas and Oil price index for fiscal year 2016 as a dollar price of $122.56 per barrel of refined fuel.
In contrast, we calculated a refined fuel price estimate between $58.10 and $83.58, depending on how the Gas and Oil price index is applied to actual fuel prices. Furthermore, we found that, in its analysis of the methodology based on the price index, DOD did not review and understand the limitations and risks to the reliability of its fuel estimate, which, in this case, resulted from determining a projected fuel price by applying the price index to actual fuel prices that would be almost 2 years old at the time of DOD’s budget request. According to its budget materials, DOD had a fiscal year 2016 estimate of planned fuel consumption totaling 81 million barrels of fuel, which, according to our analysis, led DOD to request in its fiscal year 2016 budget request $9.9 billion for refined fuel based on the refined fuel portion of the standard price of $122.56 per barrel of refined fuel. In contrast, our analysis found that applying the Gas and Oil price index as a reflection of the change in prices from fiscal year 2014 would have resulted in a budget request, based on the refined fuel portion of the standard price, of between about $8.6 billion and about $8.9 billion, depending on how the price index was applied to fiscal year 2014 actual refined fuel prices. As a result, we recommended that, in addition to fully implementing our prior recommendations, DOD use valid and reliable data on market conditions and review and understand the risks and limitations of using data, such as actual fuel price data from 2 years prior, when it developed its standard price for fiscal year 2017 and future fiscal years. In commenting on our draft report in November 2015, DOD agreed or partially agreed with our previous recommendations but did not state the reasons for the partial concurrence or what actions it planned to take in response to our recommendations.
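To make the distinction between the two uses of the index concrete, the sketch below contrasts them. The $122.56 price, the 81-million-barrel estimate, and the roughly $9.9 billion product come from the report; the index levels and the fiscal year 2014 actual price are hypothetical placeholders chosen only to show the mechanics.

```python
# Planned fiscal year 2016 fuel consumption estimate from the report.
BARRELS_FY2016 = 81_000_000

# Approach GAO questioned: treating the index level itself as a dollar price.
price_as_level = 122.56                      # $/barrel, as used for FY2016
budget_as_level = BARRELS_FY2016 * price_as_level   # roughly $9.9 billion

# Approach GAO described: use the index as a measure of the *change* in
# prices, then apply that change to an actual prior-year price.
actual_price_fy2014 = 100.00                 # hypothetical $/barrel
index = {2014: 110.0, 2016: 77.0}            # hypothetical index levels
price_from_change = actual_price_fy2014 * (index[2016] / index[2014])

print(f"Index as a price level:  ${price_as_level:.2f}/barrel "
      f"(${budget_as_level / 1e9:.1f}B request)")
print(f"Index as a price change: ${price_from_change:.2f}/barrel")
```

The same percent-change mechanics underlie the fiscal year 2017 methodology DOD ultimately selected, applied there to a 1-year average of actual refined fuel costs.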
Fiscal Year 2017 Standard Price Is Consistent with Federal Budget Guidance and Leading Practices for a Credible Cost Estimate, but DOD Has Not Fully Documented Its Rationale for Estimating the Price

For its fiscal year 2017 budget request, DOD adjusted its methodology to address our prior recommendations. According to documentation from the OUSD Comptroller, DOD evaluated three methodologies for developing the fiscal year 2017 standard price. The first option DOD evaluated used projections of the price of regular gasoline contained in the Energy Information Administration’s November 2015 Short-Term Energy Outlook to calculate a future price of regular grade gasoline upon which to base the standard price. The second and third options calculated a two-year percentage change in the Gas and Oil price index applied against two different periods of actual average refined product costs. One of these options used a 1-year average of DOD’s actual refined fuel costs for fiscal year 2015; the other used a 5-year average of actual refined fuel costs for fiscal years 2011 through 2015. According to DOD’s analysis, DOD chose the option using the percent change in the Gas and Oil price index applied against the most recent 1-year average of actual refined product costs. An official with the OUSD Comptroller who oversees the bulk fuel program stated that several factors underpinned the department’s decision to select the fiscal year 2017 standard price methodology. First, leadership within the department felt strongly that fuel pricing should be developed in a consistent manner for each budget cycle that is based on information included in the Administration’s economic assumptions. Second, the methodology DOD selected provided an estimate that seemed reasonable compared with the actual fiscal year 2015 average price for refined petroleum products.
Finally, the official noted that the methodology is based on actual fuel prices that were adjusted to account for projected market changes. Figure 6 shows a comparison of how DOD calculated the fiscal year 2017 standard price with the approach it used in prior years. For fiscal year 2017, DOD established the projected price of refined fuel at $105 per barrel. We evaluated DOD’s standard price methodology for fiscal year 2017 and found that it is consistent with federal budget guidance and leading practices for a credible cost estimate because DOD used valid and reliable data and it assessed relative risks and limitations by reviewing various pricing options. OMB’s Circular No. A-11 requires that federal agencies’ budget submissions be consistent with OMB’s economic assumptions. Our Cost Estimating and Assessment Guide states that one characteristic of a credible cost estimate is the availability of valid data that are suitable and relevant, and that data should be fully reviewed before being used in a cost estimate to understand the limitations and risks. In our prior work, we reported that DOD has discretion over which economic assumptions provided by OMB to apply in developing its bulk fuel estimates for budgeting purposes. For its fiscal year 2017 methodology, DOD (1) incorporated the administration’s economic estimates and (2) applied the Gas and Oil price index against actual refined fuel prices to develop a price estimate that, according to DOD’s analysis, it concluded was reasonable compared with the fiscal year 2015 average price for refined petroleum products. While DOD revised its standard price methodology to address our prior recommendations, it has not fully documented its rationale for the assumptions it used in estimating the fiscal year 2017 standard price. For its fiscal year 2017 standard price, DOD documented parts of the methodology it used. 
Specifically, DOD detailed in an internal OUSD Comptroller memorandum the various options it considered, the reasons why it chose the methodology it used, and the calculations it used to arrive at its estimated standard price. However, we found that DOD has not documented its process for establishing the standard price in three areas. First, DOD has not documented a formalized process that describes the steps it will take on an annual basis to determine the standard price for future fiscal years. Second, documentation detailing the options DOD considered and the rationale behind the methodology it chose is not available to Congress and its fuel customers. Third, DOD has not documented the formal review and approval of the new methodology by senior Comptroller officials. Our Cost Estimating and Assessment Guide states that a cost estimate should be supported by detailed documentation that describes how it was derived. According to the guide, the documentation should include, among other things, the estimating methodology used to derive the costs for each element of the cost estimate, and it should also discuss any limitations of the data or assumptions. Further, a well-documented methodology allows decision makers to understand and evaluate the budget request and make proper determinations. In partially agreeing with our 2014 recommendation, DOD noted the department did not have a documented, specific, step-by-step process to develop the standard price but that it priced fuel by using a formal process that had been presented to the department’s leadership, briefed to congressional staff, discussed with the administration, and reproduced in various instructional and informational briefings and papers. The OUSD Comptroller official responsible for managing the bulk fuel program stated that the department does not have a similar formal process for determining rates for other commodities and working capital funds. 
The official stated that, therefore, DOD does not want to make the bulk fuel standard price determination unique and apart from these other commodities. However, because of concerns with the quality and transparency of information available to congressional decision makers and department fuel customers concerning the methodology selected each year and its application to relevant data used in estimating fuel rate prices for the next fiscal year, the Senate Armed Services Committee directed DOD to submit detailed guidance to the congressional defense committees no later than February 1, 2017, that includes the following elements:

- The steps DOD will take to develop and implement a process for the annual review and selection and application of an appropriate methodology for estimating fuel rate prices for the next fiscal year;
- The process for identifying an appropriate methodology to assess the accuracy of estimated fuel rate prices as compared with actual fuel prices for the most recent fiscal year; and
- The establishment of a detailed process for the annual development of estimated fuel rate prices for the next fiscal year, to include requiring documentation of the rationale for using one methodology over another for estimating the next fiscal year’s fuel rate price and the limitations and assumptions of underlying data, and establishing a timeline for developing annual estimated fuel rate prices for the next fiscal year.

We continue to believe that documentation by DOD of its assumptions would provide greater transparency and clarify for fuel customers and decision makers the process DOD uses to set the standard price, as we recommended in 2014 and 2015.

Conclusions

The military services have reported actual spending on fuel consumption that differed from their fuel consumption budget estimates, attributing most of the differences to changes in operations and training that affected fuel consumption during the year of budget execution.
The OUSD Comptroller takes some steps to validate the military services’ fuel consumption estimates, but neither this office nor the military services have an approach to reconcile the military services’ reported fuel consumption spending data with DLA’s fuel sales during the annual budget development process. Having an approach to reconcile differences would provide DOD with a means to understand any discrepancies in its fuel consumption data and determine whether any actions are needed to better assess the accuracy of the military services’ actual fuel consumption spending that it reports to Congress in annual budget requests. DOD’s O&M budget materials provide some actual and estimated fuel consumption spending data but are limited in the amount of information they convey because they do not provide data on fuel volume or separate actual O&M base obligations for the military services’ fuel consumption spending for day-to-day activities from their O&M OCO obligations. As a result, Congress does not have full visibility over the amount of fuel volume the military services require on an annual basis for their activities, or trends in the military services’ spending for non-war-related fuel consumption, which has varied considerably from budget estimates. DOD adjusted the methodology it used to set the standard price in fiscal year 2017 to address our prior recommendations, but it has not fully documented the rationale it uses in the standard price process. As we previously reported, until DOD documents its rationale for how it establishes the standard price, fuel customers and decision makers will not have the transparency and clarity they need to understand the process and make fully informed decisions.
Recommendations for Executive Action

In order to improve the accuracy of the information included in the O&M budget justification material submitted to Congress and provide complete information to review the military services’ fuel consumption spending requests, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller), in consultation with the military services and DLA, to take the following two actions:

- Develop an approach to reconcile data on fuel consumption reported by the military services and fuel sales to the military services reported by DLA and take any appropriate corrective actions to improve the accuracy of actual fuel consumption spending data, and
- Report complete fuel consumption information to Congress, to include actual and estimated fuel volume and actual O&M base obligations for fuel consumption spending separate from O&M OCO obligations. This information could be provided as part of DOD’s annual O&M budget justification materials, or through other reporting mechanisms.

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. In its written comments, which are summarized below and reprinted in appendix II, DOD concurred with the first recommendation and did not concur with the second recommendation. DOD concurred with the first recommendation that it develop an approach to reconcile the military services’ and DLA fuel consumption data. DOD stated that the OUSD Comptroller had established a working group with representatives from the military services and DLA to reconcile fuel sales reports. DOD further stated that the working group expected to complete its work in time to support the development of the President’s Budget for fiscal year 2018. DOD did not concur with the second recommendation that it report more complete fuel consumption information to Congress.
DOD stated that it agreed that including additional fuel consumption detail could be useful information and stated that it will look at ways to incorporate additional data in upcoming budget submissions. However, DOD stated that it would be very difficult and labor intensive to implement a system to separate base from OCO data and cited several reasons. Among those reasons, DOD stated that many legacy financial systems currently in use cannot easily distinguish between base and OCO execution data. DOD also stated that manually identifying these data would be extremely labor intensive. However, DOD stated that once all DOD components convert from the legacy systems, the department should be able to report base and OCO obligations consistently and effectively. We acknowledge DOD’s ongoing efforts to transition from its legacy systems; however, in our report, we note that fuel volume information is available and that the military services already provide the OUSD Comptroller with actual and estimated fuel volume data during the annual budget development process. Further, our report discusses the basic steps we took to calculate O&M base obligations separately from O&M OCO obligations for fuel consumption spending with DOD’s existing budget materials. These steps included compiling and summing data on actual O&M OCO obligations for fuel consumption spending reported in budget exhibits accompanying the military services’ O&M OCO requests and subtracting these amounts from the total O&M obligations for fuel consumption spending that are reported in the budget exhibits accompanying the military services’ O&M base budget requests. As we noted in our report, the budget exhibits include actual obligations for fuel consumption spending for the total of both O&M base and O&M OCO obligations combined. We then compared this amount to the estimates for fuel consumption spending included in the military services’ O&M base budget request. 
DOD also stated that it is already required to report total obligations to Congress by appropriation. However, neither the OMB circular that governs federal agencies’ preparation, submission, and execution of their budgets nor relevant sections of the U.S. Code preclude the department from providing additional detail on O&M base obligations. As we discuss in our report, the military services generally overestimated the amount of actual O&M base fuel consumption spending for the period we reviewed; therefore, without additional data that distinguishes between O&M base and O&M OCO spending, Congress does not have the information to assess trends in the military services’ spending for non-war-related fuel consumption, which has varied considerably from budget estimates. Moreover, as we also noted in our report, DOD produces data in various sources that could be used by decision makers to measure the military services’ fuel consumption. DOD could report additional information on actual O&M base obligations for fuel consumption spending as well as actual and estimated fuel volume as we recommended as part of DOD’s annual O&M budget justification materials, or through other reporting mechanisms that the department determined would assist Congress in its decision making. Without more complete information, Congress does not have full visibility over the amount of fuel volume the military services require on an annual basis for their activities. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) describe the military services’ reported actual spending on fuel consumption compared to their budget estimates since 2012 and factors that were reported to have contributed to any differences; (2) assess the steps the Department of Defense (DOD) takes to report accurate and complete fuel consumption data in its annual budget requests; and (3) evaluate the extent to which DOD’s approach for determining the fiscal year 2017 standard price charged to fuel customers is consistent with federal budget guidance and leading practices for a credible and well-documented cost estimate. To describe how the military services’ reported actual spending on fuel consumption compared to their budget estimates since 2012 and factors that were reported to have contributed to any differences, we analyzed DOD’s operation and maintenance (O&M) budget justification materials for fiscal years 2012 through 2015. We focused our analysis on fiscal years 2012 through 2015 because this period covered the most recent complete year of DOD fuel sales and provided several years of cost data to analyze any trends. We identified the specific accounting lines in each O&M budget exhibit related to fuel for both O&M base and O&M Overseas Contingency Operations (OCO) fuel consumption. We then compared the military services’ reported actual obligations for fuel consumption spending that are contained in these accounting lines against the military services’ budget estimates.
To determine the reliability of the data, we obtained information on how the data were collected, managed, and used through interviews with and questionnaires to relevant officials and determined that the data presented in our findings were sufficiently reliable to present trends in this report on the military services’ actual and estimated O&M spending for fuel consumption for fiscal years 2012 through 2015. We interviewed an official from the Office of the Under Secretary of Defense (OUSD) Comptroller, who is responsible for managing the bulk fuel program, and budget and financial management officials with the military services to better understand any factors that contributed to differences between actual and estimated fuel consumption. To assess the steps DOD takes to report accurate and complete fuel consumption data in annual budget requests, we analyzed DOD’s budget justification materials for fiscal years 2012 through 2015, as well as military service and Defense Logistics Agency (DLA) fuel data. We interviewed an official from the OUSD Comptroller who is responsible for managing the bulk fuel program, officials with military service budget and financial management offices, and DLA to determine how O&M budget justification materials generally, and fuel consumption estimates specifically, are prepared, evaluated, and reported to Congress. We interviewed officials from each military service to determine how budget justification materials are prepared for their annual O&M budget requests. We interviewed officials from DLA to determine how it reports its fuel sales to the military services. To understand the differences between the military services’ fuel consumption data and DLA fuel sales, we analyzed the military services’ actual obligations for fuel consumption spending reported in their O&M budget materials for fiscal years 2012 through 2015 against DLA data on fuel sales to the military services for these same years. 
To determine the reliability of both the O&M budget justification data and DLA fuel sales data provided to us by DOD, we obtained information on how the data were collected, managed, and used through interviews with and questionnaires to relevant officials. We assessed the information against federal internal controls and accounting standards that describe practices regarding how information should be recorded and communicated to management and others. We determined that the data were sufficiently reliable to present the military services’ total O&M obligations for fuel consumption spending for fiscal years 2012 through 2015 and DLA fuel sales data to the military services for these same years. However, as discussed in this report, we identified differences in the fuel consumption data reported by the military services and DLA. To understand the differences between the military services’ O&M base request for fuel and actual fuel consumption for O&M base programs and activities, we calculated O&M base spending, because DOD does not report this information separately from O&M OCO spending in its budget justification materials. To do this, we compiled and summed the O&M OCO obligations for fuel consumption spending that were reported in the O&M OCO budget materials for each military service for fiscal years 2012 through 2015 and subtracted this amount from total O&M obligations for fuel consumption spending reported in the military services’ O&M base budget exhibits (which included the total of O&M base obligations and O&M OCO obligations). We then compared this amount to fuel consumption estimates included in the military services’ O&M base budget requests for each fiscal year. We assessed this information against the Standards for Internal Control in the Federal Government and Handbook of Federal Accounting Standards on how information should be recorded and communicated to management and others. 
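The derivation described above (O&M base obligations for fuel computed as total O&M obligations minus the reported O&M OCO obligations) amounts to simple subtraction per service per fiscal year. The following minimal sketch illustrates it; the service names and dollar figures are hypothetical, not actual GAO or DOD data.

```python
# Sketch of the calculation described above: DOD does not report O&M base fuel
# spending separately, so it is derived by subtracting O&M OCO obligations from
# total O&M obligations. All figures below are illustrative only (in billions).

def base_fuel_obligations(total_om: float, oco_om: float) -> float:
    """Return derived O&M base fuel obligations (billions of dollars)."""
    return total_om - oco_om

# Hypothetical service-level figures for one fiscal year.
reported = {
    "Service A": {"total_om": 4.2, "oco_om": 1.1},
    "Service B": {"total_om": 3.5, "oco_om": 0.6},
}

for service, amounts in reported.items():
    base = base_fuel_obligations(amounts["total_om"], amounts["oco_om"])
    print(f"{service}: derived O&M base fuel obligations = ${base:.1f}B")
```

The derived base amount can then be compared against the O&M base budget request for the same fiscal year, as the methodology describes.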
To determine the extent to which DOD’s approach for determining the fiscal year 2017 price charged to fuel customers is consistent with federal budget guidance and leading practices for a credible and well-documented cost estimate, we reviewed documentation on DOD’s analysis of various methodologies it examined, as well as its justification for the one it ultimately chose to apply for fiscal year 2017. We did not evaluate the relative costs or benefits of the methodologies that DOD considered—such as the limitations or uncertainties that may be inherent in selecting one methodology over another. Specifically, we determined how DOD evaluated methodologies for setting the standard fuel price for fiscal year 2017. To better understand the steps DOD took, we determined how it applied OMB’s Gas and Oil price index when evaluating methodologies for setting the standard fuel price, compared to what it did in prior years. We also interviewed an official from the OUSD Comptroller, who is responsible for managing the bulk fuel program, about DOD’s methodology for developing its standard price in fiscal year 2017 and its plans for determining the methodology in the future. We compared DOD’s methodology for establishing the fiscal year 2017 standard price for budgeting purposes with OMB’s Circular A-11, which governs federal agencies’ budget development, and with our Cost Estimating and Assessment Guide, which is a compilation of best practices, including the characteristics of a credible and well-documented cost estimate, which federal cost-estimating organizations and industry use to develop and maintain reliable cost estimates.
We interviewed officials from, and, where appropriate, obtained documentation from, the following organizations:

Office of the Under Secretary of Defense (Comptroller)
Defense Logistics Agency – Energy
Defense Logistics Agency – Finance
Air Force Petroleum Agency
Naval Supply Systems Command
Office of the Assistant Secretary of the Air Force, Financial Management
Office of the Assistant Secretary of the Army, Financial Management
Office of the Assistant Secretary of the Navy, Financial Management

We conducted this performance audit from July 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Matthew Ullengren (Assistant Director), Robert Brown, Amy Bush, Adam Hatton, Amie Steele Lesser, Felicia M. Lopez, and Pedro Almoguera made key contributions to this report.
DOD and the military services estimate total funding needs for fuel in annual budget requests by using planned consumption (measured in barrels of fuel) and a standard price per barrel set by DOD. Senate Report 114-49, accompanying a bill for the National Defense Authorization Act for fiscal year 2016, included a provision for GAO to review DOD's approach to estimating fuel consumption. Among other objectives, this report (1) describes the military services' reported actual spending on fuel consumption compared to their budget estimates since 2012, and factors that were reported to have contributed to any differences, and (2) assesses the steps DOD takes to report accurate and complete fuel consumption data in annual budget requests. GAO analyzed DOD budget documents, including military service and DLA fuel data for fiscal years 2012 through 2015 and interviewed DOD officials responsible for preparing fuel consumption budget materials.

The military services' total obligations for fuel consumption spending decreased from a high of about $13 billion in fiscal year 2012 to a low of about $10.1 billion in fiscal year 2015 but differed from budget estimates, which officials largely attributed to changes in operations and training that affected fuel consumption. Specifically, each of the military services either over- or underestimated its actual fuel consumption spending when compared to budget estimates (see figure).

Military Services' Reported Actual Versus Estimated Fuel Consumption Spending, Fiscal Years 2012 through 2015

The Department of Defense (DOD) takes some steps to report fuel consumption data in annual budget requests, but it does not fully reconcile differences in the military services' reported actual fuel consumption data and does not include some fuel consumption data. For each budget request, DOD validates the military services' fuel consumption data by ensuring that the budget estimates align with DOD's funding priorities, among other steps.
However, GAO's analysis found differences between the military services' reported fuel consumption spending and Defense Logistics Agency (DLA) data on fuel sales to them. For example, DOD reported that the Navy's actual obligations for fuel consumption spending in fiscal year 2014 were about $2.7 billion less than what DLA's fuel sales data show was sold. DOD had not established an approach to reconcile such differences. Having an approach to reconcile differences would provide DOD with a means to determine if any actions are needed to better assess the accuracy of the military services' reported fuel consumption data. Further, GAO's analysis found that DOD's budget requests for fuel did not include details in two areas that could be used by Congress to evaluate funding requests for fuel. First, the budget requests excluded fuel volume data that were collected during the budget development process. Fuel volume data would provide another measure of fuel consumption to justify DOD's funding requests. Second, the requests did not separate actual fuel consumption spending for day-to-day activities, such as training, from war-related spending, which has varied considerably from budget estimates. Without additional data in these two areas, Congress does not have full visibility over the amount of fuel volume the military services require for their activities or trends in fuel consumption spending for non-war-related purposes.
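The estimation approach described in this summary (planned consumption in barrels multiplied by DOD's standard price per barrel) is straightforward arithmetic. The sketch below illustrates it; the barrel counts and the standard price are hypothetical, chosen only to show the mechanics.

```python
# Sketch of the budget-estimate arithmetic described in the report:
# estimated funding = planned consumption (barrels) * standard price per barrel.
# The price and barrel counts below are hypothetical, not actual DOD figures.

STANDARD_PRICE_PER_BARREL = 140.00  # illustrative standard price (dollars)

planned_barrels = {
    "Army": 20_000_000,
    "Navy": 30_000_000,
    "Air Force": 45_000_000,
}

def fuel_budget_estimate(barrels: int, price: float) -> float:
    """Dollar estimate for a given amount of planned fuel consumption."""
    return barrels * price

total = sum(fuel_budget_estimate(b, STANDARD_PRICE_PER_BARREL)
            for b in planned_barrels.values())
print(f"Total estimated fuel funding: ${total / 1e9:.2f} billion")
```

Because actual consumption and actual market prices both vary from these planning assumptions, reported obligations can diverge from the estimate, which is the pattern the figure above displays.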
Background

Social Security is a retirement income program whose benefits are based, in part, on an individual’s earnings. Social Security is also gender-neutral—that is, a man and a woman whose labor force participation and earnings are identical, in terms of both extent and timing, will receive the exact same Social Security benefit. When calculating actual benefits, Social Security employs a progressive benefit formula that replaces a relatively larger portion of lifetime earnings for people with low earnings than for people with high earnings. Because women tend to have lower lifetime taxable earnings than men, they generally benefit from this provision. The program also provides benefits to retirees’ dependents (such as spouses, ex-spouses, children, and survivors). Many more women than men receive dependent benefits as spouses or survivors. Unlike some pension benefits, these benefits are automatic for all eligible dependents and do not depend on the worker’s electing to include them. In general, a retired worker’s spouse who is not entitled to benefits under his or her own work record will receive a benefit up to as much as 50 percent of the retired worker’s benefit, and a surviving spouse will receive up to as much as 100 percent of the deceased worker’s benefit. A spouse’s receiving dependent benefits does not reduce the size of the worker’s own benefit. Social Security has helped reduce poverty rates for the elderly, from 35 percent in 1959 to less than 11 percent in 1996. Nevertheless, some subgroups of the elderly population are at a greater risk of living in poverty than others. Unmarried women make up more than 70 percent of poor elderly households, although they constitute only 45 percent of all elderly households. Single, divorced, and widowed women aged 65 or older have a poverty rate of 22 percent, compared with 15 percent for unmarried men and 5 percent for married couples older than 65.
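The progressive benefit formula described above replaces a larger share of low lifetime earnings than of high earnings. The simplified sketch below illustrates this; the bend-point dollar amounts are hypothetical, and the 90/32/15 percent replacement factors are used only to show the general shape of such a formula, not to reproduce the statutory calculation.

```python
# Illustrative sketch of a progressive benefit formula of the kind the report
# describes. Earnings below the first bend point are replaced at 90 percent,
# earnings between the bend points at 32 percent, and earnings above the
# second bend point at 15 percent. The bend points ($500 and $3,000 of
# average indexed monthly earnings) are hypothetical.

def monthly_benefit(aime: float) -> float:
    """Compute an illustrative monthly benefit from average indexed monthly earnings."""
    bend1, bend2 = 500.0, 3000.0  # hypothetical bend points
    benefit = 0.90 * min(aime, bend1)
    if aime > bend1:
        benefit += 0.32 * (min(aime, bend2) - bend1)
    if aime > bend2:
        benefit += 0.15 * (aime - bend2)
    return benefit

# A low earner gets back a larger share of earnings than a high earner:
low, high = monthly_benefit(800), monthly_benefit(4000)
print(f"Low earner:  ${low:.0f}/mo ({low / 800:.0%} of AIME)")
print(f"High earner: ${high:.0f}/mo ({high / 4000:.0%} of AIME)")
```

Under this shape, a worker with lower average earnings receives a smaller dollar benefit but a larger replacement rate, which is why the report notes that women, with lower lifetime taxable earnings on average, generally benefit from the provision.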
In addition, some researchers expect the current level of poverty among widows to persist over the next 20 years because there will still be a substantial number of women with a history of low earnings and intermittent labor force attachment whose own worker benefit will not be greater than their widow’s benefit. In part because of the anticipated increase in the size of the elderly population and the growing proportion of the total population that the elderly will constitute over the next 33 years, Social Security’s trust funds are projected to be depleted by 2029. A number of proposals have emerged to resolve this difficulty, with a great deal of variety in terms of both how the Social Security program would be structured and who would be eligible for benefits. Appendix II summarizes the key features of the major proposals. Among the various proposals for restoring long-term financial balance to the Social Security system are several that call for some degree of privatization. Some of these privatization proposals would redesign the Social Security system, patterning it, in part, after some private sector pension plans, such as 401(k) plans. Under such a system, a portion of workers’ Social Security taxes would be deposited in an investment account that they would then control. By investing in stocks or other assets, workers could increase their retirement savings and potentially increase their retirement benefits. However, they could also lose some portion of their savings for retirement if, for example, stock prices fell. While the data indicate that the U.S. stock market has historically outperformed the implicit return expected from Social Security for today’s and future retirees, there is always a risk of loss. The uncertainty of market gains or losses would be borne by the individual, and the individual’s retirement income would not be guaranteed by the government as it currently is under Social Security.
Retirees could use the payout from individual accounts to buy an annuity, or they could receive a lump-sum distribution of the accumulated savings to manage or spend as they saw fit. In most cases, an annuity lasts for the life of the recipient, removing the risk that retirees will outlive their savings. With a lump sum, retirees may make other choices about the distribution of their assets, including, at their death, bequeathing any remaining funds to their heirs.

Women’s Benefits Differ From Men’s Because of Labor Market Differences

Women’s Social Security benefits are currently lower, on average, than men’s because their labor force participation rates and earnings are lower. These gaps are narrower than in past years yet still large enough to affect retirement income benefits. The gaps are not expected to disappear entirely, even in the long term.

Labor Force Attachment and Earnings Differ for Men and Women

Women’s labor force participation rates continue to be lower than men’s at every age, despite substantial increases in women’s rates in the past 35 years. On average, the labor force participation rate for women aged 16 and older in 1996 was 59 percent, compared with 75 percent for men. As seen in figure 1, this represents a significant increase for women from 35 years ago, when their labor force participation rate was only 38 percent, compared with 83 percent for men. Figure 2 shows the change in labor force participation rates for women born in different 5-year intervals as they move through their prime-age years (25 to 54). Women born more recently have higher labor force participation rates than older women had at the same age. The labor force participation rates of the younger women do not drop off during their child-bearing years as the older women’s did, but the rate of increase in labor force participation for the younger women has slowed.
Women today are much more likely to participate in the labor force than in previous generations, but their rate of participation is still below the rate for men. The difference in labor force participation has implications for women’s level of Social Security benefits relative to men’s, since under the current rules Social Security calculates monthly benefits on the basis of lifetime taxable earnings averaged over a worker’s 35 years of highest earnings. Women generally spend more time out of the labor force than men and have fewer years of taxable earnings, so the calculation of their benefit includes more years with zero earnings. The median number of years with zero earnings for workers turning 62 in 1993 was 4 for men and 15 for women. This results in lower monthly benefits for women relative to men. Women also earn lower wages than men, although some of this difference can be explained by the fact that women more often work part-time. However, even in a comparison of year-round, full-time workers, median earnings for women are still only about 70 percent of men’s. This difference further narrows when differences in education, work effort, age, and other relevant characteristics are accounted for, but even then the gap does not close completely, with women earning wages that are 15- to 20-percent lower than men’s. These differences in earnings lead to lower Social Security benefits for women relative to men. In 1995, the average monthly benefit for retired workers was $621.30 for women and $810.00 for men; women’s average benefit was 77 percent that of men’s. Even if earnings for men and women and their labor force participation behavior were equalized starting today, women would continue to have lower benefits than men until the 2030s because earnings are averaged over 35 years; it would take that long for benefits to be equalized. 
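The 35-year averaging rule discussed above can be illustrated with a short sketch. The earnings histories below are hypothetical, and wage indexing is ignored; the point is only that years without covered earnings enter the average as zeros and pull it down.

```python
# Sketch of the 35-year averaging rule described above: monthly benefits are
# based on lifetime taxable earnings averaged over the 35 highest-earning
# years, so years with no covered earnings count as zeros in the average.
# Earnings histories are hypothetical and wage indexing is ignored.

def average_over_35_years(annual_earnings: list[float]) -> float:
    """Average the 35 highest years, padding with zeros if fewer than 35."""
    top35 = sorted(annual_earnings, reverse=True)[:35]
    top35 += [0.0] * (35 - len(top35))  # missing years count as zero
    return sum(top35) / 35

# Two workers with the same annual wage, but different numbers of years worked:
full_career = [40_000.0] * 35   # 35 years of earnings
interrupted = [40_000.0] * 20   # 15 zero-earnings years in the average

print(average_over_35_years(full_career))   # 40000.0
print(average_over_35_years(interrupted))   # markedly lower average
```

Because the median worker turning 62 in 1993 had 4 zero-earnings years if male but 15 if female, this averaging mechanism by itself produces lower monthly benefits for women, as the passage above explains.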
Neither the difference between men’s and women’s labor force participation rates nor the gap in their earnings is expected to disappear in the foreseeable future. As figure 2 shows, the long-term upward trend in women’s labor force participation rates has flattened out in recent years. The decline in men’s labor force participation is also leveling off, making it less likely that women will have the same rate as men. Because a 15-to-20-percent gap in earnings between men and women remains even after accounting for demographic and labor force characteristics, it is likely that the gap will not close completely. Since retirement income benefits are based on both amount of earnings and number of years in the labor force, the gap will continue to produce lower benefits, on average, for women than for men. Over the course of their retirement, women might receive benefits for a longer period of time than men because they live longer, but they will not necessarily receive more in total lifetime benefits, and in any case, it is the monthly benefit that is most important to the retiree’s standard of living.

With Individual Accounts, Women May Fare Worse Than Men Because They Are More Risk Averse

Many of the reform proposals call for the creation of mandatory savings accounts that allow workers to make their own investment decisions. One consequence of this move might be that individuals would decide to take on more risk in order to earn potentially higher rates of return. Economists have found evidence suggesting that women are generally more risk averse than men in financial decisionmaking. Compared with men, they might choose an investment strategy for their retirement income accounts that earns them lower rates of return.
Although proponents argue that privatization could allow for higher retirement benefits for both men and women, a too-conservative investment strategy could leave women with lower final account balances than men, even if both make the same contributions to their accounts. In reality, women’s lower average earnings will result in their making smaller average contributions to their accounts than men will make. Thus, even though women could be better off under a privatized system, compared to the current Social Security system, the gap between men’s and women’s benefits could increase. We attempted to calculate the difference in risk aversion between men and women by looking specifically at the differences in how men and women invest their assets. We found that women aged 51 to 61 in 1992 had a lower percentage of their total assets in stocks, mutual funds, and investment trusts than men did. These assets are riskier, but potentially higher yielding, than others, such as certificates of deposit, savings accounts, or government bonds. On average, we found that the ratio of riskier assets to total assets held by men was 8 percentage points higher than the same ratio for women. Other researchers, looking at participants in the federal Thrift Savings Plan, have also found that women invest less in stocks than men do. Our analysis, using different data and focusing on individuals in their prime working and saving years, increases the robustness of this conclusion. By investing less in these riskier assets, women benefit less from the potentially greater rates of return that, in the long run, stocks could generate. At the same time, they are not as exposed to large losses from riskier assets. While it is true that in the past U.S. stocks have almost always posted higher returns than less-risky assets, there is no guarantee that they will always do so. 
Costs of and Rules on Annuitization and the Effect on Women’s Benefits

Some proposals for reforming Social Security would not require retirees to purchase an annuity with the funds in their retirement income accounts. At retirement, workers could choose to receive their account balance as a lump-sum payment, as some pension plans now allow, to spend as they see fit. If retirees and their spouses do not accurately predict their remaining lifespans or make poor investment choices, they may end up with very small incomes from assets late in life. Most married women with little work history of their own currently receive a Social Security benefit as a dependent, based on their husband’s earnings. Under Social Security, dependent benefits are mandatory and do not reduce a worker’s benefit, so no worker can opt out of providing them. In contrast, some of the privatization proposals do not automatically provide dependent benefits from the investment portion of the retirement income accounts. Workers may choose not to purchase an annuity at all, or they may choose a single life annuity that ends at the worker’s death. Either of these options would put dependent wives at greater risk of having little to live on should their husbands die first. While some retirees might prefer to avoid the cost of an annuity, receiving their account balance as a lump-sum payment to manage as they see fit, others might prefer the security of a guaranteed monthly income for life that an annuity provides and therefore choose to purchase one. However, a man and a woman could retire with similar amounts in their personal accounts under a privatized social security system and still end up with very different monthly benefits if they were to purchase an annuity. Annuities sold to individuals are usually based on gender-specific life tables.
That is, insurance companies take into account women’s longer life expectancy and either provide a lower monthly benefit to women or charge women more for the same level of benefits given to men. Insurance companies also pay lower benefits for a joint and survivor annuity that covers both husband and wife than for a single life annuity that covers only the worker during his or her lifetime, again because the total time in which the benefits are expected to be paid is longer. Women are more likely to receive the survivor portion of this type of annuity, since they are more likely to outlive their husbands. Thus, while men’s and women’s total lifetime benefits may be similar, the monthly benefit women receive, either as retirees or as survivors, will likely be lower. Table 1 shows the average monthly benefit paid to men and women at different ages, based on a $100,000 premium, for both single life and joint and full survivor options. At every age, a man’s monthly benefit under a single life option is between 8 and 13 percent higher than a woman’s. This comparison of average benefits masks significant differences between insurance companies. Table 2 shows for men and women separately, at each age, the highest and lowest monthly benefit paid for a $100,000 premium in a single life plan. While men and women differ little in terms of the variation in monthly benefits, the lowest possible benefit paid to a woman is still lower than the lowest benefit paid to a man of the same age, and the highest possible benefit paid to a woman is also lower than the highest paid to a man. The difference in annuity benefits for men and women exists only for individual annuities. A 1983 Supreme Court ruling requires that employer-provided pension plans use a unisex life table in calculating annuities, so that women and men receive the same monthly benefit. Federal, state, and local pension plans also use unisex life tables in calculating monthly annuity benefits.
The market for individual annuities, however, is not covered by the Supreme Court ruling, and it is unclear whether or not annuities purchased from retirement savings accounts in a reformed Social Security system would be covered by the Court ruling.

Other Proposed Changes Could Differentially Affect Women

Other proposed changes in various Social Security reform proposals would differentially affect women, although the effects might not be as far-reaching and in some cases could even be beneficial. Some reform proposals require Social Security to extend the computation period for benefits from 35 years to 38 or 40 years. For women, with their lower rates of labor force participation giving them fewer years of taxable earnings than men, increasing the computation period would increase the number of zero years used in the calculation of benefits, lowering their average benefit. The Social Security Administration (SSA) forecasts that fewer than 30 percent of women retiring in 2020 will have 38 years of taxable earnings, compared with almost 60 percent of men. However, SSA has also calculated that the difference in additional benefit reductions for men and women would be relatively small: a 3.1-percent reduction for men compared with a 3.9-percent reduction for women if the computation period were 38 years, and a 5.2-percent reduction for men compared with a 6.4-percent reduction for women if the computation period were extended to 40 years. Another of the reform proposals includes a provision designed to improve the status of survivors, who are predominantly widows. This provision decreases the spousal benefit while a retired worker is alive (from 50 percent to 33 percent of the worker’s benefit) and increases the survivor’s benefit to 75 percent of the couple’s combined benefit or 100 percent of the worker’s benefit, whichever is greater.
Another feature of this particular proposal, however, would change the benefit formula for retired workers in a way that would reduce the monthly benefit for most retired workers, disabled workers, spouses, survivors, and children. Thus, the net effect of these changes in spouse and survivor benefits will vary by individual circumstances. While mandatory savings accounts are intended to replace these lost benefits, it is not clear whose total benefits would be maintained and whose would increase or decrease. The effect of individual changes in the reform proposals could be relatively minor. However, several taken together could interact substantially. For example, cuts in spouse benefits and in the benefit formula, combined with increases in years of taxable earnings included in the computation period and increases in the normal retirement age, could potentially add up to a large effect on women relative to men. Some groups of women may be at risk of receiving lower retirement income benefits under some of the Social Security reform proposals, and other groups may lose their eligibility for benefits entirely. Under current Social Security law, divorced spouses are entitled to a benefit based on the work record of their former spouse, if they are aged 62 or older, had been married at least 10 years, and have not remarried. Divorced survivors are entitled to a benefit based on the work record of their former spouse if they are aged 60 or older and had been married at least 10 years. Under several of the reform proposals that create mandatory savings accounts, divorced spouses and divorced survivors are not acknowledged as having any claim at all on the mandatory savings accumulated by their former spouse during the period of their marriage. Under these proposals, the current automatic provision of benefits would be eliminated. While this money may become part of the settlement upon divorce, it is not guaranteed under these proposals. 
Investor Education Might Narrow the Differences in Investment Behavior

To the degree that women are more risk averse than men, they might be less likely to take full advantage of the potential benefits from Social Security privatization. Some pension specialists believe that education is a critical factor in helping individuals make the most of their retirement investments. Preliminary evidence from a study of 401(k) participants suggests that people who are given information about their investment choices and potential returns are more likely to participate in a 401(k) and to contribute a higher proportion of their salaries than those who do not receive such information. However, few, if any, studies have examined how education affects the allocation decisions of 401(k) participants. Nevertheless, investor education that covers general investment principles and financial planning advice might help both men and women to better manage their investments. While employers have provided this type of education in the case of 401(k) accounts, it is not clear who the provider would be in the case of individual retirement savings accounts under a privatized Social Security system.

Government Role in Annuities Provision Could Mitigate Differences

A variety of policy options may help preserve the protective aspects of annuities, especially for women who are receiving dependent benefits. These range from mandatory annuitization of all individual accounts at retirement to partial annuitization, where some minimum level of annuity purchase is mandatory but the balance of an individual’s account can be paid in a lump sum, to voluntary annuitization with some government regulation of the market, such as requiring the use of unisex life tables in calculating annuities. Mandatory annuitization simply means that the balance in each individual’s account must be used to purchase an annuity at retirement.
Because everyone is in the same risk pool for insurance purposes, the cost of annuities should be lower than if they were purchased individually, and monthly benefit levels should be higher for all annuitants. If annuities were also purchased under the auspices of the federal government, gender-neutral life tables could be used, so that men and women with the same account balance at retirement would receive the same monthly benefit from their annuity. In addition, by requiring married workers to purchase a joint and survivor annuity, unless a spouse signs a waiver, a mandatory annuity could protect women whose minimal work histories might make them ineligible for a retired-worker benefit of their own. Partial annuitization means that some portion of each individual’s account balance would be used to purchase an annuity, but the rest of the money in the account could be paid out in a lump sum and spent as the individual wished. Partial annuitization might also lead to the use of gender-neutral life tables in the calculation of monthly benefits, leading to equal benefits for women and men with comparable lifetime earnings. And again, since all retirees would be in the same risk pool, the cost of an annuity would probably be lower than when purchased by an individual. The monthly benefits from these annuities would be lower than under a full annuitization plan, since they would not be using the entire account balance, but dependent spouses would still benefit from the protection of having some portion of their retirement income in the form of a joint and survivor annuity. Voluntary annuitization would leave the decision of whether to purchase an annuity, and what type of annuity to purchase, up to each individual. Under this plan, dependent spouses could lose the protection that a mandatory joint and survivor annuity would provide. 
Finally, under Social Security, the government ensures that men and women retiring at the same age with the same earnings history receive the same monthly benefits, despite the fact that women are expected to live longer and will therefore receive benefits over a longer period of time. The current approach provides equal living standards for equal contributions. If individual annuities were provided under gender-specific life tables, men and women with the same earnings history would receive different monthly benefits but equivalent expected lifetime benefits. The result would be that women’s living standards would be lower than men’s despite the same contributions. One option for mitigating this outcome is to use the same unisex life tables that are currently required for employer-provided group annuities for all annuitants.
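A greatly simplified sketch can illustrate why gender-specific life tables yield lower monthly benefits for women and why a unisex table equalizes them. It ignores interest, the full mortality distribution, and insurer expenses, and the life expectancies below are hypothetical round numbers, not actuarial values.

```python
# Highly simplified sketch of the annuity pricing issue discussed above:
# under a gender-specific table, the same premium is spread over more
# expected months for a woman, producing a lower monthly benefit. A unisex
# table gives both the same monthly benefit. Interest, mortality curves, and
# insurer loads are ignored; life expectancies are hypothetical.

def monthly_annuity(premium: float, expected_years: float) -> float:
    """Monthly payment if the premium is spread evenly over remaining life."""
    return premium / (expected_years * 12)

PREMIUM = 100_000.0
male_expectancy, female_expectancy = 16.0, 19.0  # hypothetical, at age 65
unisex_expectancy = 17.5                         # blended table

print(f"Man (gender-specific):   ${monthly_annuity(PREMIUM, male_expectancy):.0f}/mo")
print(f"Woman (gender-specific): ${monthly_annuity(PREMIUM, female_expectancy):.0f}/mo")
print(f"Either (unisex table):   ${monthly_annuity(PREMIUM, unisex_expectancy):.0f}/mo")
```

The unisex benefit falls between the two gender-specific amounts, which is the trade-off the policy option above describes: men receive somewhat less per month and women somewhat more than under gender-specific tables, for the same account balance.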
Pursuant to a congressional request, GAO reviewed the issue of social security reform and women's retirement income, focusing on: (1) why women's benefits are lower than men's under the current social security system; (2) the possible differential effects on women of the new privatization reform proposals; and (3) what can be done to minimize the possibly negative effect on women of certain elements of the social security reform proposals. GAO noted that: (1) women's average social security benefits are lower than men's for a number of reasons, most of which relate to women's lower rates of labor force participation and lower earnings levels; (2) although the labor market differences between men and women have narrowed over time, the Bureau of Labor Statistics does not project that they will disappear entirely, even in the long term; (3) the reform proposals that would create individual private savings accounts and change the way benefits would be distributed from those accounts are the most likely to affect women and men differently; (4) a retirement income system that is based in large part on mandatory contributions of a fixed percentage of earnings and on individuals' making their own investment decisions could lead to women's receiving relatively lower benefits than men; (5) working women earn less than men, on average, and therefore would have fewer funds to invest in their individual accounts; (6) GAO's analysis of women in their prime earning and saving years suggests that they are less likely than men to invest in potentially higher yielding, though riskier, assets such as stocks, which would generally leave them at risk of having accumulated relatively less in their accounts at retirement; (7) even if men and women enter retirement with equal amounts in their individual accounts, women may receive a lower monthly benefit if they buy an individual annuity--a monthly benefit for the life of the worker or the worker and a spouse--because it is adjusted for
their greater longevity; (8) changes over time in women's labor force behavior and experience are projected to reduce, but not completely eliminate, the differences in men's and women's labor force participation rates and earnings; (9) any reform of the system that bases benefits on earnings will continue to produce different benefit levels for men and women; (10) if a reformed Social Security system were to rely largely on individual investment, better education about investment strategies and general financial principles might help women workers increase their retirement benefits; (11) in addition, requiring that retirement savings be annuitized would better protect dependent spouses; and (12) annuities purchased with individual account balances might give rise to differential benefit levels for men and women with the same level of lifetime earnings because women are charged higher annuity prices, based on their longer average lifespan.
Background The Importance of Having a Forward Military Presence Overseas Maintaining an overseas military presence that is prepared to deter threats and engage enemies remains an enduring tenet of U.S. national military strategy and priorities. For example, the National Military Strategy notes that an overseas presence supports the ability of the United States to project power against threats and support the establishment of an environment that reduces the conditions that foster extremist ideologies. The strategy also notes that keeping an overseas presence serves to assure U.S. allies; improves the ability to prosecute the global war on terrorism; deters, dissuades, and defeats other threats; and supports transformation. The Chief of Naval Operations earlier this year underscored the continuing importance of forward-deployed forces, noting “Our forward rotations remain critically important to our security, to strengthening alliances and coalitions, and to the global war on terrorism. But it is clear we must make these rotations with purpose, not just to fill the calendar.” Current Operational and Budgetary Pressures on Ship Procurement and Operational Accounts In early 2001, the Chief of Naval Operations recognized the challenge of accomplishing the Navy’s missions within its budget. In February 2001, the Vice Chief of Naval Operations established a task force to explore force structure options facing the naval service, noting that in order for organizations to remain vital and competitive “they maintain their options and seek innovative developments that may provide simpler, more convenient, or less costly alternative solutions to their needs.” One of the task force’s primary assumptions was that the Navy leadership understands that there may be insufficient procurement funds available to maintain current fleet size. Another assumption was that the demand for naval forward presence would remain greater than the supply, regardless of fleet size. 
Within a year, an operational studies group within the Office of the Chief of Naval Operations noted that alternative crewing approaches might be necessary to sustain the pace of global operations, especially in the global war on terrorism. More recently, senior Navy officials have warned that budgets will remain tight. In June 2004, the Secretary of the Navy stated that DOD will have less money for recapitalization because the defense budget will not continue growing at the rates it has in recent years. The Navy’s acquisition executive has also noted that the Navy is employing multiple strategies that eventually may reduce the number of ships, submarines, and aircraft it purchases, saving taxpayer dollars as it seeks more effective ways of employing its forces so that fewer of them can provide the capabilities needed to accomplish assigned missions. Rotating Crews Is a Part of Force Structure Assessment One such effort that may enable the Navy to sustain a high pace of operations within expected budgets involves the rotation of crews on and off forward-deployed Navy surface ships. While the Chief of Naval Operations stated earlier this year that the ideal fleet size would be about 375 ships, he also said that he is no longer willing to commit to any specific number of ships until the Navy completes a new assessment of ship requirements. The assessment, which started this year, will evaluate the potential impact on force structure requirements from keeping ships at sea for longer than standard 6-month deployments by rotating the crews on and off. He noted the Navy’s recent experience with keeping two destroyers on extended deployments, in which these two ships provided overseas presence equivalent to that of 8 to 10 ships on normal deployment schedules.
Traditional Ship Employment Cycle Provides Limited Time in Theater The amount of time a ship ultimately spends forward deployed in a theater of operations is affected by several factors in its employment cycle. These factors include length of a deployment, transit speeds and port calls, crew training and certification, ship maintenance requirements, and maintaining sufficient readiness for surging forces during nondeployed periods. The result is that a ship homeported in the United States and deploying to the Persian Gulf area for 6 months will normally spend less than 20 percent of its time in theater and that the Navy would need about 6 ships to maintain a continuous presence in the region over a 2-year period. As part of the transformation efforts to increase the fleet’s operational readiness and responsiveness, the Navy recently implemented a new operational strategy—called the Fleet Response Plan—that changes the manner in which it maintains, trains, mans, and deploys its ships. The overall objective of the plan is to create a more responsive force by sustaining a more level balance of training between deployments, instead of dropping to minimum rates of readiness upon return from deployment and then gradually rebuilding its state of readiness throughout a 12-month training cycle that follows major maintenance of the vessel. The plan also modifies long-standing forward presence policy of predictable, 6-month deployments to predetermined regions. This flexible deployment concept allows units that have attained high readiness to embark on deployments of varied duration—but still generally no longer than 6 months—in support of specific national priorities, such as homeland defense, multinational exercises, security cooperation events, deterrent operations, or prosecution of the global war on terrorism. 
These deployments provide what the Chief of Naval Operations calls “presence with a purpose,” and are intended to occur in less predictable patterns to keep potential adversaries off guard. Ship Crewing Options In addition to the standard ship and crew employment cycle, the range of Navy crewing options falls into four major categories: (1) Sea Swap, (2) Horizon, (3) Blue-Gold, and (4) partial or graduated crew swapping. Each of these options can be implemented in varying ways and may have different advantages and disadvantages, but the Navy’s actual experience with nonstandard crewing concepts on surface ships is limited. Standard crew deployments use one crew per ship. Most of the crewmembers are assigned to the ship for 4 years, and it is common for crewmembers to deploy overseas on the same ship more than once. Standard ship deployments occur once every 27 months for a period of 6 months, of which the ship and crew are on-station for 3 to 4 months, depending on whether the ship deploys from the east or west coast of the United States. Most Navy ships and their crews employ the standard crew deployment option. The Sea Swap option uses the same number of crews as ships. Notionally, under this option, one of the ships deploys two, three, or four times longer than the standard time by rotating crews every 6 months at an overseas location. Ideally, all of the Sea Swap ships share an identical configuration, so crew performance and capability are not degraded because of ship differences. Because crews do not return to the ships on which they trained, under a four-ship Sea Swap option, some crews could serve on three different ships in just over 6 months and be expected to demonstrate combat proficiency on each one. A limited number of destroyers and patrol coastal ships have employed the Sea Swap option in recent years. The Horizon option involves one or two more crews than hulls, such as four crews for three ships or five crews for three ships.
Crews serve for no more than 6 months on ships that are deployed for 18 months or more. Under a three-ship Horizon option, crews could serve on at least two ships in just over 6 months and be expected to demonstrate combat proficiency on each one. In addition, each crew would be without a ship for a period of time and stay ashore at a readiness, or training, center. This crewing option was employed on mine countermeasure ships during the 1990s. The Blue-Gold option assigns two complete crews, designated “Blue” and “Gold,” to a single ship. Most of the crewmembers are assigned to a ship for several years, and it is common for them to deploy overseas on the same ship more than once. Crew deployments would not exceed 6 months and are often of much shorter duration. An advantage of this option is the crews’ familiarity with the ship. However, a disadvantage is that proficiency can degrade since crews sometimes do not have a ship on which to train and must rely on mock-ups and simulators. The Blue-Gold option has been employed by the strategic submarine force and the HSV-2 Swift. Partial or Graduated Crew Swapping Partial crew swapping has been employed on a limited basis. The most notable use of this option involved the exchange of crewmembers between a ship based in Japan with a ship based in the United States in 1999. In a variation on this theme, portions of a ship’s crew are swapped out at regular intervals, for example, one-quarter of the crew every 2 or 3 months. Rotational Crewing Believed to Provide Forward Presence Benefits The most significant advantage attributed to rotational crewing options is the more efficient use of a ship in an overseas operating area. This is accomplished by keeping the ship on extended deployments, ranging from 12 to 36 months or longer, while at the same time not increasing the crew’s time away from home.
Top Navy officials, including the Chief of Naval Operations, believe that increased efficiencies from rotating crews enable the Navy to perform the same number of missions with fewer ships or to increase the number of missions with the same force size. For example, the Navy’s acquisition executive stated that if the Sea Swap option is employed on its next generation guided missile destroyer, the DD(X), the Navy might be able to reduce requirements from 24 to 16 ships and apply the savings toward the next generation cruiser. Disadvantages often associated with rotational crewing include increased infrastructure costs; deteriorating ship material condition and lack of ready access to maintenance support while on extended deployment; decreased readiness due to differences between ships; and decreased quality of life and other sociological issues for crew members, including the sense of less “ship ownership,” fewer port calls, and cultural changes. Recent Sea Swap Destroyer Demonstration Project Assesses Feasibility The Navy recently conducted a 2-year demonstration to determine if two destroyers could (1) provide more deployment time on-station and (2) maintain sailor quality of life while rotating crews. The Navy declared the demonstration a success, stating that the ships operated well while increasing their operational capability. In its report on the Sea Swap demonstration project, the Center concluded that the concept was clearly feasible. However, the Center noted that there were problems and limitations. While none of the problems was considered a showstopper, the Center stated that the Sea Swap demonstration afforded the opportunity to learn lessons in order to enhance the use of the practice in the future. Many of these, such as the need for improved accountability, oversight, and understanding of maintenance strategies, are discussed in this report.
Key Commands Responsible for Implementing Rotational Crewing on Surface Ships The Chief of Naval Operations has charged the Commander, Naval Surface Force, U.S. Pacific Fleet, with being the primary proponent for demonstrating the feasibility of rotating crews on surface ships as well as assessing the cost of the various options and providing oversight and accountability. To date the Command’s emphasis has been on using the Sea Swap demonstration as a “proof-of-concept” for rotational crewing. It provided the guidance implementing the concept, approved the assessment plan, and used Center support to collect and analyze some data. However, other commands are involved in implementing other rotational crewing options on surface ships; they include the Mine Warfare Command and the Amphibious Group Two Command. See appendix II for a more complete list of organizations involved. The Navy Has Not Systematically Evaluated the Feasibility and Cost-Effectiveness of Rotational Crewing for Surface Ships Although the Navy’s senior leadership has initiated a change in how the Navy can operate in the future by demonstrating that rotational crewing is a feasible alternative to traditional 6-month ship deployments, the Navy has not systematically evaluated the feasibility and cost-effectiveness of all rotational crewing options for its current and future classes of surface ships. The Navy has documented that rotational crewing helps to increase the forward presence of its ships beyond the traditional 6-month deployment periods, and officials have indicated that they want to make greater use of rotational crew options. While the Navy has conducted some limited assessments of the Sea Swap destroyer demonstration project, it has not developed a comprehensive common analytical framework to assess the potential impact of all rotational crewing options on the material condition of all of the ships, operational requirements, and the quality of life for crews. 
In addition, the Navy has not collected complete and consistent information that is critical for comparing different crewing alternatives, such as which alternative most cost-effectively meets specific requirements while maintaining warfighting effectiveness. In the absence of a common analytical framework, Navy officials assigned to ships that have used or experimented with crew rotations have been left to develop their own goals, objectives, and metrics, and the results to date have been uneven. As a result, the Navy does not have complete and accurate data, including cost data that reflect total ownership and operating and support costs, readiness, and crew quality of life, making success or failure of the individual options involving different types of ships difficult to determine. In the absence of a systematic evaluation, the Navy also does not know the extent to which rotational crewing options can provide maximum return on investment and economically offset future ship total ownership costs. Navy Has Some Evidence to Show That Rotational Crewing Increases Forward Presence and Is Considering Greater Use The Navy has developed some data to demonstrate that rotational crewing helps to increase the forward presence of its ships beyond the traditional 6-month deployment periods. Table 1 shows the percentage of time a ship would be notionally forward deployed during the employment cycle for each type of crew deployment option and the number of ships that would be required to keep one vessel continuously operating in the Persian Gulf. Given such promise for improving deployment efficiencies, Surface Force Pacific Command officials have considered using rotational crewing options on other ships.
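The deployment-efficiency relationship underlying table 1 reduces to simple arithmetic: the number of ships needed to keep one continuously on-station is roughly the length of the employment cycle divided by the time a single ship spends on-station per cycle. The following is a minimal illustrative sketch, not a Navy model; the cycle and on-station values are assumptions consistent with the standard-deployment figures cited in this report (a 27-month cycle with 3 to 4 months on-station, i.e., less than 20 percent of a ship's time in theater), not values taken from the table:

```python
import math


def ships_for_continuous_presence(cycle_months, on_station_months):
    """Ships needed so one is always on-station, assuming staggered,
    identical employment cycles (rounded up to a whole ship)."""
    return math.ceil(cycle_months / on_station_months)


# Standard deployment: 27-month cycle with roughly 4 to 4.5 months
# on-station, consistent with the report's estimate of about 6 ships.
print(ships_for_continuous_presence(27, 4.5))  # 6
print(ships_for_continuous_presence(27, 4))    # 7
```

Under a rotational-crewing option that keeps the hull deployed longer, the on-station months per cycle rise, which is how the same continuous presence can notionally be provided with fewer ships.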
For example, in July 2004, the Commander, Naval Surface Force, indicated plans to use the Sea Swap option on an Arleigh Burke-class destroyer based in the Atlantic Fleet and an expeditionary strike group based in the Pacific Fleet, during the spring and summer of 2005, respectively. In addition, Mine Warfare Command officials informed us in July 2004 that the command intends to rotate crews on mine warfare ships based in the Persian Gulf later in the year but had not yet determined which option it will use. The Navy is also considering rotational crewing for operating future ships, which could change the number of new ships that might be purchased. For example, the Navy is designing and procuring the littoral combat ship and the DD(X), which will cost billions of dollars. The Navy has suggested that if crew rotations with an expeditionary strike group are as successful as the Sea Swap destroyer demonstration, the planned number of DD(X) destroyers can be reduced and the savings applied to other high-priority ships. No Established Framework and Limited Information for Comparing Crew Rotation Options While the Center and the Surface Force Pacific Command have conducted some assessments of the Sea Swap demonstration project, the Navy did not have an analytical framework or collect the information that would be needed to assess and compare all crewing options. Lacking such a framework, the Navy has not systematically assessed the effect that rotational crewing has on such factors as the ships’ material condition and readiness or crew quality of life and training. Additionally, the Navy has not systematically evaluated the cost-effectiveness of the various crewing options.
Analytic Framework Would Improve Ability to Evaluate Crewing Options Best practices show that an analytic framework that includes measurable goals and objectives, performance metrics, and evaluation plans would allow decision makers and others to receive consistent information needed to compare and assess different policy options, measure implementation progress, and determine whether the desired results were being achieved. Without such information for the various crewing options, Navy managers do not have a clear picture of the status of the crew rotation efforts, whether potential benefits from different crew rotations are being achieved, which option might be best in certain situations, and whether major issues need to be resolved. The Navy has not established formal criteria for evaluating the implementation of the various rotational crewing options because its focus has been on demonstrating the feasibility of the concept rather than on assessing and formalizing the options. For example, the Navy did not establish evaluation criteria prior to implementing Sea Swap, and none was identified in the Center’s Sea Swap assessment plan. As a result, the Center lacked criteria for judging ship condition and crew quality of life. According to the Center’s September 2004 report, the Navy had no intent to control the operational activities in the sense of a scientific experiment, where one notionally scores a probability of success or other such measure of effectiveness. It said the intent was that general conclusions about the feasibility and difficulties of pursuing the Sea Swap concept for future force employment planning would be drawn from the experiment. Moreover, the Navy did not have comparable assessments for the options employed on other ships such as the patrol coastal ships and the HSV-2 Swift. 
Common data and analyses are not available for comparison because, in the absence of a common analytical framework, individual commands using crew rotations have been able to decide on their own what (1) goals, objectives, and metrics to establish; (2) data to collect; and (3) evaluations to do, if any. Such goals, objectives, and metrics on ship condition and quality of life, which could affect crew retention, were not established prior to deployment, and complete information on these factors was not systematically collected during and after deployments. As a result, while the Navy has reported that the Sea Swap demonstration project was a success for the destroyers involved, the Navy lacks clear criteria to objectively evaluate how well the project did and the project’s potential against other rotational crewing options in two key areas we assessed—the condition of the ship and the crew’s quality of life. Material Condition of Ships Not Systematically Assessed The Sea Swap demonstration project collected data on ship condition that could be valuable. However, complete data were not systematically collected on the ships before deployment, and there were no clear criteria for comparing the ships’ condition upon return. For example, the Navy conducted a total ship readiness assessment of the U.S.S. Higgins, one of the two demonstration destroyers, in April 2004, shortly after the ship returned from its 18-month deployment. This post-deployment assessment of the combat, hull, mechanical, and electrical systems was used to compare the U.S.S. Higgins’s material condition to the U.S.S. Decatur’s. The U.S.S. Decatur, a guided missile destroyer, had recently completed a standard 6-month deployment. According to Surface Force officials, there was no significant difference between the two ships’ material condition upon return. However, there is some disagreement about the criteria and interpretation of the data used in reaching this conclusion.
This is discussed in more detail on pages 37-39. In its report, the Center cautioned that further analyses of ship material condition are needed. Comparable assessments of ship condition are not being performed on the U.S.S. Fletcher, the other Sea Swap destroyer on extended deployment. The Navy is missing an opportunity to collect data and more objectively assess the impact of extended deployments on ship condition. A more stringent independent inspection for the U.S.S. Higgins is scheduled in January 2005, about 8 months after its return from deployment and likely after having received significant shipyard maintenance and modernization. Furthermore, Surface Force officials also told us that a comparable pre-inactivation inspection, which is normally performed, would not be done on the U.S.S. Fletcher because it is being decommissioned and they do not want to spend the money. Quality-of-Life Issues Not Fully Assessed An objective of Sea Swap was to maintain the crews’ quality of life. The Center’s study plan stated the Center would examine how the project improved or degraded the quality of life and quality of work for Navy personnel through surveys and interviews with crewmembers. However, the Navy did not establish goals for determining the quality-of-life success of the Sea Swap program. As a result, even though the Center had collected data on morale, it could not conclude whether Sea Swap had succeeded or failed in this regard. Also, the Navy has no plans to monitor crews’ quality of life for the patrol coastal ships and the HSV-2 Swift. The need for such an analysis is borne out by the impact of crew morale on reenlistment rates. Quality of Life Is an Important Factor in Sailors’ Career Choice Sailors’ views of their quality of life are an important factor in determining whether they will choose to continue their military careers.
The Chief of Naval Operations has recognized the importance of people in making the Navy successful in performing its mission and has consistently made manpower and quality of service top priorities. According to the Chief, “Quality of work includes everything that makes your workplace a great place to be—from getting the spare parts you need in a timely manner to working spaces that are up to current standards.” Sea Swap’s Implementation May Have Been Key to Quality-of-Life Concerns Information collected by the Center, and by us during our review, indicated that implementation of the Sea Swap demonstration project had a negative effect on crewmember quality of life. While noting that Sea Swap had been successful technically, the Center’s pre- and post-deployment surveys of the crew showed that Sea Swap adversely affected morale because of the increased workload, fewer opportunities for liberty port calls, and crewmembers’ general impression that the Sea Swap deployment was worse than their previous deployment. For example, the Center asked crews about their expectations for Sea Swap compared to previous deployments. The survey results showed that 65 percent of the arriving crews expected that Sea Swap would be a worse experience than their last deployment; among departing crews, 84 percent said participating in Sea Swap was worse. Our focus groups with crews on the U.S.S. Higgins and the patrol coastal ships also revealed a negative quality of life. The Center and we both identified several factors that contributed to sailors’ feelings, including workload, training and professional development opportunities, quantity and quality of port visits, and several sociological issues (e.g., culture, ship “ownership,” sense of pride and recognition, predictability, or Navy tradition). The Center also noted that Navy leadership would need to understand which features of Sea Swap cause negative perceptions. We addressed quality-of-life issues in each of our 43 focus group meetings.
All 26 of our focus group meetings with Sea Swap destroyer crews that served on the U.S.S. Higgins and with crews on patrol coastal ships attested to a highly negative quality of life, decreased morale, and a strong desire not to participate in any more crew rotations implemented like their most recent experience. Many crewmembers indicated that rotational crewing might have had a more positive effect if the following conditions were met: crew swapping had been better managed to ensure work accountability to reduce the workload, there had been time for individual training and professional development, promises had been kept on designated port calls, port calls had been phased throughout the deployment instead of at the end when sailors just wanted to return home, return flight schedules had been better coordinated, and proper recognition had been given to each crew. A small number of crewmembers indicated that their Sea Swap experience was positive in that they liked knowing they would be on a finite deployment period of 6 months. In contrast, the 17 focus groups we conducted with Blue-Gold crewmembers from the HSV-2 Swift and the strategic submarine force found that these crewmembers had a generally positive crew rotation experience. They attributed their positive experiences to a level workload, management accountability, predictable schedules, individual training and professional development opportunities, and sufficient amounts of personal time during the interdeployment cycle, despite the ships’ high operational tempo. Negative Morale Impacted Reenlistment Rates Lower reenlistment rates for sailors with less than 6 years of service who served on Sea Swap guided missile destroyers and patrol coastal ships reinforced the Center’s survey results and our focus group findings. Both Pacific Fleet and Surface Force Command officials identified reenlistment data as a key indicator of whether crews are satisfied with rotations.
The Center’s survey and our analysis showed that negative morale associated with participating in Sea Swap had an adverse impact on reenlistment rates. The Center’s conclusion was based on a series of crew surveys. According to the Center, 55 percent of the crew said after the deployment that they thought that Sea Swap would make them less likely to stay in the Navy, versus 39 percent before the deployment, and 73 percent stated that if all deployments were like Sea Swap, they would be less likely to stay in the Navy. Our analysis of overall reenlistment data for sailors with less than 6 years of active service indicated that the crews on all three Arleigh Burke-class destroyers involved in the Sea Swap demonstration experienced 50 percent reenlistment rates. These rates were below the Navy-wide reenlistment goal of 56 percent for this group and the actual 64 percent reenlistment rate for non-Sea Swap Arleigh Burke-class destroyers based in the Pacific Fleet. Because the first-term reenlistment rates for the three Sea Swap destroyer crews were as low as 23 to 37 percent during their Sea Swap cycle, these ships were among the few that did not meet the Navy-wide reenlistment goal. If the Navy expands rotational crewing without understanding its full impact on crews, the results could affect retention and crew support. Cost-Effectiveness of Crew Rotation Options Not Systematically Evaluated The Offices of the Chief of Naval Operations and Navy commands using crew rotations have not systematically collected cost data for assessing the return on investment or cost-effectiveness of all surface ship rotational crewing options for current and future ships. The Navy testified to the Senate Committee on Armed Services in March 2002 that it would determine the true cost and potential savings of one rotational crewing option, Sea Swap, to provide a firm analytical basis in order to decide whether to expand use of that option or look for other alternatives.
Recently, the Commander, Naval Surface Force, initiated a limited effort to collect and model costs. However, to date, data collection and analyses comparing the cost of all the crew rotation options have not been completed. Cost-effectiveness is a method used by organizations seeking to gain the best value for their money and to achieve operational requirements while balancing costs, schedules, performance, and risks. The best value is often not readily apparent and requires an analysis to maximize value. A cost-effectiveness analysis is used where benefits cannot be expressed in monetary terms but, rather, in “units of benefit,” for example, days of forward presence. Such an analysis would be of particular importance when making return on investment decisions about how many ships to buy and how to operate them. Moreover, officials in DOD’s Office of Program Analysis and Evaluation told us that they have not conducted a basic cost-effectiveness analysis of rotational crewing alternatives. Nonetheless, they believe that rotational crewing is a good concept, that the Navy needs to perform these analyses, and that they were not aware of any such analyses having been conducted in the Navy. The Naval Cost Analysis Division cited DOD cost analysis guidance and procedures that would be applicable to a cost-effectiveness study of rotational crewing alternatives. According to Division officials, this guidance is to be used as the basis for preparing program life-cycle cost estimates, and provides information on the scope of the cost analysis, the procedures, and the presentation of the estimates. Division officials also told us they have not participated in any rotational crewing cost-effectiveness studies nor are they aware of any. 
Officials in both DOD and Navy offices indicated that the cost analyses for crew rotation alternatives should include the development of a cost structure for identifying all the relevant cost elements in the program, including depot level maintenance, fuel, training, infrastructure costs, and other costs unique to the program. While Surface Force Pacific officials had developed limited information on costs and savings unique to the Sea Swap destroyers, this information was incomplete, and they have not developed comparable data for the patrol coastal ships and the HSV-2 Swift. Examples of information collected included the estimated fuel savings from ship transits that were not needed; transportation, room and board for flying the crews to turnover cities; and special training. These officials told us that they plan to collect additional data to help evaluate Sea Swap costs, but that they are still determining what cost data should be collected and how to establish a baseline for control group comparison purposes. Furthermore, they told us that collection of the data will be challenging because there is no central database or automated system for coding rotational crewing-related expenses that could be used for documenting the unique costs associated with rotational crewing. The officials were also concerned that Navy management and accounting data systems are not integrated, making it difficult to collect complete and actual cost information that could be helpful in identifying the costs of the Sea Swap initiative. Surface Force Pacific officials have also determined that they have responsibility for assessing the costs of crew rotation for the patrol coastal ships, but they had not been doing so. Amphibious Group Two officials told us in October 2003 that they have not systematically evaluated costs and are not aware of any cost-effectiveness analyses of rotational crewing being conducted.
Surface Force officials said that more complete cost data for patrol coastal ships must be collected and analyzed to allow for cost-effectiveness comparisons. Notwithstanding the limitations in the available cost data, Naval Surface Force Pacific officials told us they recently developed and are refining a model that presents more comprehensive information. For example, in a July 14, 2004, briefing, the Force's commanding officer presented costs of the Sea Swap demonstration, including a comparison of the costs of both the U.S.S. Fletcher and the U.S.S. Higgins with those of other ships in their respective classes, as well as the average costs per deployed day. Surface Force Pacific officials said that this model is also used to present similar data for the future littoral combat ship. However, we were informed that much of the data used in the model is based on estimates rather than actual costs and that some costs integral to evaluating rotational crewing options, such as maintenance and training infrastructure, were not included. Furthermore, the model has not been tested or validated by the Navy.

Navy Has Not Provided Effective Guidance or Capitalized on Lessons Learned from Rotational Crewing Experiences

The Navy has done some planning in support of rotational crewing on surface ships, such as for the Sea Swap demonstration project, but because the concept is evolving as an alternative, the service has not provided effective guidance during implementation on all ships to ensure proper oversight and accountability. Furthermore, the Navy has not systematically leveraged lessons learned to effectively support rotational crewing. Effective guidance and sharing of lessons learned are key management tools for overcoming challenges associated with institutionalizing change and facilitating efficient operations.
The Navy has well-established crew rotation policies and procedures for ballistic missile submarines that can serve as best practices; these include appropriately documenting the ship's condition and using advance teams to help prepare for crew turnover and help ensure accountability. However, the Navy has not provided comparable guidance, with policies and procedures to ensure proper crew turnover and accountability, to all surface ships using rotational crewing. Consequently, the management of surface ship crew rotations has been informally delegated to each ship's incoming and outgoing commanding officers. This has resulted in inconsistent management of, and accountability for, operational factors, such as the ship's condition and ship inventories, when one crew replaces another. In addition, the surface ship community has not systematically collected, recorded, or disseminated lessons learned from all rotational crewing experiences. Although the Navy has a formal system to record lessons learned from fleet operations, experiences from crew rotations are not being recorded in the system so that they can be routinely shared among the surface ships and commands using rotational crewing. As a result, the Navy unnecessarily risks repeating past mistakes that could decrease warfighting effectiveness and crew morale.

Navy Conducted Some Planning in Support of Rotational Crewing

Because rotating crews aboard surface ships on extended deployments differs from the traditional 6-month ship deployment, effective planning is important to increase institutional knowledge and gain acceptance for implementing the change. The Navy has performed extensive planning in support of rotational crewing on ballistic missile submarines. However, crew rotation planning for the surface ship community has been limited and less formal.
Submarine Community Has Established Planning Elements

The submarine community has a well-established concept for conducting Blue-Gold crew rotations, based on 40 years of experience on fleet ballistic missile submarines. As a result, we analyzed the community's concepts, procedures, and processes to identify "best practices." We found that this concept has three key elements: formalized turnover policies and procedures; a training plan that maintains the proficiency of crews that are in port; and a maintenance plan that includes crew and incremental maintenance.

Formalized Crew Rotation Turnover Policies and Procedures Help Ensure Accountability

The Navy's Submarine Forces Command developed formal policies and procedures for crew turnover in order to establish a comprehensive picture of a ship's material condition and accountability for controlled material and documents, scheduled maintenance, and supply. The turnover process takes place over 2 to 3 days, during which the on-coming crewmembers from each department and division meet with their off-going counterparts to review detailed turnover checklists that cover issues such as personnel, training, administration, maintenance logs, classified material, ship operational funds, parts, and food supplies. For example, both crews review the status of preventive and corrective maintenance repairs recorded in equipment status logs, which help document the material condition of the ship. This information is passed from one crew to another during turnover to maintain continuity of maintenance. Both crews also review an inventory of provisions, medicines, hazardous material, and information technology equipment. Crewmembers from both crews are required to sign the checklists, and the two ship commanders are ultimately responsible for ensuring accountability for the material condition of the ship.
By taking these steps, the on-coming ship command has the opportunity to note unsatisfactory conditions, including significant personnel, training, operational readiness, habitability, and material deficiency issues, on an exchange-of-command report. Turnovers can be delayed if both crews do not agree on the ship's material status. Members of one crew we met mentioned that they take pride in conducting the turnover because they want to set the standard for their partner crew.

Training Programs Maintain Proficiency of the Crew While Ashore

Maintaining the operational proficiency of the crew that is in port without a submarine is the main challenge to the strategic submarine force's Blue-Gold system. In response, the strategic submarine force has developed a training program to maintain crew proficiency in core competencies while ashore. This program is designed to update crews on recent procedural changes, allow crews to perform maintenance operations, and refresh personnel who have been away from their duties for several months. Crews receive classroom instruction and maintain their skills in simulators at the Trident Training Facility. Crews are monitored and evaluated through graded individual and group exercises. Officers and crewmembers stated that they generally received adequate and sufficient training at the training facility to perform their mission. Nevertheless, they stated that simulated training is not the same as training on a ship and that crew readiness is lower during the first week of deployment as crewmembers refamiliarize themselves with the ship and their mission.

Crew and Incremental Maintenance Plan Designed for Rotational Crewing

The ballistic missile submarine maintenance concept was specifically designed to accomplish incremental maintenance over a 42-year life cycle.
The concept consists of crew maintenance, in which both crews work together to conduct repairs during an in-port maintenance period, and incremental maintenance, which consists of planned and unplanned corrective maintenance. The submarine community has formal guidance for the in-port maintenance period, during which both crews jointly conduct maintenance repairs. One main purpose is to enhance the efficiency and productivity of the maintenance period. During this time, both crews operate under one chain of command; the off-going crew reports directly to the on-coming ship commander. Once the submarine is at sea, the off-crew works with the maintenance facility and the on-crew to develop a work package of needed preventive and corrective maintenance repairs. As a result, during the next in-port maintenance period, the crew that has just taken command knows what to expect. Officers and crewmembers in our focus groups stated that this approach was key to completing required maintenance repairs in a short period. According to officers on one submarine, it also helps ensure that items that may not have been captured during turnover are identified. In addition, crews stated that this concept decreases the incentive to push work off onto the other crew, because both crews conduct the needed maintenance repairs. The incremental maintenance plan involves routine maintenance based on a set schedule common to all submarines and corrective repairs, which include items that break or are in a degraded condition as a result of operations. The Trident Planned Equipment Replacement Program, another aspect of incremental maintenance, provides for repairs on hull, mechanical, electrical, or combat control system equipment that require maintenance beyond the ability of the ship's crew.
The incremental overhaul relies on an extensive shore-based maintenance infrastructure, including dedicated full-time maintenance personnel, maintenance facilities that provide a full range of repair and maintenance services, and dry docks that provide the support necessary to conduct required equipment repairs and replacements.

Limited Planning for Surface Ship Crew Rotations

Despite the challenges of implementing this change in crewing practice, the surface ship community's planning in support of crew rotations has been less formal and limited to several areas, including crew training on the different systems used on participating ships, use of advance crew turnover teams, and the location and timing of port calls. Crews in our focus groups also identified some limitations to these planning efforts.

Planned Training for Different Equipment and Systems between Ships Had Limitations

A Naval War College study of several crew rotation options identified crews' unfamiliarity with differences in equipment and systems between ships as a potential challenge for conducting the program. As part of the Sea Swap demonstration project, the Commander, Naval Surface Force, sought to address differences in ship design, construction, and modernization between forward-deployed and nondeployed ships by providing crews with predeployment training specific to the forward-deployed ship they would join. The Command planned for the training to account for many of the differences between the destroyers, with emphasis on the systems and equipment of the forward-deployed ship, and set up training classes in the United States and Sea Swap cities. For example, one on-coming crew received training to ensure its proficiency in areas such as critical weapons systems and engineering prior to the ship's turnover. Amphibious Group Two also provided training to patrol coastal ship crews to help them bridge the engineering differences they would face on the deployed ship.
However, in our focus group discussions, ship crews participating in Sea Swap and on the patrol coastal ships cited concerns about the adequacy of this training. The crewmembers indicated that proficiency improved with practice drills but that sufficient proficiency was not achieved prior to deploying, even though they had received their certifications. The delay in achieving proficiency was accentuated for the crew that swapped in the Persian Gulf because the crewmembers did not have the opportunity during a transit to become familiar with their new ship. For example, crewmembers on one ship stated that they received only partial training in operating a new radio that is necessary for conducting strike operations. This partial training degraded the crew's ability to shoot Tomahawk land attack missiles. Crewmembers also stated that they did not receive training to operate damage control radios, which meant the crew would have been unable to use the radios in an emergency. Patrol coastal ship crewmembers also indicated that they faced challenges in training to operate the deployed ship's different equipment. For example, crewmembers stated they did not receive weapons training for Stinger missiles prior to overseas deployment because these weapons systems are not typically used while in the United States. Patrol coastal ship focus group comments revealed that the crews compensated for training deficiencies with self-initiated training during deployment. These crews also received some training from the Coast Guard while in theater. They felt the deficiencies in training on different systems compromised their ability to perform their respective missions.

Value of Some Advance Turnover Teams Was Limited

The Surface Force Pacific Command established advance turnover teams to assist ships participating in the Sea Swap destroyer demonstration project, but their assistance was sometimes constrained.
These teams consisted of approximately 15 to 20 members of the on-coming crew who were sent to the forward-deployed ship 2 weeks in advance of the turnover to conduct inventories and observe ship operations. The use of an advance turnover team was an effort to expedite the turnover process from one crew to another. A Command official cited the work performed by these teams as instrumental in reducing the amount of time required for the turnover and in increasing the on-coming crews' familiarity with the new ship. However, crewmembers in our focus groups stated that advance teams were not as effective as they could have been in some turnovers because they were denied access to areas and equipment in the ship at the time of the turnovers. For example, a regional support office assumed control of a Sea Swap destroyer in the United States, locked up the workspaces, and did not grant the advance team access. In another instance, the advance team arriving on the ship overseas was not given access until the new crew assumed responsibility for the ship, which limited the team's time and ability to expedite an effective turnover.

Navy Crew Rotation Efforts Have Lacked Standard Guidance to Ensure Oversight and Accountability

The Navy's implementation of surface ship crew rotation efforts has lacked effective guidance to ensure oversight and accountability. Because the practice differs from the traditional crewing approach, such guidance is key to ensuring successful implementation. In the absence of such guidance, including standard policies and procedures similar to those used in the ballistic missile submarine community, officers and crews on Sea Swap destroyers, patrol coastal ships, and the HSV-2 Swift developed their own turnover procedures. This caused inconsistency between crews conducting the turnovers, which, in turn, led to problems in documenting ship condition and accounting for ship inventories.
As a result, surface ship crews cited the need to develop and implement standard turnover procedures, including checklists. Crewmembers said there was no document to sign during the turnover to hold crews accountable for recording necessary maintenance repairs. For example, crews reported that Navy systems for tracking maintenance requirements and accomplishments were not systematically used to record maintenance repairs. Officers and enlisted crews on Sea Swap destroyers and patrol coastal ships indicated that, as a result, the ship maintenance logs did not accurately reflect the material status of the ships. One Sea Swap crew reported that the prior crew did not document that the forward fueling station had a hole, which took the entire deployment to fix. In another instance, one crew stated that although three portable fire pumps were required to be on board the vessel, the crew found only two pumps, of which only one worked. Additionally, a patrol coastal ship crew indicated that the previous crew reported only a few needed maintenance repairs in the maintenance log. After turnover, however, the on-coming crew said that it noted about 50 repair items, including the fact that all six main engines could not operate simultaneously. In another case, the electronic preventive maintenance log was not working during turnover, and the on-coming crew reportedly spent 3 weeks repairing it. A ship commander mentioned that properly maintaining the maintenance logs is a challenge because the logs are not valued by all crews. The logs can be valuable tools when used, but he stated that the maintenance logs did not reflect the material status of the ship. Some patrol coastal ship officers stated that every crew emphasizes different maintenance priorities, which can contribute to perceptions of inadequate material condition of the ship during and after turnover.
Notwithstanding different perceptions of the material condition of the ship, Sea Swap and patrol coastal ship crewmembers raised concerns about the lack of accountability, in particular oversight of the documentation of the ship's material condition. Crewmembers from the Sea Swap destroyers and patrol coastal ships cited the need to establish turnover standards and checklists and to conduct an independent inspection to monitor the turnover and review the material condition of the ship. Sea Swap and patrol coastal ship crews also mentioned that accountability for ship inventories was inadequate. Naval supply guidance cites the need to conduct physical inventories of equipment and materials to the extent necessary to ensure effective control of those materials normally required for performing the mission or requiring special management attention. Crewmembers told us that guidance on conducting inventories was not always followed in preparation for and during turnovers. Some crews mentioned that the time required to review supply inventories, a time-consuming turnover activity, was a problem. There were several instances on Sea Swap destroyers of missing tools and equipment, including maintenance assistance modules valued at an estimated $90,000. One Sea Swap destroyer crew also reported discovering during an inventory that a pair of night vision goggles was missing. In another case, the on-coming crew lacked basic supplies, such as cleaning materials, light bulbs, and toilet paper. Crewmembers also reported items missing on their assigned ships upon return that were not identified during turnover. For example, crewmembers of one patrol coastal ship stated that, upon return to the United States, they found that 10,000 rounds of ammunition were missing from their assigned ship. Sea Swap destroyer and patrol coastal ship crews cited the need for an independent authority to hold crews accountable for ship inventories.
Surface Ship Community Did Not Capitalize on Past and Current Lessons Learned

The surface ship community also has not capitalized on existing and evolving lessons learned to more effectively plan and conduct crew rotations. Capturing and sharing such lessons serve to further institutionalize change by improving its implementation. While the Navy has a formal system to record lessons learned, experiences from current rotational crewing efforts are not being systematically collected and recorded in that system. As a result, the Navy is missing an opportunity to record lessons learned that could be leveraged by crews involved in current and future crew rotations. Further, surface ships and commands have not capitalized on the lessons learned already in the system to plan and conduct crew rotations. Consequently, crews experienced difficulties similar to those that previously recorded lessons learned sought to correct.

Navy Lessons Learned System Created as a Central Repository to Preclude the Loss of Knowledge

The Navy created a lessons learned database in 1991 to provide a system for units to benefit from collective Navy experiences, identify deficiencies, and take corrective measures in all aspects of fleet operations. A lesson learned is defined as information that increases the efficiency of Navy processes and improves the execution of future operations. According to the Navy, it should add value to existing Navy policy, doctrine, tactics, techniques, procedures, organization, training, systems, or equipment. The Navy Warfare Development Command is responsible for administering the system, and its officials indicated that information from current rotational crewing efforts should be submitted to the system, as it is the best way for lessons to be shared across the Navy community. Anyone in the Navy can submit a lessons learned report through the immediate chain of command.
Fleet commands process and validate the proposed report, which is then forwarded to be officially entered into the system. Navy personnel ashore and at sea can access lessons learned contained in the system through a classified Internet site. Use of this central repository would preclude the loss of lessons, such as those lost by the Mine Warfare Command in the late 1990s due to a computer failure.

Surface Ship Commands Have Not Made Systematic Efforts to Collect and Record Lessons Learned for the Navy's Central System

The Naval Surface Force Command recognized the need for a comprehensive list of lessons learned in order to examine the Sea Swap initiative, but the Command has not made a systematic effort to collect or record lessons learned, nor did it task Sea Swap crews to identify and submit them. Aside from 78 lessons learned pertaining to crew rotations that took place in 1999 on Forward Deployed Naval Forces in the Seventh Fleet area of operations, no lessons learned directly related to crew rotations had been posted regarding the Sea Swap destroyer, patrol coastal ship, and HSV-2 Swift experiences as of July 30, 2004. Absent guidance, Sea Swap crews' efforts to record lessons learned have been inconsistent. Some crews developed lists of lessons learned that were distributed to other rotational crews and the Command, including lessons related to manning, personnel, supply, predeployment maintenance, training, turnover preparations and execution, turnover time, and advance parties. In one case, a Sea Swap ship undertook a concerted effort to document lessons learned prior to deployment, but a majority of those documents were later discarded because the crew wanted to create additional workspace. By not systematically recording and providing valuable experiences from crew rotations to the Navy Lessons Learned System, the Navy is missing an opportunity to more effectively plan and conduct current and future crew rotations.
In response to a Senate Armed Services Committee request on the status of one of the Sea Swap ships, the Command identified a preliminary set of lessons learned, as shown in table 2. A final report will be provided to the Committee once the initiative is completed. None of these lessons learned from the Sea Swap initiative have been reported to the Navy Lessons Learned System. Efforts to gather lessons learned in the patrol coastal community have also been inconsistent. Amphibious Group Two similarly did not provide direction to collect and record lessons learned and stated that crews involved in rotations passed lessons learned on to one another. A patrol coastal ship commander stated that crew efforts to gather lessons learned were informal. We identified one lessons learned report, sent by a ship commander to the ship's command, Amphibious Group Two, that contained lessons related to maintenance funding, ownership, and maintaining good ship inventories. However, none of these lessons learned had been recorded in the Navy Lessons Learned System as of July 30, 2004. The Mine Warfare Command directed the HSV-2 Swift commanding officers to develop lessons learned reports on five issues. Only two of those reports had been posted to the Navy Lessons Learned System as of July 30, 2004, and neither addressed ship crewing issues.

Many Past Lessons Learned Available in Formal System Are Not Being Systematically Leveraged

The surface ship community has not capitalized on the Navy lessons learned database to plan and conduct crew rotations. A Naval Surface Force Pacific Command official told us that the Command did not systematically solicit available lessons learned from the Navy Lessons Learned System to help plan for crew rotations. We found that participants in our focus groups reported experiencing problems similar to those that several of the formal lessons learned reported by the Forward Deployed Naval Forces in 1999 had addressed.
For example, two important lessons that were not leveraged were reviewing the automated process for transferring crew identification codes when a crew is assigned to a new ship and establishing and abiding by a written agreement between both ship commanders that clearly defines transfer and accountability procedures for equipment turnover. When crews for Forward Deployed Naval Forces were rotated in 1999, the Navy recognized that the ships had not updated crewmembers' records in a timely manner to show the ship to which the crewmembers were assigned. This resulted in incorrect enlisted master files and the inability to process pay transactions. The lesson learned report stated that the Navy should automatically transfer personnel from one code to another in a timely manner, which is crucial to avoiding incorrect master files and the potential loss of certain pay and entitlements. Numerous Sea Swap destroyer, patrol coastal ship, and HSV-2 Swift officers and crews we met experienced similar difficulties. They reported that because their respective codes were not changed to reflect that they had changed ships, some crewmembers experienced problems receiving pay and others were ordered to the wrong ship. Officers and crew from a Sea Swap ship stated that creating codes for each crew would help alleviate similar problems. Assigning crews codes is a standard practice in the ballistic missile submarine community, and this practice was also used by the mine warfare community during its crew rotations in the mid-1990s. The systematic use of an effective lessons learned system could have alerted the Navy to the need for a mechanism to ensure the effective transfer of crews and ships from one code to another in a timely and accurate manner.
Establishing and abiding by a written agreement between the two ship commanders involved in a crew rotation enables both crews to determine, early in the planning phase, what equipment stays with the ship and what stays with the crew and improves accountability for equipment transfers. The Navy's lessons learned database identified the need for such agreements. However, in one turnover, despite the two ship commanders' agreement during the planning phase that each ship's tools, parts, and material would remain with the respective ship and that both crews would review an inventory checklist during turnover, the agreement was not followed. One crew removed many of the tools and other equipment before leaving the ship. As a result, the on-coming crew did not have the tools and other equipment needed to perform maintenance and repairs and had to spend $150,000 to buy the needed tools. Officers and crews from two patrol coastal ships also indicated that, absent an agreed-upon written inventory identifying which items stay with the ship and which stay with the crew, one of the crews took needed ship items back to the United States, in part to ensure that the crew had necessary items on the new ship. Officers from one patrol coastal ship stated that there is a need for a standard set of inventory items that should stay with a ship. Sea Swap and patrol coastal ship officers and crewmembers stated that an independent authority is needed to monitor the turnover process, including an inventory of tools, to hold both crews accountable.

Maintenance Strategies for Alternative Crewing and Potential Impacts Have Not Been Fully Assessed

The impact of ship maintenance on the implementation of rotational crewing has not been fully assessed.
This is because the Navy has been focused on demonstrating the feasibility of the practice and has allowed ships to use different approaches to conducting maintenance without capturing all needed information and examining all related issues that could affect success. A full assessment of maintenance issues on all ships employing this practice would be important in identifying and addressing possible impediments to effectively implementing rotational crewing. Navy destroyers and patrol coastal ships using rotational crews on extended deployments have faced maintenance challenges in ensuring the mission capability of ships while overseas. To help minimize the adverse effects on the material condition of forward-deployed Sea Swap destroyers, the Navy expanded the scope of predeployment maintenance and sent maintenance support representatives in theater to provide additional technical support to crews. Although the Center concluded that the condition of the returning ship, the U.S.S. Higgins, was comparable to that of another ship that had recently returned from a deployment, the results of such efforts in maintaining ship material condition are uncertain. The Center suggested that a review of maintenance support might be necessary prior to expanding Sea Swap to other ships. We found that the need for such an analysis was further supported by the experience of the patrol coastal ships, which did not receive such focused maintenance and identified several maintenance problems that were not corrected while deployed and that could have affected their mission capability. Moreover, both the Center and our focus groups with rotational crews found that increased maintenance tasks contributed to diminished crew morale.
Therefore, while the Navy used rotational crews to keep ships on station for up to 24 months, in the absence of a careful analysis of alternative maintenance strategies, the Navy runs the risk that some maintenance approaches will degrade the long-term condition of ships, diminish crew morale, and discourage crew support for using the practice.

Maintaining Ships on Extended Deployment Is a Challenge

Navy vessels using rotational crews on extended deployments have faced maintenance challenges in ensuring the vessels' mission capability while overseas. Normally, most ship maintenance and repair is completed between 6-month deployments. For instance, Arleigh Burke-class destroyers normally receive continuous maintenance annually and 2-month Selected Restricted Availabilities every 22 months. However, ships employing rotational crews on extended deployments do not return to the United States for periods of 12 or more months, so crews must maintain ship capability while deployed, in compliance with law and Navy guidance on overseas maintenance (see appendix III for details on Navy guidance). According to the Center, each Sea Swap destroyer received more, and more intensive, maintenance support than ships on routine deployments typically receive. This support included numerous predeployment inspections and maintenance on the ships' power, electrical, steering, combat, and other systems to eliminate many maintenance activities that would otherwise have been required during deployment. For example, the predeployment maintenance on one of the Sea Swap ships, the U.S.S. Fletcher, began with the identification of all time-driven maintenance requirements scheduled to fall during the extended deployment. Examples included calibration, assessments, and inspections of equipment to renew time-driven certifications. (Such actions are comparable to checking a car's timing belt or inspecting its brakes and tires before taking a long trip.)
Numerous other inspections were also conducted prior to deployment on selected ship systems and equipment to identify and repair problems and ensure the good working order of the ships. The U.S.S. Fletcher and the U.S.S. Higgins each received inspections of hull, mechanical, and electrical systems, as well as combat systems. The U.S.S. Higgins also received inspections of its Aegis radar system. Sea Swap destroyers also received overseas maintenance support beyond that available to ships on a typical deployment. The Surface Force Pacific Command sent U.S.-based ship engineering material assessment teams, ranging from 3 to 11 members, to perform maintenance on the Sea Swap destroyers while the ships transited from their operational area of responsibility to the overseas locations where crew turnovers occurred. The teams, composed of senior-level maintainers capable of performing a variety of maintenance jobs at the ship's organizational and intermediate levels, also assisted the crews while the destroyers were in port at the crew turnover city. According to Navy maintenance officials, the teams' presence during transit from the theater of operations to the Sea Swap city and in port facilitated the completion of preventive maintenance, particularly repairs associated with ship habitability. Surface Force Pacific Command also assigned a Sea Swap destroyer port engineer to help ship officials develop maintenance plans during port visits, which is not typical for ships on normal deployments. U.S. Naval Forces Central Command officials also noted that the Sea Swap destroyers experienced material degradation over time. As a result, both destroyers required maintenance that was not readily supportable during operations. Navy officials said that Sea Swap destroyers were given preference over other ships for port visits in support of crew turnovers and maintenance. They also said that maintaining ships deployed to the U.S.
Central Command area of responsibility for long periods would continue to be a challenge. During our review, we found that patrol coastal ship rotational crews also faced challenges in maintaining ship material condition. Like the Sea Swap destroyers, the patrol coastal ships received system inspections prior to deployment. Patrol coastal port engineers and maintenance support teams checked key systems—such as engines, weapons packages, and the bridge—to hedge against the wear and tear the ships would experience on an extended deployment. However, U.S. Naval Forces Central Command officials indicated that, unlike the Sea Swap destroyers, patrol coastal ships were not given preferential treatment to support maintenance. The patrol coastal ship community deployed a maintenance support team with the crews in an effort to address overseas maintenance needs; however, these teams are not unique to rotational crewing and typically support any patrol coastal ship deployment. Each team consisted of five members located in theater who performed limited maintenance, ordered and stored parts, and provided administrative support. The scope of the maintenance performed by the teams was limited to organizational, intermediate, and select depot-level maintenance. According to focus groups with patrol coastal ship crews, the maintenance support teams were usually the only personnel in theater capable of rectifying frequently occurring maintenance problems. If a maintenance support team was not available, the crew had to contact a technical support representative in the United States for assistance or try to conduct the maintenance itself. Some patrol coastal ship crewmembers indicated that the maintenance support teams were too small to support both patrol coastal ships on extended deployments and suggested expanding the teams to be comparable to the system used by the Coast Guard.
According to patrol coastal ship crews, the Coast Guard had four ships similar to the patrol coastal ships in theater and provided approximately 50 maintenance personnel to perform the same function as the patrol coastal maintenance support team. The larger size of the Coast Guard's maintenance support allowed its crews to stand down and live in barracks during maintenance periods. By contrast, a patrol coastal officer noted that, during maintenance availabilities, maintenance support teams only assisted the crew and did not take over the work effort; the crews remained on board throughout the repair process and performed maintenance.

Full Impact of Navy Maintenance Strategy for Destroyers and Other Ships Using Crew Rotations Is Not Clear

The results of the different maintenance strategies used to sustain the two destroyers that were part of the Sea Swap demonstration project and other ships using rotational crewing are uncertain. While the Center judged that the condition of the U.S.S. Higgins was comparable to that of another ship that had recently returned from a routine 6-month deployment, others in the Navy disagreed based on inspection results. We did not identify any similar effort to determine the impact of extended deployment on the condition of the patrol coastal or other ships, which would provide the Navy with additional insights. The Center's judgment was based in part on a total ship readiness assessment conducted by Pacific Fleet maintenance personnel, in which Surface Force Pacific officials judged the U.S.S. Higgins' material condition after a 17-month deployment to be comparable to the U.S.S. Decatur's. However, officials from the Fleet Technical Support Center Pacific who performed the assessment thought there were some significant differences in condition between the two ships. These officials found that the U.S.S. Higgins had 697 noted deficiencies out of 3,370 items tested (21 percent), whereas the U.S.S.
Decatur had 465 out of 3,231 items tested (14 percent). While the number of deficiencies alone does not necessarily indicate significant material differences between the ships, the deficient items on the U.S.S. Higgins included data links for controlling operations between a ship and an aircraft, as well as an extremely high frequency communication system that was nonoperational on the U.S.S. Higgins but operational on the U.S.S. Decatur. Fleet Technical Support Center Pacific officials also assessed the operational functionality of each ship's equipment and found that the U.S.S. Higgins was not as capable. This assessment measured the equipment operational capability of each ship in order to quantitatively determine whether the ship's systems were performing in accordance with Navy requirements. The assessment results showed that the U.S.S. Higgins received an overall score of .70, while the U.S.S. Decatur received a score of .85. According to the Navy handbook, an equipment operational capability score of 1.0 indicates the equipment is fully capable of performing its function as designed, while a score of 0 indicates the equipment is totally unable to perform its function as designed. The handbook provides that any score between .70 and .80 indicates ship equipment is unable to attain optimum operational standards, while scores above .80 indicate ship equipment passes all operational tests. A further breakdown of the scores indicates the U.S.S. Higgins may have had more serious problems. The .70 score for the U.S.S. Higgins was derived from assessments of two categories of equipment: the combat system-related equipment and the hull, mechanical, and electrical systems-related equipment. The combat system-related equipment score for the U.S.S. Higgins was .77, while the U.S.S. Decatur received a score of .83. Since the combat system portion of the score was higher than the overall score for the U.S.S.
Higgins, the hull, mechanical, and electrical equipment score must have been below .70. According to the handbook, scores above .50 and below .70 indicate that equipment has significantly reduced output or restricted operability. By contrast, we found that the hull, mechanical, and electrical equipment score for the U.S.S. Decatur was at least .85, given an overall score of .85 and a combat system score of .83, which indicated that equipment was fully operable. Even though it concluded that the U.S.S. Higgins' condition was comparable, the Center recognized the importance of maintenance to the success of rotational crewing and proposed that the Navy further assess maintenance responsibilities, relationships, and costs. Specifically, the Center suggested that if Sea Swap becomes a more standard practice, "it will be necessary to conduct a holistic review of the overall maintenance process, including technical services and training." This review would assess the responsibilities and interrelationships among the many players, such as the ship's force, ship repair units, port engineers, and ship engineering maintenance teams. In addition, the Center added that the Navy should conduct a careful assessment to determine which maintenance support aspects are essential costs and which are dispensable. As of July 2004, the Navy had not started such an assessment. We found that the experience of other ships on extended deployments, such as patrol coastal ships, bore out the need for such an analysis. Patrol coastal ships did not receive focused maintenance comparable to that of the Sea Swap destroyers, and ship officials identified several maintenance problems aboard one or more ships (see table 3) that were not corrected while deployed and that could have affected their mission capability. Patrol coastal ships on extended deployments did not have extra in-theater maintenance support comparable to Sea Swap destroyers.
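The deficiency rates and equipment operational capability scores discussed above lend themselves to a quick arithmetic check. The sketch below is illustrative only: it assumes, purely for demonstration, that the overall score is an equally weighted average of the combat system score and the hull, mechanical, and electrical score. The Navy handbook's actual formula for combining category scores may weight them differently, so the back-calculated values are approximations, not assessment results.

```python
# Illustrative arithmetic only. The equal-weighting assumption below is
# ours for the sake of the sketch, not the Navy handbook's scoring formula.

def deficiency_rate(deficiencies: int, items_tested: int) -> float:
    """Percentage of tested items found deficient."""
    return 100.0 * deficiencies / items_tested

# Total ship readiness assessment figures reported above.
higgins_rate = deficiency_rate(697, 3370)  # about 21 percent
decatur_rate = deficiency_rate(465, 3231)  # about 14 percent

def implied_hme_score(overall: float, combat: float,
                      combat_weight: float = 0.5) -> float:
    """Back out the hull, mechanical, and electrical (HME) score from the
    overall score, assuming overall = w * combat + (1 - w) * hme."""
    return (overall - combat_weight * combat) / (1.0 - combat_weight)

# U.S.S. Higgins: overall .70 with combat .77 implies an HME score below .70.
# U.S.S. Decatur: overall .85 with combat .83 implies an HME score above .85.
higgins_hme = implied_hme_score(0.70, 0.77)  # 0.63 under equal weights
decatur_hme = implied_hme_score(0.85, 0.83)  # 0.87 under equal weights
```

More generally, under any weighting scheme in which the overall score lies between its component scores, an overall score below the combat system score implies that the hull, mechanical, and electrical score was lower still, which is the inference drawn above.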
For instance, patrol coastal ships did not have ship engineering maintenance teams to aid the crews in accomplishing maintenance. As a result, according to the maintenance support team coordinator for patrol coastal ships, routine continuous maintenance often could not be accomplished and, subsequently, the overall material condition of patrol coastal ships deployed overseas slowly degraded. The official explained that repairs authorized overseas are very narrow in scope and cover only maintenance absolutely necessary for the ship to conduct its mission. As a result, the official commented, organizational- and intermediate-level planned maintenance and preservation work are left to the crew and deployed maintenance support teams to take on over short periods in port, typically 5 days or less. In addition, a patrol coastal ship port engineer said that each forward-deployed patrol coastal ship had received about 4 weeks of maintenance in port over the previous 18 months, adding that this level of maintenance does not equal what a traditionally deployed patrol coastal ship would receive. Port engineers and other maintenance staff noted challenges in keeping the patrol coastal ships operationally ready. For instance, in our focus group discussions, patrol coastal ship crews explained that the rotating crane that launches and retrieves a ship's rigid inflatable boats broke down during a patrol, and the ship had to rely upon the Coast Guard to help with its repair. A Navy official also explained that patrol coastal ships on extended deployments have a very high operational tempo, which also impairs the ability of the ship's force to conduct organizational maintenance and increases the overall degradation of the ship over time.
The official stated that onboard maintenance efforts have been able to keep the patrol coastal ships running, but that the Navy will pay a heavy price once the ships return to homeport for extensive overhauls, since more serious repairs will be necessary.

The Challenge of Maintaining Ships on Extended Deployment Contributed to Crew Morale and Quality-of-Life Problems

Both the Center and we found that crews expressed concern about the extra workload they endured to maintain high ship readiness. Specifically, the Center concluded that while the Sea Swap demonstration showed a benefit for the Navy—saving dollars and increasing forward presence—many sailors spoke of the burdens and loss of traditions. According to the Center, Sea Swap crews performed more work and enjoyed fewer of the benefits and traditions that may have originally drawn them to the Navy. For instance, the Center's report noted that some Sea Swap crewmembers found the maintenance workload high throughout the entire deployment. Other complaints were that whenever the Sea Swap ships pulled into an Arabian Gulf port, other ships' sailors left on liberty while the Sea Swap crews remained on board doing maintenance. This intense maintenance schedule was a morale problem and a frequent topic during the Center's crew interviews. Our focus groups with Sea Swap destroyer crews identified similar concerns. For instance, extra maintenance work related to painting and preserving the ship was left to the ship's crew to accomplish. In addition, Sea Swap officers in our focus groups indicated that unreported work and high workloads disrupted sailor quality of life and that there was no increase in time or resources to get maintenance done. They also told us that more equipment inspections by in-theater support teams were needed while in port.
The officers explained that the ship's crew had to inspect and fix different equipment throughout the ship because in-theater support teams were not available. According to the Sea Swap officers and crew, this affected their quality of life, since liberty time was reduced to accommodate ship maintenance needs. Our focus groups with patrol coastal ship rotational crews also indicated that increased maintenance tasks and workloads adversely affected crew morale and quality of life. Patrol coastal ship senior chiefs told us that rotational crews had difficulty meeting ship preservation requirements, loading supplies, and documenting ship maintenance logs for nonworking items during port visits of 5 days or less. In addition, crewmembers on each rotational patrol coastal ship complained that they received no liberty ports; that all port visits became working ports due to the ship's maintenance needs; and that, given the small size of the ships, they needed time away from other crewmembers to decompress. Furthermore, a patrol coastal ship commanding officer said that his deployed patrol coastal ship placed too many maintenance demands on the crew and noted that the ship was maintenance-intensive from the day his crew took over.

Conclusions

Rotating crews aboard surface ships on extended deployments appears to be a feasible alternative to the traditional way the Navy operates, one that could enhance its effectiveness. Successfully overcoming issues that could impede using this alternative and gaining support for implementing this change require knowledge of the various rotational options and their impact on operational requirements, ship condition, and crew morale. However, the Navy has not taken several key steps that could help it better plan, manage, and monitor the implementation of this crewing approach and therefore may not realize its full potential. For example, the Navy has not established the analytical framework to evaluate all rotational crewing options and related costs.
In the absence of formal measurable goals, objectives, and metrics for assessing feasibility, cost, and other factors, including crew quality of life, the Navy does not have clear criteria for deciding when to use rotational crewing and which option best fits the situation. Furthermore, until the Navy more systematically collects data on current and potential surface ship rotational crewing options, including complete and accurate cost data for cost-effectiveness analyses, it will lack valuable information for making informed decisions about the potential for applying rotational crewing to current and future ships, as well as about whether it can get maximum return on investment and offset billions of dollars in future total ownership costs. The Navy's implementation of crew rotations also lacks effective guidance to ensure oversight and accountability. For example, the Navy does not provide guidance that specifies standard policies and procedures for rotating crews to ensure consistent management of and accountability for ship operations during crew rotations. Until it does, crews may continue to have problems consistently documenting ship condition and accounting for ship inventories during ship turnover, which could lead to additional work burdens on the on-coming crew and potentially affect readiness. Furthermore, without more formal guidance that specifies standard policies and procedures, built on systematically collected, recorded, and disseminated lessons learned from all rotational experiences, the Navy may repeat mistakes. Finally, the Navy does not know enough about the maintenance implications of using rotational crews as a means to extend ships' deployments. The Center for Naval Analyses noted in its report on the Sea Swap demonstration that if that option is to become a more standard practice, the Navy needs to further review the overall maintenance process.
However, until the Navy fully assesses the additional maintenance demands and related crew quality-of-life issues experienced by all ships implementing this crewing approach, and evaluates alternative maintenance strategies, it runs the risk that it will degrade the long-term condition of ships and discourage crew support for rotational crewing.

Recommendations for Executive Action

To ensure that the nation's multibillion-dollar investment in Navy ships yields the greatest possible benefits at the lowest possible total cost, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following four actions:

- Systematically evaluate the feasibility and cost-effectiveness of current and potential applications of several rotational crewing alternatives for its surface forces by establishing formal measurable goals, objectives, and metrics for assessing feasibility, costs, and other factors, including crew quality of life, and by systematically collecting and developing complete and accurate cost data, including ship total ownership costs, in order to perform accurate cost-effectiveness analyses.
- Provide guidance that specifies standard policies and procedures for rotating crews to ensure consistent management of and accountability for ship operations during the rotation.
- Systematically collect, record, and disseminate lessons learned pertaining to rotational crewing in the Navy Lessons Learned System to enhance knowledge sharing.
- Conduct a study of the maintenance processes used for all ships involved in rotating crews and examine, as part of the study, opportunities to mitigate the crews' concerns about maintenance workload to improve their quality of life.

Agency Comments

In written comments on a draft of this report, DOD agreed with the recommendations and cited actions it will take to implement them. DOD's comments are presented in their entirety in appendix IV.
We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Navy, and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-4402 or e-mail me at stlaurentj@gao.gov. Key staff members who contributed to this report are listed in appendix V.

Appendix I: Ships Included in Our Evaluation

Ohio-Class Ballistic Missile Submarine

Nuclear-powered Ohio-class ballistic missile submarines, also known as Trident submarines, provide the sea-based leg of the triad of U.S. strategic deterrent forces and the most survivable nuclear strike capability. There are 14 Ohio-class ballistic missile submarines in the fleet, homeported in Kings Bay, Georgia, and Bangor, Washington. Each submarine has about 15 officers and 140 enlisted personnel. The average procurement unit cost for each Ohio-class ballistic missile submarine is $2 billion (in fiscal year 2004 dollars). To maintain a constant at-sea presence, a Blue-Gold rotational crewing concept is employed on these submarines. Each ship has a "Blue" Crew and a "Gold" Crew, each with its own command. The ship deploys with one of these crews for 77 days, followed by a 2- to 3-day crew turnover and a 35-day maintenance period. For example, after a Blue Crew deployment, the Gold Crew takes command of the boat after a 3-day turnover process. The Blue Crew assists the Gold Crew in conducting maintenance repairs. During the Gold Crew's patrol, the Blue Crew stands down and enters a training cycle in its homeport.

Spruance-Class Destroyer

The DD-963 Spruance-class destroyer has multimission offensive and defensive capabilities, and it can operate independently or as part of other naval formations.
These ships were developed for the primary mission of antisubmarine warfare. Many were subsequently modernized with a vertical launch system and a Tomahawk cruise missile capability that greatly expanded the destroyer's role in strike warfare. The crew consists of 30 officers and 352 enlisted personnel. The average procurement unit cost is $489.6 million (in fiscal year 2004 dollars). The Pacific Fleet conducted Sea Swap rotational crewing with four ships of this class, with the U.S.S. Fletcher being the forward-deployed unit. The three other destroyers were decommissioned coincident with the crew exchange. That is, each on-coming crew decommissioned its ship prior to swapping with the off-going crew of the U.S.S. Fletcher. As a result, after their 6-month deployment, the off-going crewmembers dispersed to a variety of new assignments, just as if their own ship were being decommissioned. Further, the Spruance-class destroyer swap rotation was initially planned for three ships but was extended by adding a fourth destroyer. As a result, the U.S.S. Fletcher remained deployed for over 22 months. All of the Spruance-class destroyers will be decommissioned by the end of fiscal year 2006.

Arleigh Burke-Class Guided Missile Destroyer

The DDG-51 Arleigh Burke-class guided missile destroyers—first commissioned in July 1991, with primary homeports in San Diego and Norfolk—provide multimission offensive and defensive capabilities, operating independently or as part of other naval formations. The DDG-51 is equipped with the Aegis combat system, a vertical launching system for launching antiaircraft and Tomahawk missiles, and an advanced antisubmarine warfare system. Each destroyer has a crew of 23 officers and 300 enlisted personnel and an average procurement unit cost of $976 million (in fiscal year 2004 dollars). Navy plans call for a force of 62 Arleigh Burke-class guided missile destroyers. At the end of fiscal year 2004, this force will total 43 ships.
The Navy is using a Sea Swap rotational crewing system to rotate entire crews from one hull to another on selected ships in the Naval Surface Force Pacific Command's fleet of Arleigh Burke-class destroyers.

Cyclone-Class Patrol Coastal

The Cyclone-class patrol coastal ships are small Navy vessels used to conduct surveillance and shallow-water interdiction operations in support of maritime homeland security operations and coastal patrol of foreign shores. The Cyclone-class patrol coastal ship first entered service in 1993. The patrol coastal force consists of 13 ships—4 stationed in San Diego, California, and 9 in Little Creek, Virginia. The crew consists of 4 officers and 24 enlisted personnel. The average procurement unit cost is $19.4 million (in fiscal year 2004 dollars). The Navy is using a crew swap model in which the entire crew of 28 crewmembers rotates from one hull to another. The rotations are occurring between patrol coastal ships in the United States and those deployed in the Arabian Gulf to increase operation days and reduce transit times. Operational requirements have delayed the decommissioning of 8 ships and the transfer of 5 ships equipped with loading ramps to the Coast Guard.

High Speed Vessel Two (HSV-2) Swift

The HSV-2 Swift is a high-speed (almost 50 knots), wave-piercing, aluminum-hulled catamaran that was acquired as an interim mine warfare command and support ship and a platform for conducting joint experimentation, including Marine Corps sea basing. The Navy leased and accepted delivery of the catamaran from the builder, Incat Australia, in Australia, in August 2003. The Swift was leased for 1 year at a cost of $27 million, with a 4-year option ($58 million). The Swift employs two crews of 41 members each and uses the Blue-Gold crewing option. The Gold Crew is based out of the Naval Amphibious Base Little Creek, Norfolk, Virginia.
It operates the ship as a joint experimental platform with Marine Corps troops embarked, testing experimental and near-shore combat ship concepts. It also conducts special operations warfare. The Blue Crew is based out of Naval Station Ingleside, Texas. This crew operates the ship as a mine warfare command and control ship. The Mine Warfare Command is in charge of coordinating overall mission scheduling for the ship and crews. The crews are responsible for the ship, but not its mission equipment. Each command that brings modules aboard ship must supply personnel to operate the modules. The Swift operates on a nominal 117-day cycle (plus or minus 10 days), including a 3- to 4-day turnover between crews, with a 4-month on/4-month off cycle. Crew exchanges take place in the crews' respective homeports or at overseas locations.

Next Generation Guided Missile Destroyer, the DD(X)

The DD(X) is a next generation, multimission surface combatant ship tailored for land attack that has not yet been built. The Navy intends to operate the DD(X) independently or as part of other naval formations. The DD(X) is expected to provide precision firepower at long ranges in support of forces ashore using two 155-mm advanced gun systems and 80 vertical-launch system tubes for Tomahawk cruise missiles and other weapons. For fiscal year 2005, the Navy is requesting $221 million to begin building the first DD(X) and $1.2 billion for research and development for the program. The first ship is planned for delivery to the Navy in 2012. The Navy estimates that the first DD(X) will cost about $2.8 billion, including about $1.0 billion in detailed design and nonrecurring engineering costs for the class. The Navy earlier indicated it was planning to procure 24 DD(X) vessels through fiscal year 2017, before shifting to procurement of the next generation cruiser in fiscal year 2018.
Recently, however, the Navy indicated it might accelerate the start of the cruiser procurement to sometime between fiscal year 2011 and 2014 and reduce the number of DD(X) destroyers it intends to buy to between 10 and 16. Current DD(X) design planning anticipates a crew size of 125 to 175 persons. The procurement contract establishes the requirement to consider deploying ships up to 3 years and requires the design agent to conduct and complete an analysis of crewing options that would support extended forward deployments, including standard, Sea Swap, Horizon, and Blue-Gold crewing options. The contract also requires the design agent to ensure that the DD(X) system can be effectively operated with an optimized crew and provide the crew with the highest quality of life, while minimizing total ownership cost.

Littoral Combat Ship

The littoral combat ship—a new class of Navy surface combatants and the smallest member of the DD(X) family of next generation surface combatant ships—is intended to be fast, agile, stealthy, and affordable, using interchangeable mission modules tailored for specific missions, such as antisubmarine, antisurface, or mine warfare in heavily contested littoral, or near-shore, waters. The Navy's goal is to develop a platform that can be fielded in relatively large numbers to support a wide range of joint missions, with reconfigurable mission modules to assure access to the littorals for Navy forces in the face of threats from surface craft, submarines, and mines. It is also expected to have the capability to deploy independently to overseas littoral regions and remain on station for extended periods, either with a battle group or through at-sea replenishment. Baseline ship planning is for a single crew; rotational crewing concepts are being explored as a secondary option. Crew size is expected to range from 15 to 50 core crewmembers, not including the crew for the mission package.
The Navy plans to build 56 ships at an estimated cost of $20 billion, with the first to be delivered in fiscal year 2007. Each sea frame hull has an average unit cost of $147.5 million to $216.4 million (in fiscal year 2004 dollars). The mission modules' average procurement cost is $177 million (in fiscal year 2004 dollars) per ship set. The resulting average cost for a littoral combat ship platform is $324.6 million to $393.4 million (in fiscal year 2004 dollars).

Appendix II: Scope and Methodology

To assess whether the Navy has systematically evaluated the feasibility and cost-effectiveness of rotational crewing concepts for existing and future classes of surface ships, we interviewed Department of Defense (DOD) and Navy Headquarters and fleet officials, met with cost analysis experts in the government and the private sector, reviewed key acquisition documents and crew employment plans, and reviewed rotational crewing studies performed for and by the Navy. Studies we reviewed included:

- "Future Force Operational Plan," Executive Summary of the Horizon Concept Generation Team, Chief of Naval Operations Strategic Studies Group XVI (June 1997);
- "Crew Rotation: The MCM-1 Experience," Center for Naval Analyses (May 1998);
- "Alternative Naval Crew Rotation Operations," Center for Naval Analyses (October 2001);
- "Task Force Sierra Force Structure For The Future Phase One," Naval War College (undated);
- "Alternative Approaches to Meet New Operational Commitments," briefing by the Deep Blue Team, Chief of Naval Operations (undated);
- "Sea Swap," Warfare Analysis & Research Department, Naval War College (June 2003); and
- "Sea Swap Assessment," Center for Naval Analyses (September 2004).

We also conducted meetings with several of the commanding and executive officers of the Sea Swap destroyers, the HSV-2 Swift, and selected patrol coastal ships and strategic ballistic missile submarines.
To assess whether the Navy has effectively managed rotational crewing on surface ships and leveraged lessons learned, we visited Naval Surface Force Command, U.S. Pacific Fleet, San Diego, California; Submarine Group Nine Command, Bangor, Washington; Mine Warfare Command, Corpus Christi, Texas; and Amphibious Group Two Command, Norfolk, Virginia. We also met with officials from the Deputy Chief of Naval Operations for Naval Warfare (Plans, Policies, and Operations; Surface Warfare; and Submarine Warfare) to review Navy guidance and plans for conducting crew rotations. We also conducted over 40 focus group meetings with Navy officers and crews involved in crew rotations on the guided missile destroyer U.S.S. Higgins, selected ballistic missile submarines, the HSV-2 Swift, and selected patrol coastal ships (see page 58 for more information on the objective, scope, and methodology of the focus groups). Further, we reviewed Navy Lessons Learned System instructions and visited the Navy Warfare Development Command, Newport, Rhode Island, to query the Navy Lessons Learned System to determine recorded lessons learned pertaining to crew rotations. To assess how ship maintenance may impact implementation of rotational crewing, we reviewed relevant laws and Navy regulations pertaining to maintenance of U.S. Navy ships. We discussed ship material condition and associated sailor workload in over 25 focus groups with crews from the Sea Swap guided missile destroyers and from selected patrol coastal ships that had participated in crew rotations. We also obtained ship material condition assessments, called Total Ship Readiness Assessments, for the U.S.S. Higgins and the U.S.S. Decatur. 
We discussed the methodology and results of the assessments with officials from the Fleet Technical Support Center, San Diego, California; the Southwest Regional Maintenance Center, Commander Pacific Fleet, San Diego, California; the Naval Surface Warfare Center, Corona Division, Corona, California; and the Naval Surface Force Pacific, San Diego, California. We met with and obtained maintenance guidance and reports from Navy officials at Combined Fleet Forces Command, Norfolk, Virginia; Surface Force Atlantic, Norfolk, Virginia; Surface Force Pacific, San Diego, California; Commander U.S. Pacific Fleet, Honolulu, Hawaii; Amphibious Group Two Command, Little Creek, Virginia; and maintenance experts in the Offices of the Assistant Secretary of the Navy (Research, Development and Acquisition) and the Chief of Naval Operations, Washington, D.C. We also obtained written responses to our questions from U.S. Naval Forces Central Command. In addition, we reviewed the Center for Naval Analyses' Sea Swap Assessment report and discussed the report's findings with officials from the Center. To compare reenlistment rates for crews on Sea Swap guided missile destroyers and non-Sea Swap guided missile destroyers in the U.S. Pacific Fleet, we obtained Unit Honor Roll reports, derived from the Enlisted Master File, from the Commander, U.S. Pacific Fleet, Honolulu, Hawaii. We did not analyze Spruance-class destroyer data for two reasons: (1) we did not conduct focus groups with these crews and (2) the rotational crewing experience was not as complete or complicated as that experienced by crews on Arleigh Burke-class guided missile destroyers. Based upon discussions with Pacific Fleet officials, we also excluded selected ship crews from our non-Sea Swap guided missile ship analysis because we wanted the ships we analyzed to reflect the standard ship and crew option as closely as possible.
The ships and crews we excluded were: (1) precommissioning crews because of their small sample sizes and nondeployed status, (2) the U.S.S. Milius and its crew because it was an optimal manning experiment ship, and (3) the U.S.S. Paul Hamilton because this crew was on an extended, 10-month deployment. We compiled reenlistment averages for the ships we analyzed in 6-month blocks that roughly corresponded with Sea Swap guided missile destroyer program and crew deployments, beginning November 1, 2001, and ending on April 30, 2004, and that included pre-deployment, deployment, and post-deployment data for these crews. While we did not validate the casualty report and sailor reenlistment data used in this report, we discussed the data with DOD officials and determined that the data were sufficiently reliable for our analysis. We did validate the Navy Lessons Learned System data and determined the data were sufficiently reliable for our analysis. We conducted our review from July 2003 through July 2004 in accordance with generally accepted government auditing standards.

Focus Groups with Crews on Rotational Crewing Ships

We conducted focus group meetings with Navy submarine and ship officers and enlisted personnel who were involved in crew rotations. Focus groups involve structured small group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. As with typical focus group methodologies, our design included multiple groups with varying group characteristics but some homogeneity, such as rank and responsibility, within groups. Each group involved 7 to 10 participants. Discussions were held in a structured manner, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences.
Our overall objective in using a focus group approach was to obtain the views, insights, and feelings of Navy submarine and ship officers and enlisted personnel involved in crew rotations.

Scope of Our Focus Groups

To gain broad perspectives, we conducted over 40 separate focus group sessions with multiple groups of Navy ship officers and enlisted personnel involved in crew rotations on the guided missile destroyer U.S.S. Higgins, selected ballistic missile submarines, the HSV-2 Swift, and selected patrol coastal ships. Table 4 identifies the composition of the focus groups on each of the vessels. Across focus groups, participants were selected to ensure a wide distribution of officers, enlisted personnel, seniority, and ship departments. GAO analysts traveled to each naval station to conduct the majority of the focus groups. Six of the focus groups were conducted on board the U.S.S. Higgins while it transited to its homeport after its extended deployment.

Methodology for Our Focus Groups

A guide was developed to assist the moderator in leading the discussions. The guide helped the moderator address several topics related to crew rotations: training, maintenance, infrastructure and operations, management and oversight, readiness, crew characteristics, quality of life, lessons learned, and overall satisfaction with the rotational crewing experience. Each focus group discussion began with the moderator describing the purpose of our study and explaining how focus groups work. Participants were assured that their responses would remain anonymous: names would not be linked to responses in write-ups of the sessions, and all responses for each session would be summarized. The participants were then asked open-ended questions about the impact of crew rotations on each of the topics. All focus group sessions were moderated by a GAO analyst, assisted by a GAO subject matter expert, while two assistants took notes.
Content Analysis

We performed a systematic content analysis of the open-ended responses in order to categorize and summarize participants' experiences with crew rotations. Based on the primary topics developed in the focus group guide, individual GAO analysts reviewed the responses from one of the crews and created their own respective lists of subcategories within each of the primary focus group topics. The analysts then met collectively to generate a proposed list of primary topic categories and subcategories. To ensure inter-rater reliability, one of our analysts reviewed the responses from each vessel type and assigned each comment to a corresponding category. A second analyst then reviewed each response and independently assigned it to a category. Any comments that the two analysts assigned to different categories were discussed, reconciled, and adjudicated, with each comment ultimately placed into one or more of the resulting categories; agreement on each placement was reached between at least two analysts. The responses in each category were then used in our evaluation of how effectively the Navy has managed rotational crewing and of the effect of overseas maintenance during extended deployments on ships homeported in the United States.

Limitations of Focus Groups

Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates.
Instead, they are intended to generate in-depth information about the focus group participants' reasons for the attitudes held toward specific topics and to offer insights into the range of concerns about and support for an issue. The projectability of the information produced by our focus groups is limited for several reasons. First, the information represents the responses of Navy ship officers and enlisted personnel from more than 40 selected groups. Second, while the composition of the groups was designed to ensure a distribution of Navy officers, enlisted personnel, seniority, and ship departments, the groups were not randomly sampled. Third, participants were asked questions about their specific experiences with crew rotations. The experiences of other Navy ship officers and personnel involved in crew rotations, who did not participate in our focus groups, may have varied. Because of these limitations, we did not rely entirely on focus groups, but rather used several different methodologies to corroborate and support the conclusions for our second and third objectives.

Appendix III: Summary List of Department of the Navy Guidance Implementing 10 U.S.C. 7310

Department of the Navy guidance that relates to the implementation of Title 10, United States Code, section 7310(a) restrictions on overseas maintenance, or that defines terms used in the law, is noted below.

Chief of Naval Operations

Chief of Naval Operations Instruction 4700.7K (July 2003), “Maintenance Policy for U.S. Navy Ships,” defines voyage repairs as “corrective maintenance of mission- or safety-essential items necessary for a ship to deploy or to continue on its deployment.”

Naval Sea Systems Command

Naval Sea Systems Command Fleet Modernization Program Management and Operations Manual (June 2002, Rev.
2), SL720-AA-MAN-010, Glossary, defines voyage repairs as “emergency work necessary to repair damage sustained by a ship to enable the ship to continue on its mission and which can be accomplished without requiring a change in the ship’s operating schedule or the general steaming notice in effect.”

Military Sealift Command

Commander Military Sealift Command Instruction 4700.15A (February 2, 2000), “Accomplishing Ship Repair in Foreign Shipyards,” states that voyage repairs include corrective maintenance on mission- or safety-essential items necessary for a ship to deploy, to continue on its deployment, or to comply with regulatory requirements, as well as scheduled maintenance, but only to the extent that such maintenance is absolutely necessary to ensure machinery and equipment operational reliability or to comply with regulatory requirements. Under the instruction, voyage repairs do not include corrective maintenance actions that may be deferred until the next scheduled regular overhaul and drydocking availability in the United States or Guam without degrading operational readiness, habitability standards, or personnel safety, or adversely affecting regulatory compliance.

Appendix IV: Comments from the Department of Defense

GAO’s Comments

The following are GAO’s comments on the Department of Defense’s letter dated October 25, 2004.

1. We have added a discussion of the methodology we used in our Sea Swap destroyer reenlistment analysis. See appendix II.
2. No change needed in report.
3. No change needed in report.
4. We agree that expanded scope predeployment inspections and maintenance for ships scheduled for extended deployments are prudent. We also agree that ships scheduled for extended deployments would benefit from a clearly defined process to delineate those increased requirements.
5. Our report noted that increased maintenance tasks contributed to diminished crew morale.
We agree with DOD’s comment that many other factors also contributed to the diminished morale of sailors crewing rotational crewing ships.
6. Our report did not recommend revising Title 10 requirements.
7. No change needed in report.
8. No change needed in report.
9. No change needed in report.
10. No change needed in report.
11. No change needed in report.

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

Jim Bancroft, Kelly Baumgartner, Larry Bridges, Lee Cooper, Corrie Dodd-Burtch, Joseph Kirschbaum, Kate Lenane, Elizabeth Morris, Richard Payne, Charles Perdue, Terry Richardson, Roderick Rodgers, Bill Russell, Rebecca Shea, Jennifer Thomas, Julie Tremper, John Van Schaik, and R.K. Wild made key contributions to this report.

GAO’s Mission

The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.

Obtaining Copies of GAO Reports and Testimony

The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
The Navy has traditionally maintained overseas presence by deploying ships for 6 months. Rotating crews aboard ships that remain deployed for longer periods is an alternative the Navy could pursue to increase the utilization of its ships. Senior Navy officials have also cited crew rotations as a way to scale back part of the Navy's plans for a larger force structure, reportedly freeing billions of dollars for other priorities. On its own initiative, GAO examined the Navy's efforts to evaluate and implement several rotational crewing options and the impacts of ship maintenance on extended rotational crewing deployments. The Navy has initiated change by demonstrating that rotating crews aboard surface ships on extended deployments may be a feasible alternative to traditional 6-month ship deployments. To effectively institutionalize and implement change, best practices show that a comprehensive analytical framework provides useful information to decision makers. However, the Navy has not established such an analytical framework, consisting of formal, measurable goals, objectives, and metrics, that could be used to assess the feasibility of various rotational crewing options and determine their impact on operational requirements, ship condition, and crew morale. Further, the Navy has not systematically collected or developed accurate cost data to perform complete cost-effectiveness analyses. Absent such information, the Navy may not know the full impact of rotating crews on surface ships, the extent to which the various options should be implemented, or whether it is getting the maximum return on its investment. Because rotating crews on surface ships is evolving as an alternative, the Navy has not provided effective guidance when implementing the practice and has not systematically leveraged lessons learned. Effective guidance and sharing of lessons learned are key tools used to institutionalize change and facilitate efficient operations.
While the Navy has well-established crew rotation policies and procedures for ballistic missile submarines that include appropriately documenting a ship's condition and turnover procedures for accountability, it has not provided comparable guidance to surface ships. As a result, the Navy unnecessarily risks repeating mistakes that could decrease warfighting effectiveness and crew morale. Furthermore, the impact of ship maintenance on the implementation of rotational crewing has not been fully assessed. Effective maintenance strategies help ensure ships can perform their missions without adverse impacts on crew morale. It is a challenge to ensure the mission capability of ships that are deployed for longer periods because most maintenance and repair is usually completed between 6-month deployments. While rotating crews has enabled the Navy to keep ships deployed for up to 24 months, the service has not fully examined all issues related to the best maintenance strategies that could affect a ship's condition and crew's morale. Absent effective strategies, the Navy risks degrading long-term ship condition and discouraging crew support for rotational crewing.
Background

The primary objective of the Chief Financial Officers (CFO) Act of 1990 (Public Law 101-576) is to improve the financial management of federal agencies. Among the specific requirements of the CFO Act is that each agency CFO develop an integrated agency accounting and financial management system, including financial reporting and internal controls. Such systems are to comply with applicable principles and standards and provide complete, reliable, consistent, and timely information that is responsive to the agency’s financial information needs. DOD’s Financial Management Regulation also specifies that the department’s CFO is to develop and maintain an integrated DOD accounting and financial management system, including financial reporting and internal controls, that provides for the integration of accounting and budgeting information. In addition, the Joint Financial Management Improvement Program (JFMIP) Framework for Federal Financial Management Systems states that the financial management system should not only support the basic accounting functions for accurately recording and reporting financial transactions, but must also be the vehicle for the integrated budget, financial, and performance information that managers use to make decisions on their programs. Further, the Federal Financial Management Improvement Act (FFMIA) of 1996 requires agencies to implement and maintain financial management systems that substantially comply with federal financial management systems requirements, applicable accounting standards, and the standard general ledger. The legislative history of FFMIA expressly refers to JFMIP requirements and Office of Management and Budget (OMB) Circular A-127 as sources of the financial management systems requirements.
JFMIP’s Framework for Federal Financial Management Systems defines an agency’s financial management operations to encompass the relationships among the program delivery financing function, the budget formulation/transaction tracking function, and the financial accountability function. The integration of systems is a key element to achieving these functional relationships. OMB Circular A-127, Financial Management Systems, requires federal agencies to “. . . establish and maintain a single, integrated financial management system.”

Benefits of a Single Integrated Financial Management System

A single integrated financial management system ensures that adequate financial controls are in place through the linkage of the budget formulation, financial accountability, and transaction processes. In addition, an integrated financial management system provides for improvements in efficiency, including reductions in the potential for errors and rework. Figure 1 is a simplified example of how a single integrated financial management system for asset acquisition can help achieve greater control and accountability. As shown in this example, integration would allow contract data to be entered initially by acquisition personnel when an asset is ordered. This information would be available to accounting personnel to record the obligation and to property management personnel to recognize that an asset is to be delivered. Upon asset receipt, property management personnel enter the asset in property management records. Those records are available to accounting personnel for payment purposes, to acquisition personnel to monitor contract delivery, and to property management personnel to monitor program results and the use of budgetary resources. Under a single integrated financial management system, greater asset control and accountability are achieved because data associated with assets acquired are available simultaneously to accounting, property management, and acquisition personnel.
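The data-sharing idea behind the figure 1 example can be sketched in a few lines of code. The following Python sketch is purely illustrative; the record structure, function names, and sample data are hypothetical and do not represent any actual DOD system:

```python
# Illustrative sketch of a single integrated record for an asset
# acquisition. All names, fields, and data are hypothetical; they do
# not represent any actual DOD system.

class AssetRecord:
    """One shared record visible to acquisition, accounting, and
    property management personnel."""
    def __init__(self, contract_id, description, cost):
        # Entered once by acquisition personnel when the asset is ordered.
        self.contract_id = contract_id
        self.description = description
        self.cost = cost
        self.obligation_recorded = False  # set by accounting
        self.received = False             # set by property management
        self.paid = False                 # set by accounting

records = {}  # the single, shared data store

def order_asset(contract_id, description, cost):
    """Acquisition enters contract data; accounting records the
    obligation from the same record."""
    records[contract_id] = AssetRecord(contract_id, description, cost)
    records[contract_id].obligation_recorded = True

def receive_asset(contract_id):
    """Property management marks receipt; the update is immediately
    visible to accounting (for payment) and acquisition (for delivery)."""
    records[contract_id].received = True

def pay_for_asset(contract_id):
    """Accounting pays only for assets the shared record shows as
    received, which is the control the integrated design provides."""
    record = records[contract_id]
    if record.received:
        record.paid = True
    return record.paid

order_asset("C-001", "radio set", 12_500)
receive_asset("C-001")
print(pay_for_asset("C-001"))  # True: all three functions saw the same data
```

Because every function reads and writes the same record, there is no separate data set to reconcile; that is the control advantage the report attributes to a single integrated system.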
Alternatively, in the absence of a single integrated system, OMB Circular A-127 permits a unified set of financial systems that are planned for and managed together, operated in an integrated fashion, and linked together electronically in an efficient and effective manner to provide agencywide financial system support necessary to carry out an agency’s mission and support its financial management needs. Under a unified integrated system, data reside in multiple systems that are linked by interfaces. Data integrity can be ensured by compensating controls, such as reconciliation. For example, if property management records do not include data on an asset for which the accounting records indicate payment has been made, the necessary steps can be taken to determine whether the asset was in fact acquired and where it is currently located, or whether the accounting records need to be corrected. DOD’s audited financial statements prepared under the CFO Act provide for an annual scorecard on the department’s progress in resolving its financial management deficiencies. To date, DOD has not passed this test of its ability to produce reliable financial information. The most recent audits of DOD’s financial statements identified pervasive weaknesses across virtually the full spectrum of the department’s systems and controls, including material weaknesses in DOD’s ability to (1) maintain accountability and control over virtually every category of physical assets, including military equipment; (2) account for the full cost of its operations and the extent of its liabilities; and (3) properly record and reconcile disbursements, a weakness that has resulted in numerous erroneous and several fraudulent payments. Correcting the department’s long-standing systems weaknesses will be critical if DOD is to resolve these serious financial management weaknesses.
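The compensating reconciliation described above, comparing accounting payment records against property management records, can be sketched as follows. The system names, field layout, and sample data are hypothetical, chosen only to illustrate the control:

```python
# Illustrative reconciliation between two separate, interfaced systems:
# accounting payment records and property management records. All data
# and identifiers are hypothetical.

payments = {            # accounting system: contract id -> amount paid
    "C-001": 12_500,
    "C-002": 48_000,
}
property_records = {    # property management system: contract id -> asset
    "C-001": "radio set",
}

def reconcile(payments, property_records):
    """Return contract ids for which payment was made but no asset
    appears in the property records; each is an exception to research."""
    return sorted(cid for cid in payments if cid not in property_records)

exceptions = reconcile(payments, property_records)
print(exceptions)  # ['C-002']: determine whether the asset was acquired
                   # and where it is, or correct the accounting records
```

In a unified (rather than single integrated) architecture, a routine exception report of this kind is the compensating control that substitutes for the shared record of figure 1.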
Until the department can successfully integrate its information systems, its ability to efficiently and effectively maintain accountability over its vast resources, prevent wasted resources, and achieve broader management reforms will continue to be impaired.

Concept of Operations Key to Improving Financial Management

Another key element of improving an agency’s financial management is the development of a high-level description of how it carries out its financial management responsibilities—a concept of operations. Congress emphasized the importance of this step by including this requirement in DOD’s fiscal year 1998 authorization act. A concept of operations defines how an entity’s operations are (or will be) carried out. It includes a high-level description of the operations that must be performed and who must perform them. As we noted in a June 1997 letter, for the concept of operations to be useful, it should encompass (1) all of DOD financial management, not just the finance and accounting functions performed by the Defense Finance and Accounting Service (DFAS), and (2) both current and future financial management operations, to document how the department works today and to obtain mutual agreement from all parties on how DOD will conduct its financial management operations in the future.

First Biennial Plan Important Step Toward Improving Financial Management

In developing its concept of operations as part of its Biennial Plan, DOD has taken an important step toward improving its financial management operations. DOD has reported, for the first time, the importance of the programmatic functions of personnel, acquisition, property management, and inventory management to the department’s ability to support consistent, accurate information flows to all information users.
Specifically, the department’s Biennial Plan recognizes that approximately 80 percent of its financial data is derived from program functions and identifies the integrity of these data as critical to the success of future financial improvement efforts. Recognizing the root of the problem is the first step toward finding the appropriate solution. DOD’s Biennial Plan is an ambitious undertaking. The 1998 authorizing legislation requires that the department’s Biennial Plan address all aspects of financial management. The Biennial Plan encompasses over 900 pages of text and provides information on over 200 separate financial management improvement initiatives. In addition, according to DOD, the Biennial Plan incorporates the department’s response to the annual financial reporting requirements specified in other legislation, including the following: the CFO Act requirement for a CFO Five Year Plan, the Federal Financial Management Improvement Act requirement for a Remediation Plan for correcting systems deficiencies, and the Federal Managers’ Financial Integrity Act requirement for a Statement of Assurance for the agency’s financial management systems. The DOD Inspector General, in the DOD financial statement audit report, must provide an opinion on whether the Biennial Plan satisfies the FFMIA requirements for a systems remediation plan. For the purposes of this analysis, we did not evaluate whether the plan meets the other legislative requirements. Because of the range and amount of detailed information contained in the department’s Biennial Plan, it is divided into two volumes. Volume I of the Biennial Plan includes an executive summary followed by three main sections on the concept of operations, the current environment, and the transition plan, which is intended to describe the department’s goals for achieving the target financial management environment and to identify the strategies and corrective actions necessary to move through the transition.
A section on the 12 specific topics that are required to be addressed is also included. Volume II of the Plan provides information on the specific financial management improvement initiatives that, according to the department, are intended to implement the transition plan. The initiatives include improvements to existing systems, development of new systems, and studies to develop strategies and goals for specific problem areas.

DOD’s Concept of Operations Missing Key Elements

In its Biennial Plan, DOD stated that the purpose of its concept of operations was to describe how the department will structure and manage financial operations in the future to be in compliance with applicable regulatory requirements. The Biennial Plan further stated that the department will use this concept of operations to guide the evolution of its financial management policies, systems, functions, and improvement initiatives by specifying the target environment needed to meet regulatory requirements and produce auditable financial statements. However, the concept of operations does not address two critical elements that are necessary for producing sustainable financial management improvement over the long term. Specifically, the concept of operations does not address (1) how the department’s financial management operations effectively support not only financial reporting but also asset accountability and control and (2) budget formulation. First, the department’s concept of operations does not clearly address the department’s fundamental financial management responsibilities for asset accountability and control. DOD’s concept of operations appears to focus primarily on financial reporting and the information needed from program managers to prepare auditable financial statements. The flow of information among functional areas, such as how acquisition will provide information to property management, is not clear. This flow of information helps promote accountability.
Maintaining financial accountability over DOD’s assets is an area of continuing concern because, in the current environment, the department must rely on fundamentally weak controls. DOD currently obtains the data needed by the department’s accounting personnel for financial reporting from its property management systems after items are received and entered into those systems. There is no reconciliation of that information with acquisition and payment data. Without such reconciliations, DOD’s ability to maintain effective asset accountability and control is impaired. While acknowledging the importance of integration, the plan enumerates the costs and disadvantages of integration. These include (1) data structures would need to be standardized across integrated systems, (2) maintenance of shared data must be timely and well executed since many integrated systems may be affected, and (3) business processes, procedures, and practices must be modified commensurate with the integrated network. These could all be viewed as advantages of integration, and including them as disadvantages sends mixed messages about the department’s intentions to integrate its systems. Defining the needed integrated relationships is vital to ensuring that adequate financial controls not only facilitate financial reporting but also help maintain effective asset accountability. As stated, under an integrated system, DOD’s accounting and logistics functions would obtain data on asset acquisitions from the department’s acquisition community. These data could then be reconciled with subsequent logistics records as the assets are placed in service at DOD locations around the world. Second, the department’s concept of its financial operations does not include the budget formulation processes. The DOD plan states that it intentionally excluded budget formulation because it is performed as part of the department’s Planning, Programming, and Budgeting System (PPBS).
However, budget formulation is one of the central processes involved in any agency’s financial management operations and must be included in the department’s concept of operations if DOD is to develop a fully integrated financial management system. One of the primary goals of the CFO Act is to better integrate budget and accounting information. The CFO Act requires each agency CFO to monitor budget execution and to develop and maintain systems that integrate accounting and budget information. The integration of budget formulation with budget execution and accounting is necessary to help ensure that budgets consider financial implications and that policy decisions are based on sound financial information. Furthermore, JFMIP’s Framework document identified the integration of budgeting and accounting as the first step to establishing a firm financial management information foundation. Such integration would provide a record of historical costs and performance data that is key to reliably estimating future costs. Therefore, it is important for DOD to determine how actual cost and financial management data from other systems will flow to PPBS, which incorporates DOD’s budget formulation systems and processes, and be used in the budget process. DOD stated in its Government Performance and Results Act Annual Performance Plan for Fiscal Year 1999 that it will use existing data systems and reports supporting the PPBS process to verify and validate performance information. However, as discussed in a June 1998 report on the results of our review of DOD’s Annual Performance Plan for Fiscal Year 1999, the DOD performance plan does not address known system deficiencies. For example, we previously reported that weaknesses in the Army’s systems used to account for and control major equipment items and real property adversely affected its ability to make reliable budget requests for procurement and real property maintenance.
However, because budget formulation is excluded from DOD’s concept of operations, the concept does not discuss how PPBS will be supported by these systems and how known deficiencies will be addressed.

Transition Plan Does Not Provide a Road Map From the Current Environment to DOD’s Future Financial Management System

The transition plan, while an ambitious statement of DOD’s planned improvement efforts, has two important limitations: (1) clear links are not provided between the envisioned future operations and the numerous planned improvement initiatives, making it difficult to determine whether the proposed transition will result in the target financial management environment, and (2) actions to ensure feeder systems’ data integrity are not addressed.

Links Not Fully Described

A vital part of any transition plan is a description of how the specific initiatives in the plan bridge the gap between the current environment and the envisioned environment. Thus, describing how the current environment operates is an important step in being able to choose and implement the improvement initiatives. In other words, DOD needs to know where it stands now to help it map out how it will get to its final destination—improved financial management. The plan’s discussion of the current environment included key information such as (1) the roles and responsibilities of the Under Secretary of Defense (Comptroller), DFAS, the military departments, the defense agencies, and the DOD management oversight structure; (2) the operational structure of finance and accounting, including DFAS functions, the military departments’ and defense agencies’ finance and accounting functions, and the DOD technical supporting structure; and (3) the status of impediments to auditable financial statements, including inadequate program feeder and core systems. In addition, Volume II of the department’s Biennial Plan includes overview information on over 200 specific initiatives.
However, Volume II does not discuss how each of these discrete initiatives will contribute to DOD’s ability to achieve its envisioned concept for its financial management operations. In addition, the transition plan generally does not provide a high-level description of how information currently flows from one function to another. While DOD’s planned concept of operations and transition plan are organized by function, information by function on the current environment is generally not included. These omissions make it difficult to track from the current environment to the target financial management environment by function and to determine how the many initiatives included in the transition plan will move the department from the “as is” to the future. A clear link between the department’s envisioned concept for its financial operations and each of its specific initiatives by function will be essential if DOD is to ensure that each of these initiatives receives the proper priority attention and resources. However, based on information included in the various sections of the Biennial Plan, we were able to identify one example of a specific function for which DOD depicted how it was planning to move from the current environment to the target environment. This type of high-level description of how the department plans to move from its current “as is” to the envisioned future financial environment could serve as a model for the department’s other functional areas. Specifically, the type of high-level “road map” provided in the plan for the transition from the department’s “as is” contract payment function to its envisioned operation of that function is illustrated by excerpts from the plan shown in figures 2 through 4. Figure 2 illustrates the transition of the contract payment function to the target procurement payment system. Figure 2 indicates that there are 16 existing systems supporting the contract payment function. 
The figure illustrates that DOD will move from the 16 existing systems to 8, and finally to a single contract payment system, the Defense Procurement Payment System (DPPS). Figure 3 shows DPPS as part of the target environment and illustrates how data from DPPS will become part of the DFAS database and will be available to run DFAS’ accounting and finance applications. Figure 4, which is included in DOD’s concept of operations, provides a different view of the planned financial and accounting structure illustrated in figure 3. This is one instance in the plan where users can follow a specific function from the current environment through DFAS’ planned financial and accounting system architecture to the DFAS Corporate Database as illustrated and discussed in the concept of operations. However, even for this one instance, the high-level information lacks some key details. For example, the plan does not identify the 16 existing systems illustrated in figure 2, their owners, or where they operate. Therefore, we could not determine whether these systems are included in the inventory, although the plan acknowledges that it is critical that an accurate inventory be maintained of all feeder systems required to provide program data to DOD’s financial management systems. The plan indicates that there are currently 109 finance and accounting systems and 83 feeder systems for a total of 192 DOD financial management systems. The plan states that the 109 finance and accounting systems will be reduced to 32 by fiscal year 2003. However, the transition plan does not provide a clear description of what systems will be eliminated and how—even at a high level—nor does the plan discuss how the number of feeder systems will be reduced.
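The kind of traceability the transition plan lacks can be made concrete. The sketch below is illustrative only: aside from DPPS, the system names are hypothetical placeholders, not drawn from the plan. It shows how a migration inventory can record a disposition for every legacy system and flag any system whose path to the target environment is undefined.

```python
# Illustrative sketch only: the Biennial Plan does not identify the legacy
# systems or their dispositions; system names below (other than DPPS) are
# hypothetical placeholders, not drawn from the plan.

# Each legacy system maps to an interim consolidation target, and each
# interim system maps to the planned end-state system.
legacy_to_interim = {
    "LegacyPay-A": "InterimPay-1",
    "LegacyPay-B": "InterimPay-1",
    "LegacyPay-C": "InterimPay-2",
}
interim_to_target = {
    "InterimPay-1": "DPPS",
    "InterimPay-2": "DPPS",
}

def trace(legacy_system: str) -> list:
    """Return the migration path for a legacy system, or raise KeyError if
    the inventory leaves its disposition undefined (the gap GAO describes)."""
    interim = legacy_to_interim[legacy_system]
    target = interim_to_target[interim]
    return [legacy_system, interim, target]

def undocumented(inventory: list) -> list:
    """Legacy systems in the inventory with no recorded disposition."""
    return [s for s in inventory if s not in legacy_to_interim]

print(trace("LegacyPay-A"))                          # ['LegacyPay-A', 'InterimPay-1', 'DPPS']
print(undocumented(["LegacyPay-A", "LegacyPay-X"]))  # ['LegacyPay-X']
```

An inventory of this form would let readers verify, system by system, how the 109 finance and accounting systems shrink to 32 and what is planned for each of the 83 feeder systems.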
Actions Not Defined for Ensuring Data Integrity

DOD’s Biennial Plan itself acknowledges the importance of reliable data from its program feeder systems:

“As an estimated 80 percent of the data needed for financial management come from program systems, the use of modern, fully-integrated, and fully-interfaced program feeder systems is necessary for the Department to be able to provide its managers with the information they need to make informed decisions. The current use of a variety of non-integrated databases precludes the easy or reliable interfacing of information from program functional areas (i.e., personnel, acquisition, and logistics) with the Department’s core finance and accounting systems.”

The department has also acknowledged problems with the accuracy of data from these feeder systems. In addition, financial statement audit reports have confirmed significant problems with the accuracy of the data produced by the department’s supporting logistical, budgetary, and program operating systems. However, the department’s transition plan does not explicitly address how these acknowledged significant feeder system data integrity problems will be resolved. Ideally, data should be validated at the original point of entry to ensure that only accurate, complete data are entered into all systems that subsequently process those data. Without identifying specific actions that will ensure feeder system data integrity, it is unclear whether the department will be able to effectively carry out not only its financial reporting, but also its other financial management responsibilities.

Further Details Will Be Needed to Evaluate the Workability of DOD’s Planned Financial Management Environment

Certain additional detailed information would be necessary to determine whether implementation of the department’s future financial management environment is “workable”—that is, whether the planned environment is practical, cost-effective, and feasible.
Such details are not within the scope of the high-level, strategic financial management improvement plan that DOD was asked to provide and we were asked to analyze. The additional, detailed information that would be necessary includes the systems architecture, which comprises logical and technical components. DOD officials have stated that they recognize that additional information will be necessary and that they are developing further details on these issues. The Congress and OMB have recognized the importance of a systems architecture. For example, the Clinger-Cohen Act of 1996 requires that department-level Chief Information Officers develop, maintain, and facilitate integrated system architectures. Also, in an October 25, 1996, memorandum, “Funding Information Systems Investments,” the Director of OMB stated that “investments in major information systems proposed for funding in the President’s budget should be consistent with Federal, agency, and bureau information architectures which: integrate agency work processes and information flows with technology to achieve the agency’s strategic goals . . .” As we have described in other reports, the purpose of the logical architecture is to ensure that systems meet the business needs of an organization. Therefore, the logical architecture should provide the further detail that fleshes out DOD’s concept for its financial management operations. For example, while the concept of operations may describe, at a high level, how acquisition must share information with accounting and logistics, the logical model would, among other things, describe the specific data and how the data will be manipulated. For each business function required to carry out the mission, it defines the specific information needed to perform the function and describes the individual systems that produce the information. In addition, an essential element of the logical architecture is the definition of specific information flows.
After the logical architecture is defined, DOD will have an understanding of both its portfolio of desired systems and how these systems will collectively carry out the department’s objectives. A technical architecture is necessary to detail specific information technology and communications standards and approaches that will be used to build systems, including those that address critical hardware, software, communications, data management, security, and performance characteristics. The purpose of a technical architecture is to ensure that systems are interoperable, function together efficiently, and are cost-effective over their life cycles.

Conclusions

Until DOD amends its Biennial Plan to incorporate budget formulation and functional information sharing, the Congress will have little assurance that DOD’s efforts to reform its acknowledged deficient financial operations are likely to be successful. Ensuring that accounting data are used to formulate budgets and that program information is shared among functional areas is a fundamental concept that underpins an effective financial management structure. Further, until DOD precisely documents its current environment, clearly links its initiatives to bridge the gap between its current environment and its concept of how it intends to operate in the future, and develops initiatives to address feeder systems’ data accuracy problems, the Congress cannot be sure that DOD has a workable, clear transition plan to achieve its vision. Finally, further details will be needed to assess whether implementation of DOD’s envisioned future concept for its financial management operations is practical, cost-effective, and feasible, including documentation of the logical and technical architectures that will support its future concept of operations.
Recommendations

In order to help ensure that DOD’s first-ever Biennial Plan provides a sound foundation for fundamentally reforming the department’s financial management operations, we recommend that the Secretary of Defense take the following immediate actions to develop and issue a supplemental plan:

- Revise the concept of operations to reflect, at a high level, the full range of the department’s financial management operations, including its key asset accountability and budget formulation responsibilities.
- Describe how, at a high level, data will be shared among the various DOD functional areas to ensure that the benefits of full systems integration will be realized in accordance with relevant legislative requirements and JFMIP guidance.
- Clarify the role of each of the described initiatives in bridging the gap between the current environment and the envisioned future concept of operations.
- Identify the steps the department will take to ensure that it will build reliability into the data provided by its feeder systems.

Agency Comments and Our Evaluation

In commenting on a draft of this report, the Under Secretary of Defense (Comptroller) indicated that DOD took issue with each of the report’s major findings and with all of the recommendations. The department’s comments reflect a basic disagreement with us over the role and definition of financial management and how this function should support various critical program functions. Our views on the scope and requirements of accounting, finance, and feeder systems are fully supported by the mandates and goals of the CFO Act of 1990 and the Federal Financial Management Improvement Act of 1996, as well as by OMB and JFMIP guidance, reports, and pronouncements.
In its overall comments on a draft of this report, DOD stated that it appreciated the recognition that the report provides regarding the magnitude of the effort that the department expended in the preparation of the plan and the challenges that the department will face in implementing its ambitious financial management reform initiatives. However, DOD disagreed that the Biennial Plan lacked critical elements and stated that parts of the report appear to reflect a lack of awareness of the Department’s actions for improving its financial management. In addition, DOD stated that the draft report contained misleading statements and used inflammatory language. We disagree with DOD’s comments. As shown in the following discussion, most of our responses to them can be traced to this fundamental disagreement over the role of financial management in supporting the agency’s operations. First, with respect to our concern that its plan lacks critical elements relative to asset accountability and control, or to budget formulation, DOD stated that the plan was explicitly limited to accounting and finance functions and that the department considers both asset accountability and budget formulation to be outside the scope of accounting and finance functions. DOD’s response further stated that the department does not perform accountability for its nonfinancial resources through its finance and accounting systems and that to do so would, among other things, require an investment of hundreds of millions, perhaps billions, of dollars in new systems or system changes.
Moreover, the Under Secretary of Defense (Comptroller) in his opening statement stated that the plan “addresses both its financial systems and program feeder systems that originate and provide the majority of the financial source data.” The Biennial Plan itself explicitly refers to feeder systems and includes initiatives that are intended to address the need for feeder systems to be fully integrated with accounting and finance systems. For example, the plan included information on initiatives to improve CBSX and REMIS—two systems that provide data on mission critical assets of the Army and Air Force, respectively. Because financial information in such systems includes data on units and condition of assets, those feeder systems must ensure data integrity and be easily reconciled with the accounting and finance systems. Our concern is that the Biennial Plan does not explain how the feeder systems will meet accounting and internal control requirements, such as those related to asset accountability and budget formulation. Among the specific requirements of the CFO Act is that each agency CFO ensure that agency accounting and financial management systems include adequate financial reporting and internal controls. Such systems are to comply with applicable principles and standards and provide complete, reliable, consistent, and timely information that is responsive to the agency’s financial information needs. In addition, if the plan were limited to a narrow view of accounting and finance functions, it could not be used to meet additional regulatory requirements, as DOD intended. For example, FFMIA requires agencies to implement and maintain financial management systems that substantially comply with federal financial management systems requirements, applicable accounting standards, and the standard general ledger. 
FFMIA defines financial management systems to include the financial systems and financial portions of mixed systems (feeder systems) necessary to support financial management. OMB reinforced these concepts in its June 1998 Federal Financial Management Status Report and Five-Year Plan, where it stated a goal of providing high quality financial information on federal government operations which fully supports financial and performance reporting. Further, the Authorization Act that mandated the plan explicitly required that it “address all aspects of financial management within the Department of Defense, including the finance systems, accounting systems, and data feeder systems of the Department that support financial functions of the Department.” Further, our report outlines the potential benefits of modern systems in helping to achieve accountability, including the integration of logistics, accounting, and acquisition data. OMB Circular A-127, which prescribes policies and standards to be followed by executive departments and agencies in developing, operating, evaluating, and reporting on financial management systems, describes a unified set of financial systems as those that are “planned for and managed together, operated in an integrated fashion, and linked together electronically in an efficient and effective manner to provide agency-wide support necessary to carry out an agency’s mission and support its financial management needs.” Thus, OMB requires that a financial management improvement plan include efforts to address asset accountability. Moreover, by DOD’s own estimates, the logistics and other feeder systems necessary to properly account for and to ensure accountability over assets supply over 80 percent of the data used to support DOD’s financial reporting and management. We agree fully with DOD’s comment that commanders, not accountants, should remain responsible for the department’s physical assets. 
Our point is that improved accuracy of feeder system data, with the benefits of controls incorporated into sound financial accounting and reporting, could assist commanders and program managers in fulfilling their asset accountability responsibilities. Systems improvements would not only help DOD comply with accounting and reporting requirements, but would also help provide better information and assurance to program managers to improve efficiency and strengthen accountability. Regarding the costs of new systems or systems changes to meet financial management requirements, we asked DOD for support of its estimate of “hundreds of millions, perhaps billions, of dollars,” but DOD indicated that the number is not supported by a documented cost estimate but rather is an informed approximation based on experience. In this regard, DOD is already spending huge sums to upgrade its feeder systems as well as its accounting and finance systems. For example, our analysis of DOD’s fiscal year 1999 Information Technology Exhibits, which support the department’s overall budget request, showed that DOD has requested a total of about $6 billion for fiscal years 1998 and 1999 to develop new or modify existing systems supporting functions that are likely to include feeder systems. Our point is that these ongoing efforts, with their large investments, should incorporate the requirements needed to achieve an integrated financial management system that meets legislative mandates and implementing guidance. With regard to budget formulation, we did not say or imply that the process used to formulate the budget was deficient or that any changes were needed in PPBS. Our focus was on the need to have that process supported by accurate and timely budget execution and accounting data. This will only happen if the systems are originally designed to include the requirement to link accounting data to support the budget formulation process.
Because budget formulation is excluded from DOD’s concept of operations, the plan does not address how the Department’s Planning, Programming, and Budgeting System will be supported by existing systems, nor how known deficiencies in those systems will be addressed. As stated in our report, such budgeting and accounting integration is called for by the CFO Act, DOD’s regulations, JFMIP, and FFMIA. DOD’s Financial Management Regulation mirrors the requirements in the CFO Act by specifying that the department’s CFO is to develop and maintain an integrated DOD accounting and financial management system, including financial reporting and internal controls, that provides for the integration of accounting and budgeting information. JFMIP’s Framework for Federal Financial Management Systems states that the financial management system should not only support the basic accounting functions for accurately recording and reporting financial transactions, but must also be the vehicle for integrated budget, financial, and performance information that managers use to make decisions on their programs. As stated previously, FFMIA requires that agencies implement and maintain financial management systems that substantially comply with federal financial management systems requirements. The integration of budgeting and accounting is also a key tenet of OMB’s efforts to improve financial management across government. 
Specifically, as part of its June 1998 Federal Financial Management Status Report and Five-Year Plan, OMB set out a vision of an environment where “Program and financial managers work in partnership to achieve the full integration of financial (finance, budget, and cost), program, and oversight information and processes.” Supporting its overall vision, OMB set out a number of goals, including “Building a partnership to ensure the functioning together of information resource management, program management, and financial management, including budgeting.” DOD’s approach, however, unless broadened, will unfortunately ensure continued isolation of functional systems. The approach is inefficient, does not effectively utilize advances in technology, and misses opportunities to better support program managers. Second, DOD stated that it does not agree that the Biennial Plan is critically flawed by the exclusion of a detailed discussion of the (1) links between the over 200 planned improvements and the envisioned future operations to determine whether the proposed transition plan will result in achievement of the target financial management environment and (2) actions to ensure feeder systems’ data integrity. DOD stated that many of the initiatives are intended to improve the department’s financial management in the interim period and others are geared more to the implementation of new processes or systems to replace outdated processes or noncompliant systems. The department stated that more details on each of the individual initiatives are in other documents that supported the initiative and should not be duplicated in the plan. DOD’s response to our draft report reinforces our point that the plan does not present an easily understood explanation of the transition from its current “as is” to the envisioned future financial environment. 
The catalog of initiatives in the plan does not indicate which initiatives are intended to be interim fixes and which are long-term efforts. The plan also does not indicate how those that are interim initiatives will fit in with the long-term initiatives. We have previously reported on this issue in regard to DOD’s technological initiatives identified as key elements of its efforts to improve the contract payment process. In that report, we stated that DOD had not defined how its short- and long-term initiatives, which were independently managed, would work in tandem. The relationship of such tasks or initiatives needs to be articulated clearly to provide a useful strategic vision. As we stated in our draft report, a vital part of any transition plan is a description of how the specific initiatives in the plan bridge the gap between the current environment and the envisioned environment. Third, DOD stated that several characterizations in the report are subjective, misleading, and unnecessarily inflammatory. All of the statements DOD referred to are supported by the results of numerous audit reports produced by us and the DOD audit community. For example, DOD took exception to our statement that the most recent audits of DOD’s financial statements identified material weaknesses in DOD’s ability to maintain accountability and control over virtually every category of physical assets, including military equipment. The DOD Inspector General was unable to render an opinion on DOD’s consolidated financial statements for fiscal years 1996 and 1997 as a result of these material weaknesses. As we stated in our April 1998 testimony on DOD’s serious financial management problems, material financial management deficiencies identified at DOD, taken together, represent the single largest obstacle that must be effectively addressed to achieve an unqualified opinion on the U.S. government’s consolidated financial statements. 
No major part of DOD has been able to pass the test of an independent audit. In the area of critical military weapons systems, we testified that, for fiscal year 1997, the auditors found that DOD’s logistical systems could not be relied upon to provide basic information, such as how many items exist in each asset category, where they are located, and what they are worth. We considered our draft report to be accurate, but we have carefully considered the department’s comments regarding our characterizations and made wording revisions where appropriate. Finally, DOD’s comments are not fully responsive to our recommendations. Overall, DOD indicated that it did not believe that a supplemental plan was necessary because it was already working on detailed follow-on reports. We agree with the department that financial management is an ongoing process that requires continuous attention and updates. However, the items that we identified in our draft report that are currently not covered in the Biennial Plan are so critical to its viability that we continue to believe that the plan should be amended, especially in light of the need for the plan to support investments in systems initiatives. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller), and the Director of the Defense Finance and Accounting Service. We are also sending copies to the Director of the Office of Management and Budget and interested congressional committees and members. Copies will be available to others upon request. If you or your offices have any questions concerning this report, please contact me at (202) 512-9095. Major contributors to this report are listed in appendix III.
Objectives, Scope, and Methodology

To address the requirements of section 912 of the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 (Public Law 105-261), our objectives were to determine (1) whether DOD’s concept of operations included the critical elements necessary for producing sustainable financial management improvement over the long term and (2) whether the transition plan provided a “road map” from the current environment to DOD’s planned future financial management environment. This report also includes a discussion of additional technical details that would be needed to determine whether implementation of the department’s future financial management environment is practical, cost-effective, and feasible. To accomplish our objectives, we obtained the DOD Biennial Plan and compared its contents to the requirements of the National Defense Authorization Act of 1998 and to relevant laws, regulations, standards, and policy guidance documents to determine the plan’s responsiveness to the act’s requirements and the plan’s workability. Specifically, we analyzed the plan’s description of the Secretary of Defense’s concept of how the department carries out its financial management operations. This analysis included whether the Secretary’s concept covered all aspects of integrated financial management, including an integrated financial management system as defined by The Chief Financial Officers Act of 1990, The Federal Financial Management Improvement Act of 1996, The Clinger-Cohen Act of 1996, The JFMIP Framework for Federal Financial Management Systems (January 1995), and OMB Circular A-127, Financial Management Systems.
We also compared the Secretary’s concept for the department’s financial management operations with the essential elements of a concept of operations identified in our reports, Financial Management: Comments on DFAS’ Draft Federal Accounting Standards and Requirements (GAO/AIMD-97-108R, June 16, 1997) and Strategic Information Planning: Framework for Designing and Developing System Architectures (GAO/IMTEC-92-51, June 1992). We conducted our review from October 1998 to December 1998 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Under Secretary of Defense (Comptroller). These comments are presented and evaluated in the “Agency Comments and Our Evaluation” section and are reprinted in appendix II.

Comments From the Department of Defense

The following are GAO’s comments on the Department of Defense’s letter dated January 11, 1999.

GAO Comments

1. See the “Agency Comments and Our Evaluation” section of this report.

2. DOD’s comments misstate our position. We did not say or imply that the process used to formulate the budget was deficient. Our focus was on the need to have that process supported by accurate and timely budget execution and accounting data. We believe that DOD’s budget formulation process can only be as good as the information on which it is based. Audit reports by us and the DOD audit community have identified problems in the accuracy, reliability, and timeliness of that budgetary source information. For example, in our March 1998 report on the implications of Navy audit results, we stated that the financial reporting errors disclosed by the Naval Audit Service affect the budget process because the same incomplete inventory data are used both for the financial statements and as the starting point for the Navy’s process to develop budget requests for additional inventory.
We have also reported that inaccurate management information in the Army’s Installation Facilities System resulted in unreliable budget requests. Further, this report stated that inadequate guidance and inconsistent reporting of information used in the budget development process added to the unreliability of the budget requests. In another example, we reported that errors in CBSX—the system that provides worldwide asset visibility over the Army’s reportable equipment items, including the Army’s most critical war fighting equipment—directly affect whether too few or too many of these critical items are procured.

3. We could not find any explicit reference to a 1999 financial management improvement plan in the 1998 Biennial Plan. Further, we continue to believe that the 1998 plan was deficient in not addressing, at a high level, how known data accuracy problems in feeder systems will be resolved.

4. DOD’s inability to account for the full cost of its operations and the extent of its liabilities has been documented in numerous reports prepared by us and the DOD audit community. Further, DOD’s inability to ensure that the financial resources entrusted to it are used for the purpose intended by the Congress has been repeatedly documented in these reports, including our recent report, Financial Management: Problems in Accounting for Navy Transactions Impair Funds Control and Financial Reporting (GAO/AIMD-99-19, January 19, 1999). In addition, the “new” accounting requirements for the reporting of environmental and disposal liabilities were issued in 1995 and became effective for fiscal year 1997. Moreover, the Congress has required lifecycle environmental costs, including disposal costs, for major defense acquisition programs in DOD’s fiscal year 1995 Authorization Act.
As we reported in a series of recent reports (with which the department concurred), DOD has the disposal cost information available to make these estimates for major weapons systems and other assets in a systematic manner, rather than on a case-by-case basis. To date, however, the department has only developed draft policy guidance for addressing these issues.

5. DOD’s problems in properly accounting for its disbursements remain serious, as we have reported in the past. For example, we reported that DOD’s $18 billion total in problem disbursements as of May 31, 1996, was understated by at least $25 billion. We concluded that neither the Congress nor DOD management can rely on DOD’s reported amount to determine the extent of problem disbursements or to monitor progress made in resolving them. Further, without adequate documentation to support its disbursements—one of the major factors contributing to its inability to resolve its problem disbursements—DOD cannot know the extent to which its payments are fraudulent and improper. In addition, our recent report and testimony on several serious fraud incidents detail the ongoing control weaknesses over the department’s disbursement processes that contributed to the embezzlement of millions of dollars from DOD.

6. As stated in our draft report, we believe that the Biennial Plan should identify specific actions that will ensure feeder system data integrity, rather than the specific details indicated in DOD’s response. As we stated, certain detailed information needed to determine whether the planned environment is practical, cost-effective, and feasible is not within the scope of the high-level strategic financial management plan that DOD was asked to provide and we were asked to analyze.

7.
In the Biennial Plan, the department itself acknowledged that improving its financial management operations represents “a monumental challenge.” As we stated in our April 1998 testimony, no major part of DOD has been able to pass the test of an independent audit; auditors consistently have issued disclaimers of opinion because of pervasive weaknesses in DOD’s financial management operations. Such problems led us in 1995 to put DOD financial management on our list of high-risk areas vulnerable to waste, fraud, abuse, and mismanagement. This designation continued in our recent high-risk update. While we consider our draft report to be accurate, we have carefully considered the department’s comments regarding our characterizations and made wording revisions where appropriate. Further, while DOD takes issue with focusing unwarranted negative attention on problems, we believe that fully identifying the problem and its context is the first and crucial step in properly implementing solutions that will work. For example, we have previously reported that DOD did not develop adequate information to effectively diagnose the causes of problem disbursements, implement solutions, and evaluate progress.

8. We continue to believe that without addressing the critical flaws we have identified, the improvements that can be achieved will be limited relative to the department’s total financial management operations, including asset accountability and budget formulation. While we consider our draft report to be accurate, we have carefully considered the department’s comments regarding our characterizations and made wording revisions where appropriate.

9. The department’s description of the DFAS Corporate Database and Corporate Data Warehouse does not explain, at a high level, how information will be shared between functional areas such as acquisition and logistics. Such sharing is a critical element of a financial management system.
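The data-sharing principle at issue can be sketched concretely. The example below is illustrative only: the table and field names are hypothetical, not drawn from DOD’s plan or the DFAS Corporate Database. It shows the principle the report invokes: data validated once at the original point of entry and then read by every functional area, rather than re-keyed into separate, non-integrated systems.

```python
# Illustrative sketch only: table and field names are hypothetical.
# Shows validate-once-at-entry, then shared reads by all functional areas.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE assets (
    asset_id    TEXT PRIMARY KEY,  -- shared key across functional areas
    description TEXT NOT NULL,
    quantity    INTEGER NOT NULL CHECK (quantity >= 0),
    unit_cost   REAL NOT NULL CHECK (unit_cost >= 0)
)""")

def record_receipt(asset_id, description, quantity, unit_cost):
    """Acquisition enters the data once; integrity rules reject bad input
    before it can propagate to accounting or logistics reports."""
    db.execute("INSERT INTO assets VALUES (?, ?, ?, ?)",
               (asset_id, description, quantity, unit_cost))

record_receipt("A-001", "Spare engine", 4, 250000.0)

# Finance and logistics read the same validated record: no reconciliation
# between separately keyed databases is needed.
total_value = db.execute(
    "SELECT SUM(quantity * unit_cost) FROM assets").fetchone()[0]
print(total_value)  # 1000000.0

try:
    record_receipt("A-002", "Widget", -5, 10.0)  # rejected at point of entry
except sqlite3.IntegrityError:
    print("rejected")
```

With a shared, constrained store of this kind, a bad quantity is stopped where it is first keyed in, instead of surfacing later as a discrepancy between the logistics and accounting systems.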
Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Norfolk Field Office Robert Wagner, Senior Evaluator Susan Mason, Evaluator
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) Biennial Financial Management Improvement Plan to determine whether the: (1) concept of operations included the critical elements necessary for producing sustainable financial management improvement over the long term; (2) transition plan provided a strategic-level road map from the current environment to DOD's planned future financial management environment; and (3) implementation of the department's future financial management environment is practical, cost-effective, and feasible. GAO noted that: (1) in developing this overall concept of its envisioned financial management environment, DOD has taken an important first step in improving its financial management operations; (2) the department's Biennial Plan also represents a significant landmark because it includes, for the first time, a discussion of the importance of the programmatic functions of personnel, acquisition, property management, and inventory management to the department's ability to support consistent, accurate information flows to all information users; (3) in addition, DOD's Biennial Plan includes an impressive array of initiatives intended to move the department from its current state to its envisioned financial management environment; (4) while the initiatives discussed should result in some improvements in DOD's financial management operations, the department's Biennial Plan lacks critical elements necessary for producing sustainable financial management improvement over the long term; (5) specifically, the Plan's discussion of how DOD's financial management operations will work in the future--its concept of operations--does not address: (a) how its financial management operations will effectively support not only financial reporting but also asset accountability and control; and (b) budget formulation; (6) in addition, the transition plan, while an ambitious statement of DOD's planned improvement efforts, 
has two critical flaws: (a) links are not provided between the envisioned future operations and the over 200 planned improvement initiatives to determine whether the proposed transition will result in the target financial management environment; and (b) actions to ensure feeder systems' data integrity are not addressed--an acknowledged major deficiency in the current environment; (7) without identifying specific actions that will ensure feeder system data integrity, it is unclear whether the department will be able to effectively carry out not only its financial reporting, but also its other financial management responsibilities; (8) additional detailed information would be necessary to determine whether implementation of the department's future financial management environment is practical, cost-effective, and feasible; (9) such details are appropriately not included in the strategic financial management improvement plan that DOD was asked to provide; and (10) DOD officials have stated that they recognize that additional information will be necessary and that they are developing further details on these issues.
Background Established in October 1994 by the Attorney General to implement the administration of community policing grants under VCCLEA, the Office of Community Oriented Policing Services announced its first grant programs in 1994. Prior to its establishment, in December 1993 the Department of Justice awarded community policing grants to hire officers under the Police Hiring Supplement. The COPS Office distributed grants in a variety of program funding categories. Hiring grants, which required agencies to hire new officers and at the same time to indicate the types of community policing strategies that they intended to implement with the grants, made up the largest COPS grant program category in terms of the amounts of grant funds obligated. The hiring grants paid a maximum of $75,000 per officer over a 3-year period (or at most 75 percent of an officer’s salary) and generally required that local agencies cover the remaining salary and benefits with state or local funds. From 1994 through 2001, more than $4.8 billion in COPS obligations (or about 64 percent of COPS obligations over this period) were in the form of hiring grants. The Making Officer Redeployment Effective (MORE) grant program, which provided funds to law enforcement agencies to purchase equipment, hire civilians, and redeploy existing officers to community policing, was the second largest COPS grant program, obligating more than $1.2 billion. Additional COPS grant programs provided funds for specific innovations in policing and for a variety of other purposes. Each year the COPS Office was required to distribute half of the grant funds to agencies in communities whose populations exceeded 150,000 persons and half of the grant funds to agencies in communities with populations of 150,000 or fewer persons. In the applications for hiring grants, the COPS Office asked agencies to indicate the types of community policing practices that they planned to implement with their grants.
Community policing practices included identifying crime problems by looking at records of crime trends and analyzing repeat calls for service, working with other public agencies to solve disorder problems, locating offices or stations within neighborhoods, and collaborating with community residents by increasing officer contact with citizens and improving citizen feedback. These types of policing practices also corresponded with general approaches to policing. For example, problem-solving policing practices may rely on crime analysis tools to help to identify crime problems and develop solutions to them. Place-oriented practices attempt to identify locations where crime occurs repeatedly and to implement procedures to disrupt these recurrences of crime. By collaborating with community residents, agencies attempt to improve citizen feedback about crime problems and the effectiveness of policing in addressing those problems. In 2000, DOJ reported that COPS-funded officers helped to reduce crime and that the drop in crime that occurred after 1994 was more than what would have been expected in the absence of the passage of VCCLEA and the creation of the COPS Office. The report suggested that COPS had achieved its goal of funding 100,000 officers, and through increases in officers and the practice of community policing, the COPS program was credited with reducing crime. However, while COPS may have funded 100,000 officers, it was not apparent that all of the funded positions had resulted in newly hired officers. For example, researchers at the Urban Institute estimated in 2000 that by 2003, the COPS program would have raised the level of police on the street by the equivalent of 62,700 to 83,900 full-time equivalent officers. They also indicated that it was unclear whether the program would ever increase the number of officers on the street at a single time by 100,000.
A COPS Office-funded study of the effect of COPS grants on crime from 1995 through 1999 in over 6,000 communities that had received COPS grants concluded that the grants were effective in reducing crime. The study also reported that COPS grants that encouraged agencies to implement a variety of innovative strategies to improve public safety had larger impacts on reducing violent and property crime than did other COPS grant types. However, a study released by the Heritage Foundation, which was based upon an analysis of county-level data, was unable to replicate the findings of the COPS Office-funded study. Specifically, the Heritage study found no effect of COPS hiring grants on crime rates, but it did find that the COPS grants for specific problems—such as gangs, domestic violence, and illegal use of firearms by youth—were associated with reductions in crime. In addition, we questioned whether the sizes of the effects of COPS grants on crime that were reported in the COPS Office-funded study were large enough to be significant in a practical sense and whether they could accurately represent the expected returns on the investment of billions of dollars. Assessing the contribution of COPS funds to the decline in crime during the 1990s presents challenges for evaluators. Nationwide, crime rates began to decline in about 1991, before the COPS program announced its first grant programs in 1994 (fig. 1). Hence, the factors other than COPS grants that precipitated the decline in crime could have continued to influence it throughout the 1990s. Researchers have pointed to a number of factors that could have precipitated the decline in crime, including increased use of prison as a punishment for violent crimes, improved economic conditions, and the subsiding of violence that accompanied the expansion of drug markets.
To the extent that any of these factors are correlated with the distribution of COPS grants, they could be responsible for impacts that have been attributed to COPS grants. Prior studies of the impact of COPS grants on crime have correlated COPS funds with crime rates, controlling for other factors that could influence crime rates. The authors of the prior studies describe various mechanisms by which COPS grants may affect crime, but their statistical models do not explicitly take these mechanisms into account in estimating the effects of the grants. By identifying and explicitly modeling mechanisms through which COPS funds could affect crimes—such as increasing the number of sworn officers on the street who are available for patrolling places or contributing to changes in policing practices that may be effective in preventing crime—the possibility of a spurious relationship between inputs (such as COPS funds) and outcomes (such as crime) can be minimized. (For additional background information, see app. II.) Results Our analysis showed that from 1994 through 2001, COPS obligated more than $7.32 billion to 10,680 agencies for which we were able to link Office of Justice Programs financial data on COPS obligations to the records of law enforcement agencies. About $4.7 billion (or 64 percent) of these obligations were in the form of hiring grants. About half of these funds went to agencies serving populations of 150,000 or fewer persons and about half was distributed to agencies serving populations of more than 150,000 persons. This distribution roughly corresponds to the distribution of index crimes across the two size categories of jurisdictions. However, in relation to violent crimes, the share of COPS funds distributed to larger jurisdictions was smaller than the share of violent crimes that they contributed to the national total. 
For example, agencies serving populations of more than 150,000 persons contributed about 58 percent of all violent crimes reported to the UCR during this time period while receiving about half of all COPS funds. To be specific, the smallest agencies—those serving populations of fewer than 10,000 persons—received an average of $1,573 per violent crime reported to UCR. Agencies serving populations of more than 150,000 persons received about $418 in COPS funds per violent crime. By the end of 2001, the COPS grantee agencies in our sample had spent about $5 billion (or 68 percent of the $7.3 billion obligated to them) from 1994 through 2001. Annually, the total amount of COPS expenditures made by grantees increased each year from 1994 until 2000, and then declined, while the number of agencies that drew down COPS funds peaked in 1998 at about 7,600 and declined to about 6,000 in 2001. From 1994 through 2001, a total of about 10,300 agencies spent COPS funds. From 1998 through 2000, the amount of COPS expenditures per person in the jurisdiction served by an agency increased from about $4 to about $4.80. COPS expenditures amounted to an annual average of about 1 percent of total expenditures for police services by local law enforcement agencies from 1994 through 2001. This contribution varied by year. For example, in 1999 and 2000, COPS expenditures were about 1.5 percent of total local police expenditures. (See app. III for a further discussion of COPS obligations and expenditures.) For the years 1994 through 2001, we infer from our estimates that COPS hiring grant expenditures contributed to increases in sworn officer levels above the levels that would have been expected without these funds. The additional number of sworn officers stemming from these funds varied over the years, and it increased from 1994 through 2000 and declined in 2001 (fig.
2). For example, for 1997 we estimate that COPS funds contributed about 14,000 additional officers in that year—or about 2.4 percent of the total number of sworn officers nationwide—and for 2000, COPS funds contributed about 17,000 additional officers—or about 3 percent of the total number of sworn officers nationwide. For all years from 1994 through 2001, we estimate that COPS expenditures paid for a total of about 88,000 additional officer-years over this entire period, where the total number of officer-years equals the sum of the number of officers due to COPS grant expenditures in each year. An officer-year refers to the number of officers in a given year that we could attribute to COPS expenditures, and the additional officers in a given year attributable to COPS expenditures represent a net addition to the stock of sworn officers. Using the results from our regression estimates of the effects of COPS expenditures on the level of sworn officers, we set the values for COPS expenditures to zero to predict the level of officers absent COPS funds. The difference between this number and the actual number of sworn officers yields the number of officers due to COPS expenditures. Our analysis also shows that apart from the COPS hiring and COPS MORE grants, other COPS grant types did not have a significant effect on officer strength. (See app. IV for more detailed information about the results of our analysis of COPS expenditures on officers.) We estimate that the COPS grant expenditures contributed to the reduction in crime in the 1990s independently of other factors that we were able to take into account in our analysis. 
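The officer-year bookkeeping described above (predict officer levels with COPS expenditures set to zero, take the difference from the actual prediction, and sum across years) can be sketched in a few lines. The coefficient `beta_cops` and the yearly figures below are hypothetical stand-ins, not the report's estimates, which came from fixed-effects regressions on agency-level data.

```python
# Hypothetical effect of $1 per capita COPS spending on officers per capita.
beta_cops = 2e-5

# Hypothetical national aggregates for two years.
years = {
    1997: {"cops_per_capita": 4.0, "population": 200_000_000},
    2000: {"cops_per_capita": 4.8, "population": 205_000_000},
}

def officers_due_to_cops(year_data, beta):
    """Officers predicted with COPS spending minus the prediction with it zeroed.

    With a linear model, that difference reduces to beta * spending * population.
    """
    return beta * year_data["cops_per_capita"] * year_data["population"]

# Officer-years: the sum over years of the officers attributable to COPS.
officer_years = sum(officers_due_to_cops(d, beta_cops) for d in years.values())
print(round(officer_years))  # prints: 35680
```

The same difference-from-counterfactual logic applies year by year, which is why the report's officer-year total is simply the sum of the yearly COPS-attributable officer counts.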
Other factors that could have contributed to the reduction in crimes in the 1990s that we took into account included federal law enforcement expenditures other than COPS grants, local economic conditions and changes in population composition, and changes in state-level policies and practices that could be correlated with crime, such as incarceration and sentencing policy. Specifically, from our model of the effect of changes in sworn officers on crime, we estimate that a 1 percent increase in the number of sworn officers per capita would lead to a 0.4 percent reduction in the total number of index crimes. Through their effects on changes in officers in a given year, COPS expenditures led to varying amounts of declines in crime rates over the years from 1994 through 2001. For example, the 2.4 percent increase in sworn officers due to COPS expenditures in 1997 was responsible for about a 1.1 percent decline in the total index crime rate from 1993 to 1997, while the roughly 3 percent increase in officers due to COPS expenditures in 2000 was responsible for about a 1.3 percent decline in the total index crime rate from 1993 to 2000. Put in another context, the total crime rate declined from 5,904 per 100,000 persons in 1993 to 4,367 per 100,000 persons in 2000, or by about 26 percent. Of this 26 percent drop, we attribute about 5 percent to the effect of COPS. From our analysis of violent crimes, we estimated that declines in the violent crime rate due to COPS expenditures also varied with the level of officers due to COPS funds. The declines in violent crime rates attributable to COPS increased from about 2 percent in 1997 to 2.5 percent in 2000, where both declines are measured relative to the 1993 violent crime rate (fig. 3). We further estimate that at its peak in 1998, COPS accounted for about a 1.2 percent decline in the property crime rate.
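The elasticity arithmetic above can be checked directly with the report's rounded figures; only the elasticity of -0.4 and the two crime rates come from the text, and the function name is our own.

```python
# Worked check of the elasticity arithmetic, using the report's rounded figures.
elasticity = -0.4  # percent change in index crime per 1 percent change in officers

def crime_change(pct_officer_increase):
    """Percent change in the crime rate implied by a percent change in officers."""
    return elasticity * pct_officer_increase

# A 2.4 percent officer increase implies roughly a 1 percent crime decline;
# the report's 1.1 percent reflects rounding in the underlying estimates.
effect_1997 = crime_change(2.4)

# Share of the 1993-2000 drop in the index crime rate attributable to COPS.
rate_1993, rate_2000 = 5_904, 4_367  # index crimes per 100,000 persons
total_drop_pct = (rate_1993 - rate_2000) / rate_1993 * 100  # about 26 percent
cops_share_pct = 1.3 / total_drop_pct * 100                 # about 5 percent
print(round(total_drop_pct), round(cops_share_pct))  # prints: 26 5
```

The 5 percent share is thus just the COPS-attributable 1.3 percentage-point decline expressed as a fraction of the overall 26 percent decline.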
Our estimates of the impacts of COPS expenditures on the broad categories of crime are supported by our findings from our crime-type- specific regression models. We find significant reductions due to COPS expenditures for the crimes of murder and non-negligent manslaughter, robbery, aggravated assault, burglary, and motor vehicle theft. Our analysis of larceny indicates that while the relationship between COPS funds and larceny is a negative one, it is not statistically significant, nor is the effect of COPS on rape statistically significant. Further, we estimated the effects of COPS grants on the total crime rate under various assumptions about lags between the receipt of COPS grants and expenditures of COPS funds. The estimates for the amount of the decline in the total crime rate that we report here—for example, the 1.3 percent of the decline in crime from 1993 to 2000—are among the smallest effects that we estimated from our various models. Under different assumptions about lags associated with COPS expenditures, the amount attributable to COPS could be as high as 3.2 percent. Interestingly, the 1.3 percent decline in the index crime rate that we attribute to COPS expenditures in 2000 is on the same order of magnitude as the contribution of COPS expenditures to total local spending on police. In 2000, for example, COPS expenditures accounted for about 1.5 percent of total local police expenditures. We further find that factors other than COPS expenditures account for the majority of the decline in the crime rate. (See app. IV for more detailed information about the results of our analysis of COPS expenditures on crime.) 
Our regression analysis of the Policing Strategies Survey data for 1993 and 1997 indicates that receipt of a COPS grant and the amount of per capita COPS expenditures by agencies were associated with increases in the agencies’ reported use of problem-solving and place-oriented policing practices but not crime analysis and community collaboration policing practices (fig. 4). According to our review of studies of the effectiveness of policing practices, problem-solving and place-oriented practices are among those that the crime literature indicates may be effective in reducing crime. With problem-solving practices, police focus on specific problems and tailor their strategies to them. Place-oriented practices include efforts to identify the locations where crime repeatedly occurs and to implement procedures to disrupt these recurrences of crime. Crime analysis includes the use of tools such as geographic information systems to identify crime patterns. Community collaboration includes attempts to improve or enhance citizen feedback about crime problems and the effectiveness of policing efforts to address them. In our regressions, we controlled for the underlying trends in the reported adoption of policing practices, agency characteristics, and local economic conditions. Our analysis of the National Evaluation of COPS Survey—which measured practices in 1996 and again in 2000—showed that while COPS grantee agencies increased their reported use of all policing practices combined, these changes were not statistically significant in regressions that controlled for the underlying trends in the reported adoption of policing practices, agency characteristics, and local economic conditions. (See app. V for more detailed information about the results of our analysis of COPS expenditures and policing practices.)
Concluding Observations While we find that COPS expenditures led to increases in sworn police officers above levels that would have been expected without these expenditures and, through those increases, to declines in crime, we conclude that COPS grants were not the major cause of the decline in crime from 1994 through 2001. Other factors—which other researchers have attempted to sort out—combined to contribute more to the reduction in crime than did COPS expenditures. This is not surprising, as COPS expenditures—while a large federal investment in local law enforcement—made a comparatively small contribution to local law enforcement expenditures for policing. Nevertheless, our analysis shows that COPS grant expenditures did reduce crime during the 1990s. Our models isolate the effects of COPS expenditures from the effects of other factors. We cannot identify another variable that is correlated with changes in COPS expenditures, officers, and crime rates in local communities that would explain away our findings. Thus, we conclude that the results of our model are sound. Further, our results do not address whether the COPS program met its goal of putting 100,000 officers on the street—and the evidence suggests that while it funded more than 100,000 officers, it may have fallen short of achieving this goal. Still, through the increases in officers that we attribute to COPS expenditures, we find that COPS grants affected crime rates. Therefore, as a demonstration of whether a federal program can affect crime through hiring officers and changing policing practices, the evidence indicates that COPS contributed to declines in crime above the levels of declines that would have been expected without it. Our work cannot identify an optimum number of officers needed by any individual agency to maximize the effect of officers on reducing crime, nor can it identify the specific types of practices that agencies should adopt in particular settings.
It is highly likely that there is indeed a point where additional officers no longer affect crime. The numbers of additional officers hired as a result of COPS were relatively small compared with the sizes of individual police agencies, and these small increases led to commensurate reductions in crime rates. Given resource constraints and competing priorities at all levels of government, it is unlikely that most police agencies would have the resources available to hire large enough numbers of officers to go past the point of diminishing returns. Agency Comments and Our Evaluation We provided a draft of this report to the Attorney General for comment on September 13, 2005. In its written comments, the Office of Community Oriented Policing Services (COPS) drew upon information from both this report and our prior correspondence on the effects of COPS grants on crime. They said that we were careful and diligent in our research, and that our findings support conclusions reached by others and correspond with what local law enforcement leaders report. The COPS Office also expanded upon some of our main findings, which they characterized correctly. In their comments, the COPS Office introduced data and opinions about potential effects of the COPS grants that were beyond the scope of our work. We therefore cannot corroborate these statements. For example, in discussing our findings about the effects of COPS grants on sworn officers, the COPS Office introduced data about officers derived from the MORE technology grants and reports that 42,058 (or 36 percent) of the 118,397 officers that the COPS Office has funded to date are derived from the MORE grants. Our work does not corroborate either of these figures. We point out in appendix VI that our estimates of a total of 88,000 additional officer-years take into account the effects of MORE grant expenditures.
In their comments on our finding about changes in policing practices that resulted from COPS, the COPS Office points out that the aggregate counts of policing practices that we used in our analysis provide only a superficial measure of the level of community policing taking place. We acknowledged this point in appendix VII, but chose not to speculate on the extent to which police departments increased the amount of problem solving or other policing practices they engaged in. By speculating that agencies may have increased the quantity of a specific activity, the COPS Office provides only one view of what may have happened. Another view, proffered by policing researchers, is that there is little evidence to suggest that problem-solving policing was implemented with sufficient rigor in enough departments to have contributed to declines in violent crime during the 1990s. As they point out, problem-solving activities may have increased, and they may have contributed to declines in crime, “but we simply do not know.” We are sending copies of this report to other interested congressional committees and the Attorney General. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Laurie Ekstrand at (202) 512-8777 or by e-mail at Ekstrandl@gao.gov or Nancy Kingsbury at (202) 512-2700 or by e-mail at Kingsburyn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology In response to a request from F. 
James Sensenbrenner, Jr., Chairman, Committee on the Judiciary, House of Representatives, this report provides the findings of our evaluation of the impact of Community Oriented Policing Services (COPS) grants on the decline in crime that occurred during the 1990s. Our objectives were to address interrelated questions about COPS funds, officers, crime, and policing practices. Specifically, regarding COPS funds: (1) From 1994 through 2001, how were COPS obligations distributed among local law enforcement agencies in relation to the populations they served and crimes in their jurisdictions, and how much of the obligated amounts did agencies spend? Regarding officers and crime: (2) To what extent did COPS grants contribute to increases in the number of sworn officers and declines in crime in the nation during the 1990s? Regarding policing practices: (3) To what extent were COPS grants during the 1990s associated with police departments adopting policing practices that the crime literature indicates could contribute to reductions in crime? Overview of Our Approach and Methodology To address our reporting objectives, we analyzed a database consisting of 12 years of data from 1990 through 2001 on local law enforcement agencies. To create this database—our primary analysis database—we obtained data from several sources, and we organized the data as a panel dataset in that it contained information on multiple law enforcement agencies over multiple years. For each agency, we obtained data on COPS and other federal law enforcement grant obligations and expenditures from the Department of Justice’s (DOJ) Office of Justice Programs (OJP), and data on index crimes and the number of sworn officers from the Federal Bureau of Investigation’s (FBI) Uniform Crime Reporting (UCR) Program. 
Index crimes include the violent crimes of murder and non- negligent manslaughter, forcible rape, robbery, and aggravated assault, as well as the property crimes of burglary, larceny-theft, motor vehicle theft, and arson. As shown in table 1, in 2002, property crimes constituted 88 percent of the 11,877,218 index crimes. Among violent crimes, robberies accounted for 3.5 percent of all index crimes, and aggravated assaults accounted for 7.5 percent. We obtained data on some of the factors that the research literature on crime suggests are related to changes in crime. From the Department of Commerce’s Bureau of Economic Analysis, we obtained data on local economic conditions—including employment rates and per capita income—and from the National Center for Health Statistics and the U.S. Census Bureau—we obtained data on demographic variables—including the percentage of the population aged 15 to 24, and the racial and gender composition of the population. We also analyzed data from two surveys of nationally representative samples of police departments on the policing practices that they reportedly implemented in various years from 1993 to 2000. We refer to the first survey as the Policing Strategies Survey, and it was administered in 1993 and again in 1997. We refer to the second survey as the National Evaluation of COPS Survey, as it was completed as part of the Urban Institute’s national evaluation of the implementation of the COPS program, and we used the data from the surveys that were administered in 1996 and 2000. The multiple administrations of each survey allowed us to analyze changes in policing practices. Using agency and year identifiers, we matched and merged data from our primary analysis database with the agency-level records in each of the surveys. 
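The matching-and-merging step works like any panel-data linkage on shared keys. A minimal pandas sketch with hypothetical column names (the actual OJP, UCR, and survey files carry different identifiers and many more fields):

```python
import pandas as pd

# Hypothetical fragments of two sources; "ori" stands in for the agency
# identifier used to link records, and the values are invented.
finance = pd.DataFrame({
    "ori": ["NY001", "NY001", "CA002"],
    "year": [1996, 1997, 1996],
    "cops_expenditures": [120_000.0, 95_000.0, 310_000.0],
})
survey = pd.DataFrame({
    "ori": ["NY001", "CA002"],
    "year": [1997, 1996],
    "problem_solving_practices": [4, 6],
})

# An inner join on agency and year keeps only agency-years present in both
# sources, mirroring the report's linkage of financial data to survey records.
merged = finance.merge(survey, on=["ori", "year"], how="inner")
print(len(merged))  # prints: 2
```

Only the (NY001, 1997) and (CA002, 1996) agency-years appear in both frames, so only those two rows survive the join; the same logic, applied to roughly 11,000 agencies over 12 years, produced the primary analysis database.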
Prior to developing and analyzing our database, we assessed the reliability of each data source, and in preparing this report, we used only the data that we found to be sufficiently reliable for the purposes of our report. In addition, to identify policing practices that are considered to be effective in preventing crime, we analyzed reviews of research and evaluation literature. We also reviewed relevant economic and criminological literatures that addressed issues related to estimating models of the effects of federal grant funds on crime rates. We spoke with officials at the Department of Justice about the operation of the COPS programs, and we also spoke with researchers about our approach and methods. We reviewed our approach and methods with a group of experts in the field of policing and crime. The group consisted of criminologists, economists, statisticians, and criminal justice practitioners, and was convened for us by the National Research Council of the National Academies to enable participants to offer their individual views as experts in the field. We conducted our work between January 2004 and August 2005 in accordance with generally accepted government auditing standards. Methods Used to Address the Flow of Funds Reporting Objective To address our first objective, we analyzed OJP financial system data on grant obligations and expenditures and UCR data on the size of populations served by agencies and crimes occurring within the jurisdictions of the agencies that reported crime to the UCR. We used the OJP financial data to compute the amount of COPS funds obligated by COPS grants and the amount expended by local police agencies during the period from 1994 through 2001. 
To describe the overall COPS funding trends by grant type, we analyzed the universe of agencies in the OJP data that received any federal law enforcement grant during the period from 1990 through 2001, regardless of whether or not the agency received a COPS grant during the period and regardless of whether we were able to link the data from these agencies to records in the UCR. For the years from 1990 through 2001, the OJP data show that 13,332 agencies received any federal law enforcement grant. For analyses of COPS funds by agency population sizes and for comparisons of funding levels with levels of violent and total index crime, we limited our analysis to the sample of agencies whose crime and population data we were able to link to the OJP data. This resulted in a sample of 11,187 agencies in our primary analysis database. These 11,187 agencies accounted for 86 percent of the reported crimes in the UCR data that we received from the FBI. The COPS Office distributed grants in a variety of programs. To describe the amounts of COPS obligations and expenditures, we organized the COPS grant programs into four broader categories of grants, and we reported our results at the level of these broader categories. The four categories are Hiring, Making Officer Redeployment Effective (MORE), Innovative, and Miscellaneous grants. The specific grant programs within each category, along with the amounts obligated under each from 1994 through 2001, are shown in table 2. In our analysis, we compared the distribution of COPS obligations with the distribution of crimes contributed by agencies serving populations of 150,000 or fewer persons and those serving more than 150,000 persons. We used the UCR population figures to identify agency size and to count crimes. The UCR population may not reflect the population that agencies provided on the applications for COPS grants.
Our analysis of the distribution of COPS funds describes the extent to which the distribution of funds is related to agency size—as measured by populations served—and the distribution of violent crimes.

Methods Used to Address the Effects of COPS Expenditures on Officers and Crime

To assess the effects of COPS expenditures on the number of sworn officers and crime, we developed and estimated a two-stage regression model of these relationships. In the first stage, we estimated the relationship between per capita COPS expenditures and per capita sworn officer rates in the agencies included in our sample. The per capita measures were based upon the UCR population for the jurisdiction covered by an agency. In the second stage, we estimated the relationship between changes in per capita COPS expenditures and changes in crime rates per 100,000 persons. Because officer levels and crime rates may be causally interrelated, we used COPS hiring grants as an instrument to help to identify the relationship between officers and crime. To use COPS hiring grant expenditures as an instrument for sworn officers, we made use of the fact that, unlike the purposes of other COPS grant types, the purpose of hiring grants was limited to hiring officers. Conditional on the number of officers, variation in hiring grant expenditures should be uncorrelated with other determinants of crime. From our regression results, we calculated the elasticity of crime with respect to officers, that is, the effect of a 1 percent change in the level of officers on the percentage change in crime. To assess the robustness of our results, we estimated several specifications of our crime rate regression and calculated the elasticities of crime with respect to officers for each specification. We estimated these equations separately for each type of index crime. We compared the range of our estimated elasticities with those in the published literature on officers and crime.
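As an illustration of the identification strategy described above, the following sketch computes a simple instrumental-variables slope estimate and the implied elasticity. It is a minimal stand-in for our actual two-stage panel models, and all of the data, variable names, and magnitudes are hypothetical.

```python
# Minimal sketch of using hiring grant expenditures as an instrument
# for sworn officer levels. All numbers below are invented for
# illustration; they are not GAO's actual data or estimates.

def mean(xs):
    return sum(xs) / len(xs)

def demeaned_cross_product(a, b):
    """Sum of (a_i - mean(a)) * (b_i - mean(b))."""
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))

def iv_slope(instrument, regressor, outcome):
    """Simple instrumental-variables (Wald-type) slope: cov(z, y) / cov(z, x)."""
    return (demeaned_cross_product(instrument, outcome)
            / demeaned_cross_product(instrument, regressor))

# Hypothetical per capita values for a handful of agency-years:
hiring_grants = [0.0, 1.0, 2.0, 3.0]          # z: hiring grant $ per capita
officers = [10.0, 12.0, 14.0, 16.0]           # x: officers per 10,000 persons
crime = [500.0, 480.0, 460.0, 440.0]          # y: crimes per 100,000 persons

beta = iv_slope(hiring_grants, officers, crime)
# With these constructed numbers, each additional officer per 10,000
# persons is associated with 10 fewer crimes per 100,000 persons.
print(beta)  # -10.0

# Elasticity of crime with respect to officers, evaluated at the means:
elasticity = beta * mean(officers) / mean(crime)
print(round(elasticity, 2))  # -0.28
```

The elasticity line shows how a level coefficient is converted to the percentage-change interpretation used in the report; the actual models were estimated on panel data with fixed effects and other controls.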
To estimate COPS’ contribution to the national decline in crime, we projected our sample results to the nation as a whole by weighting our results by the ratio of the total population in the United States to the population in the sample of agencies included in our analysis. In our regression models of the effects of COPS grant expenditures on officers and crime, we organized our primary analysis database as a panel dataset, and we limited our analysis to the 4,509 law enforcement agencies that served populations of 10,000 or more persons and reported complete crime data for at least 1 year from 1990 through 2001. The number of agencies that reported complete crime data and served populations of 10,000 or more persons varied over time: in 1990, about 23 percent of all agencies in the UCR data that we received from the FBI met these criteria, and in 2001, about 21 percent did. However, these agencies also reported the majority of crimes to the UCR. From 1990 through 2001, these agencies reported between 86.8 percent and 88.8 percent of all index crimes in the UCR data that we received from the FBI. Because of data concerns with agencies serving populations of fewer than 10,000 persons, we omitted these agencies from our analysis. We used fixed-effects regression models to estimate the relationships among COPS expenditures, officers, and crime. Given that we included agencies based on the completeness of their crime data in each year, and agencies provided complete crime data in different numbers of years over our 1990 through 2001 analysis period, our models used an unbalanced panel approach. In all of our models, we expressed expenditures, officers, and crime in per capita amounts.
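The population-ratio weighting described in the first sentence above can be sketched in a few lines. The figures below are hypothetical placeholders, not our actual population counts or estimates.

```python
# Minimal sketch of projecting a sample-level result to the nation by
# the ratio of the U.S. population to the population covered by the
# agencies in the analysis sample. All figures are hypothetical.

def project_to_nation(sample_crime_reduction, us_population, sample_population):
    """Scale a crime reduction estimated on the sample up to the nation."""
    return sample_crime_reduction * (us_population / sample_population)

# Hypothetical inputs: a 120,000-crime reduction estimated for agencies
# covering 200 million persons, projected to a 280-million-person nation.
national_estimate = project_to_nation(120_000, 280_000_000, 200_000_000)
print(national_estimate)  # 168000.0
```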
The fixed-effects models provide estimates of the amount of change in our dependent variables—the per capita sworn officer rate and the per capita crime rates—that can be attributed to changes in the per capita COPS hiring grant expenditures, controlling for other factors that could also contribute to changes in the per capita sworn officer rate. Our models included agency and year fixed effects to control for unobserved differences between agencies and changes over time within agencies in factors that could contribute to declines in crime. We introduced state-by-year fixed effects into our regressions to control for factors occurring at the state level—such as changes in incarceration or state sentencing practices—that could affect crime rates. Further, we included in our models variables that classify each agency into categories based upon its pre-1994 trends in the growth of officers and crime. These growth cell variables allow us to make comparisons between agencies that were similar in their pre-COPS program trends but that varied in the timing and amount of COPS expenditures. Finally, we included in our models measures of other federal law enforcement grant programs that also provided funds to state and local law enforcement agencies for hiring officers and other crime-prevention purposes. Specifically, we included measures of the per capita expenditures on Local Law Enforcement Block Grants, which local governments could use to hire law enforcement officers, pay overtime, or purchase equipment, among other purposes. Because of data limitations, we were unable to track amounts of the Edward Byrne Memorial State and Local Law Enforcement Assistance (Byrne Formula Grant Program) grants that went to local agencies. Byrne Formula Grant funds could be used to provide for personnel, equipment, training, technical assistance, and information systems, among other purposes.
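A simplified way to see what the agency fixed effects in these models do is the "within" transformation, which demeans each variable by agency so that only within-agency changes over time drive the estimates. Our actual models also include year and state-by-year effects and the other controls described above; the agencies and values in this sketch are invented.

```python
# Minimal sketch of the fixed-effects ("within") transformation:
# subtract each agency's own mean from its observations, so estimates
# reflect within-agency changes over time. Data are hypothetical.
from collections import defaultdict

def within_transform(panel):
    """Demean values by agency.

    panel: list of (agency_id, value) tuples, one per agency-year.
    Returns the demeaned values in the same order.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for agency, value in panel:
        totals[agency] += value
        counts[agency] += 1
    means = {a: totals[a] / counts[a] for a in totals}
    return [value - means[agency] for agency, value in panel]

# Two hypothetical agencies; an unbalanced panel simply has different
# numbers of years per agency, as in our analysis database.
panel = [("A", 400.0), ("A", 380.0), ("A", 360.0),
         ("B", 900.0), ("B", 910.0)]
demeaned = within_transform(panel)
print(demeaned)  # [20.0, 0.0, -20.0, -5.0, 5.0]
```

After the transformation, each agency's demeaned values sum to zero, so level differences between agencies no longer influence the slope estimates.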
In addition to the formula grant program, there was also a Byrne discretionary grant program, and we included measures for these grants. In appendix VI, we provide the details about the specific models that we estimated and our methods for calculating elasticities of the relationship between changes in officers and changes in crime rates.

Methods to Assess Changes in Policing Practices

To assess whether COPS funds contributed to changes in policing practices, we analyzed data from the Policing Strategies and National Evaluation of COPS surveys, two nationally representative surveys of local law enforcement agencies that asked about the types of policing practices that the agencies reported implementing in various years. In each survey, chief executives or their designees were presented a list of policing practices and asked to indicate whether their agency implemented the practice. We classified items in the surveys into four categories of policing practices corresponding to general approaches to policing identified in the criminal justice literature: problem-solving practices, place-oriented practices, community collaboration activities, and crime analysis activities. Problem-solving practices call for police to focus on specific problems and tailor their strategies to the identified problems. Place-oriented practices include attempts to identify the locations where crime occurs repeatedly and to implement procedures to disrupt these recurrences of crime. Community collaboration practices include improving citizen feedback about crime problems and the effectiveness of policing efforts to address these problems. Crime analysis includes the use of tools such as geographic information systems to identify crime patterns. These tools may help an agency support other practices for preventing crime, such as problem-solving and place-oriented practices.
For each agency in a survey, we created a summary index of the number of such practices that agencies reportedly implemented in the years in which the surveys were administered. We then compared mean levels of reported practices between groups of agencies that participated in the COPS program and those that did not participate in the program. We used the data from the Policing Strategies Survey to make pre- and within-COPS program comparisons of changes in reported policing practices in 1993 and in 1997. Levels of reported practices among agencies that received COPS grants were compared with levels among agencies that were not funded by COPS grants over this period. We used the National Evaluation of COPS Survey to compare levels of practices in 1996 and 2000 between groups of agencies that received COPS grants and those agencies that were not funded by COPS over this period. In appendix VII, we provide additional details about the surveys and our methods for analyzing the survey data. To assess changes in reported practices in relation to participation in the COPS program, we estimated separate regression models of the effects of the receipt of a COPS grant and per capita COPS expenditures on changes in reported policing practices, controlling for various characteristics of agencies and underlying trends in the reported adoption of policing practices. To identify policing practices that may be effective in reducing crime, we analyzed six studies that provided summaries of research on the effectiveness of policing practices and activities on reducing crime. We chose to review studies that reviewed research, rather than reviewing all of the original studies themselves, because of the volume of studies that have been conducted on the effectiveness of policing practices. (See app. VII for a list of the studies that we reviewed and additional details on policing practices and crime.) 
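The survey comparison described above can be sketched as a summary index plus a simple difference-in-differences of group means; our actual analysis used regression models with additional controls. The survey responses and index values below are invented for illustration.

```python
# Minimal sketch of the practice-index and group-comparison approach.
# All responses and numbers are hypothetical.

def practice_index(responses):
    """Count the policing practices an agency reported implementing."""
    return sum(1 for implemented in responses if implemented)

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(grantee_pre, grantee_post, other_pre, other_post):
    """(Change in mean index among COPS grantees) minus
    (change among agencies not funded by COPS)."""
    return ((mean(grantee_post) - mean(grantee_pre))
            - (mean(other_post) - mean(other_pre)))

# An agency reporting 2 of 3 listed practices gets an index of 2:
print(practice_index([True, False, True]))  # 2

# Hypothetical indices for two survey waves (e.g., 1993 and 1997):
grantee_pre, grantee_post = [4, 6], [9, 11]  # grantee mean rises 5 -> 10
other_pre, other_post = [3, 5], [5, 7]       # non-grantee mean rises 4 -> 6
print(diff_in_diff(grantee_pre, grantee_post, other_pre, other_post))  # 3.0
```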
Database Construction and Samples Used in Our Analyses

To construct our primary analysis database, which consisted of 12 years of data from 1990 through 2001 for law enforcement agencies that reported at least 1 complete year of crime data to the FBI’s Uniform Crime Reporting Program, we obtained data from several sources and merge-matched information from these various sources to the level of the local law enforcement agency. The sources of data that we used to compile the annual observations from 1990 through 2001 on local police departments included the following:

Office of Justice Programs Financial Data—Annual data on the obligation and expenditures on each grant awarded by OJP. Obligations refer to the funds that are expected to be paid on a grant, and expenditures refer to the grant funds that have been paid to a recipient. Because OJP and the COPS Office share data on awards, the OJP data also included COPS grant obligation and expenditure amounts. We used data on grant obligation amounts to, and annual amounts expended by, each recipient of a community-oriented policing (or COPS) grant, and annual amounts of other federal local law enforcement grants expended both by agencies that received COPS funds and those that did not. We used information about place codes and OJP vendors to link these data to our other sources.

The UCR—Annual data files on the number of crimes and sworn officers reported by each agency to the UCR. The data on sworn officers represent the reported number of full-time officers in each agency on October 31 of each year. We analyzed the number of sworn officers per 10,000 persons in the covered jurisdiction. We analyzed data on the violent crimes of murder and non-negligent manslaughter, forcible rape, robbery, and aggravated assault, and the property crimes of burglary, larceny-theft, and motor vehicle theft.
We analyzed the crime rate per 100,000 persons in the covered jurisdiction for each type of crime, as well as the rates for all index crimes, violent crimes, and property crimes. We used the originating agency identifier (ORI) variable and place codes to link crime and officer data to other data sources.

Bureau of Economic Analysis (U.S. Department of Commerce)—Annual county-level estimates of per capita income and employment for each year from 1990 through 2001. We included in our analysis of officers, crime, and policing practices measures of economic factors that are related to crime, such as the employment-to-population ratio and per capita income. We linked these data to agency-level data using place codes. Local economic conditions within each county are applied to each agency within a county.

National Center for Health Statistics (NCHS) and U.S. Census Bureau—Annual estimates of the United States resident population for each county from 1990 through 2001. Data obtained include population totals and population breakdowns by gender, race, and age. Under a collaborative arrangement with the U.S. Census Bureau and with support from the National Cancer Institute, NCHS prepared postcensal population estimates for 2000 through 2001. The Census estimates of county population from 1990 through 1999 are updated to take into account these postcensal estimates. We included in our analysis of officers, crime, and policing practices measures of demographic factors that are related to crime, such as the percentage of total population in the 15-to-24 age group—an age group associated with high crime rates—and the racial composition of populations. We linked these data to agency-level data using place codes.
Law Enforcement Agency Identifiers Crosswalk (Bureau of Justice Statistics)—The crosswalk file provides geographic and other identification information for each record included in either the Federal Bureau of Investigation’s Uniform Crime Reporting Program files or the Bureau of Justice Statistics (BJS) Directory of Law Enforcement Agencies (DLEA). The main variables each record contains are the UCR originating agency identifier number, agency name, mailing address, Census Bureau’s government identification number, and Federal Information Processing Standards (FIPS) state, county, and place codes. We used FIPS codes to merge records from the crosswalk with OJP financial data and then used agency ORI codes to merge the crosswalk and financial data with crime data from the UCR.

Data Used in Our Analysis of Obligations and Expenditures

To report on COPS obligations and expenditures, we first analyzed the amounts reported in OJP financial data before we merged the financial information onto the agency-level crime records in the UCR. In the OJP data, each record represents either an obligation or an expenditure amount, and an agency appears in the database each time it has either an obligation or an expenditure. The total amount of obligations for COPS grants for the 1990 through 2001 period in the OJP data was $7.62 billion. Second, we linked the OJP financial data to agency information in the BJS crosswalk file. We used agency identifying information in the OJP financial data—such as FIPS state, county, and place codes—to link OJP records with agencies in the crosswalk file. This resulted in our identifying 13,332 agencies that had at least one record of an obligation in the OJP financial data. Of these, 10,680 (or 80 percent) received at least one COPS grant, and among the agencies that received COPS grants, the total amount of COPS obligations was $7.32 billion (or 96 percent of all COPS obligation amounts).
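The two-step linkage just described (FIPS codes to join OJP financial records to the crosswalk, then ORI codes to join the result to UCR records) can be sketched as follows. The field names, codes, and records are invented for illustration, not actual crosswalk or OJP contents.

```python
# Minimal sketch of merge-matching financial, crosswalk, and crime data
# at the agency level. All records and identifiers are hypothetical.

crosswalk = [  # BJS crosswalk: one record per agency
    {"fips": "06-001-123", "ori": "CA0010100", "name": "Alpha PD"},
    {"fips": "48-201-456", "ori": "TX1010000", "name": "Beta PD"},
]
ojp = {"06-001-123": {"cops_expended": 250_000.0}}  # OJP data keyed by FIPS
ucr = {"CA0010100": {"index_crimes": 1_250, "population": 50_000}}  # keyed by ORI

merged = []
for agency in crosswalk:
    financial = ojp.get(agency["fips"])  # step 1: FIPS link to OJP data
    crime = ucr.get(agency["ori"])       # step 2: ORI link to UCR data
    if financial is not None and crime is not None:
        merged.append({**agency, **financial, **crime})

print(len(merged))                 # 1: only Alpha PD links to both sources
print(merged[0]["cops_expended"])  # 250000.0
```

Agencies that fail either link (like the hypothetical Beta PD) drop out of the merged sample, which is why the linked samples described in this appendix are smaller than the full universes of agencies.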
Third, to describe the distribution of obligations relative to agency population and crime, we selected agencies that reported complete crime data—12 months of crime data within a given year—in at least 1 year from 1990 through 2001, and we merged their records onto the records of the agencies for which we had OJP financial information. This last group contained 11,187 agencies, and 8,819 (or 78.8 percent) of these agencies received at least one COPS grant. The total amount of COPS obligations among these agencies was $6.01 billion (or 79 percent of the total amount of COPS obligations from 1994 through 2001).

Data Used in Our Analysis of Officers and Crime

To analyze the impacts of COPS expenditures on officers and crime, we started with the UCR data and included in our samples agencies that met specific criteria. First, we identified and included agencies that reported at least 1 year of complete crime data—that is, 12 months of crime data in a given year—to the UCR from 1990 through 2001, and we included agencies only in the years in which they provided complete crime data. Second, we excluded from our analysis agencies that the UCR classifies as “zero-population” agencies. To avoid double counting of citizens within geographic areas, the UCR program assigns population counts only to the primary law enforcement agency within each jurisdiction. Consequently, transit police, park police, university police, and similar agencies that are contained within these jurisdictions are assigned a value of zero for population. Because the jurisdictions of zero-population agencies overlap with those of primary agencies, calculation of precise per capita crime rates for these nonprimary agencies is problematic. Many state police agencies also enforce laws among populations that are policed by other local agencies, which also makes calculating per capita crime rates for state police agencies problematic.
Additionally, given that state police agencies often have multiple substations in varied locations throughout the state, the correct allocation of the proportion of federal dollars to each substation is unknown. As a result, we excluded zero-population and state police agencies from our analysis. Further, we included in our analysis agencies whose crime records we were able to merge-match and link with OJP financial data about COPS and other federal law enforcement grant expenditures, as well as link with Bureau of Economic Analysis and Census data on economic and population characteristics. Overall, we identified 13,133 agencies that provided complete crime data for at least 1 year from 1990 through 2001, that were not zero-population agencies, and that we were able to link to our other data sources. For example, in 1990, we found 10,160 agencies out of 17,608 that met our conditions. These 10,160 agencies represented 57.7 percent of the agencies that were included in the 1990 data that we obtained from the FBI, but they contained 93.2 percent of the crimes included in the 1990 data. That our 1990 sample represented about 58 percent of all agencies but 93 percent of all crimes indicates that most of the agencies excluded by our complete-crime-data criterion were small agencies that contributed relatively little crime to the national total. For 2001, the 9,733 agencies that reported complete crime data and were not zero-population agencies represented 49.1 percent of all agencies in the UCR data in 2001 and covered 94.8 percent of all crimes (table 3). In our analysis of officers and crime, we further limited our sample to agencies serving populations of 10,000 or more persons.
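The selection criteria described above can be sketched as a simple filter over agency records. The records and field names below are hypothetical; our actual selection also required successful linkage across the data sources.

```python
# Minimal sketch of the sample-selection criteria for the officer and
# crime analysis: complete (12-month) crime reporting, a served
# population of 10,000 or more, and no zero-population or state police
# agencies. All agency records are hypothetical.

def in_analysis_sample(agency):
    return (agency["months_reported"] == 12
            and agency["population"] >= 10_000
            and not agency["is_state_police"])

agencies = [
    {"months_reported": 12, "population": 50_000, "is_state_police": False},
    {"months_reported": 9,  "population": 80_000, "is_state_police": False},  # incomplete data
    {"months_reported": 12, "population": 0,      "is_state_police": False},  # zero-population
    {"months_reported": 12, "population": 4_000,  "is_state_police": False},  # under 10,000
    {"months_reported": 12, "population": 50_000, "is_state_police": True},   # state police
]
sample = [a for a in agencies if in_analysis_sample(a)]
print(len(sample))  # 1: only the first agency satisfies every criterion
```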
Complete crime data for agencies serving populations of fewer than 10,000 persons were missing for a large percentage of agencies, and we determined that the data for these smaller agencies were unreliable for the purposes of this report. In 1990, we found 4,051 agencies serving populations of 10,000 or more persons, which represented 23 percent of the agencies included in the data that we received from the UCR for 1990 but also represented 86.8 percent of the crimes (table 3).

Data Used in Our Analysis of Reported Changes in Policing Practices

To assess changes in reported policing practices, we analyzed data from two separate surveys of nationally representative samples of local law enforcement agencies. The surveys asked key officials at agencies about the types of policing practices that they reportedly used. Both surveys consisted of two administrations, or waves, of observations on the agencies in their respective samples. The first survey, the National Survey of Community Policing Strategies (or Policing Strategies Survey), was administered in 1993 and again in 1997. A total of 1,269 agencies in the 1993 and 1997 samples responded to both waves of the survey. We limited our analysis to the 1,188 agencies that had complete data on each of the policing practices items that we included in our analysis and that we were able to link to our larger database on crime, officers, money, and economic conditions. These agencies amounted to about 94 percent of the agencies that responded to both waves of the survey. For comparability with our analysis of the effects of COPS grants on officers and crime, we limited our analysis to the sample of agencies that served jurisdictions with populations of 10,000 or more persons. The second survey, which we call the National Evaluation of COPS Survey, was conducted by the National Opinion Research Center for the Urban Institute in its national evaluation of the implementation of the COPS program.
Of the 1,270 agencies that responded to both the 1996 and 2000 administrations of the survey, we were able to link the data from 1,067 agencies to our larger database on crime, officers, money, and economic conditions. We restricted our analysis to agencies that served jurisdictions having populations of 10,000 or more persons, and we excluded from our analysis state police agencies and other special police agencies. (See app. VII for more information about the sample of agencies that we analyzed.)

Reliability and Validity of the Data That We Used

Prior to developing our database, we assessed the reliability of each data source. To assess the reliability of the various data sources, we (1) performed electronic testing for obvious errors in accuracy and completeness; (2) reviewed related documentation, including data dictionaries, codebooks, and published research reports that made use of the data sources; and (3) worked closely with agency officials to identify any data problems. When we found discrepancies (such as nonpopulated fields or what appeared to be data entry errors), we brought them to the agencies’ attention and worked with them to correct the discrepancies before conducting our analyses. We determined that the data were sufficiently reliable for the purposes of our report. In our regression analysis of the effects of COPS expenditures on crime, we use the UCR reported crime rates as our dependent variables. Crimes reported to the UCR, or reported crimes, are those brought to the attention of law enforcement agencies and subsequently submitted to the UCR. Reported crimes are a subset of all crimes committed, which is the sum of reported crimes plus crimes that are not reported to the police. Our ultimate interest, however, lies in determining whether COPS expenditures affected the crime rate for all crimes, whether or not they were reported to the UCR. This raises issues related to analyzing reported crimes to learn about all crimes.
Because data on all crimes—reported and unreported—committed within local jurisdictions are unavailable in national data systems, we use the data on reported crimes. The nature of the relationship between reported crimes and all crimes therefore determines whether the results of our analysis of COPS expenditures on reported crime would lead to biased estimates of the effects of COPS expenditures on all crimes. Under certain circumstances, it is possible that our analysis of the effects of COPS on the reported crime rate could lead to overestimates of the effect of COPS on the crime rate for all—reported plus unreported—crimes. This would lead us to overstate the effect of COPS in reducing crime. Several conditions could lead to overestimates of the effects of COPS expenditures on reducing crime. If the reported crime rate and the crime rate for all crimes diverge, we would attribute to COPS a larger reduction in crime than is warranted. If these crime rates diverge, the reported crime rate would either decline at a faster rate or increase at a slower rate than the rate for all crimes, and our analysis of the effects of COPS on the reported crime would reveal either larger declines or smaller increases than would occur if we had data on the rate for all crimes. A divergence between the reported crime rate and rate for all crimes could arise for either or both of two reasons: Citizens do not report all of the crimes they experience to the police, or the police do not record and send to the UCR all of the crimes that citizens report to them. To assess whether citizens decreased the rate at which they reported crimes to the police, we reviewed data from the National Crime Victimization Survey (NCVS). These data are drawn from a nationally representative sample of households and are gathered independently of the police agencies that report crime to the UCR. They therefore provide a measure of crime that is independent of the reporting practices of police agencies. 
Respondents in the NCVS are asked about their experiences as victims of crimes. If respondents were victims of crime, they are asked if they or others reported the criminal victimization to the police. Using the NCVS data, it is possible to assess whether the rate at which citizens report crimes to the police has changed over time. These data show that during the 1990s, victims generally increased the rate at which they reported crimes to the police. As figure 5 shows, the decline in violent crime over the decade was steeper for all crimes reported in the survey than for the violent crimes reported to the police. Consequently, because the rates converged rather than diverged, victims’ practices of reporting crime to the police during the 1990s are not likely to lead us to overestimate the effects of COPS grants on the crime rate. For police recording practices to lead to overestimates of the effects of COPS grants on crime, it would be necessary for the agencies that received COPS grants to decrease the rate at which they recorded and reported crimes to the UCR. Research on police recording practices suggests, first, that agencies are unlikely to underreport serious crimes, such as murder, rape, robbery, and aggravated assault. Second, other studies found that as police agencies adopted computer technology and became more sophisticated in recording crimes, they became more likely to include all citizen-reported crimes in their UCR submissions. As COPS MORE grants provided funds for technology—such as laptop computers in police cars—that would have increased the level of sophistication within agencies, COPS grantee agencies would be more likely to report a larger percentage of the crimes that citizens drew to their attention.
Consequently, changes in police reporting practices that stem from COPS grants and lead to increases in police reporting of crimes to the UCR are likely to lead us to underestimate the magnitude of effects of COPS grants on reducing crime. Two other conditions that could affect our estimates include the following: (1) Criminals who commit the crimes that are not reported to the police are unresponsive to the effects of COPS expenditures, and (2) as the number of police increase, the number of reported crimes increases, independently of the true crime rate. If criminals who commit crimes that go unreported to the police are unresponsive to police presence, then we would overestimate the effects of COPS on crime only if criminals changed their behavior to victimize more persons who would be unlikely to report crimes to the police. This appears to be an unlikely occurrence, as the NCVS data show a convergence between the total number of criminal victimizations, especially for violent crimes, and the number of crimes reported to the police. Second, if the size of the police force systematically affects the willingness of victims to report crime to the police or a police department’s likelihood of recording and reporting to the UCR crime victims’ reports, then these changes could lead to biased estimates of the impact on the crime rate. However, if changes in reporting behaviors occurred as the result of the COPS program, the likely impact on our estimates of the effect of COPS grants on crime through their effects on the number of officers is that we would underestimate the effects of the grants on crime. Given these considerations, our analysis of the effects of COPS expenditures on crime is more likely to underestimate than overestimate the effect of COPS funds on changes in the true crime rate. 
Appendix II: Background on the COPS Program and Studies of the Impacts of COPS Grants on Crime

Established in October 1994 by the Attorney General to implement the administration of community policing grants under the Violent Crime Control and Law Enforcement Act (VCCLEA) of 1994, the Office of Community Oriented Policing Services announced its first grant program in November 1994. Prior to its establishment, in December 1993 the Department of Justice began making community policing grants to state and local law enforcement agencies, grants that the COPS Office later monitored. In 1993, DOJ awarded community policing grants under the Police Hiring Supplement Program, which was established by the Supplemental Appropriations Act of 1993 (P.L. 103-50 (1993)). The grants made under this program were funded by DOJ’s Bureau of Justice Assistance. Two goals of the COPS Office were to advance community policing by providing funding for 100,000 community policing officers and to promote the practice of community policing, an approach to policing that involves the cooperation of law enforcement and the community in identifying and developing solutions to crime problems. COPS attempted to achieve these goals by providing law enforcement agencies with grants to hire officers, purchase equipment, and implement innovative policing practices.

COPS and Other Local Law Enforcement Grants Distributed throughout the 1990s

According to our analysis of Office of Justice Programs data, from 1994 through 2001, the COPS Office distributed more than $7.6 billion in grants. Grants were made in a variety of grant program funding categories. Table 2 in appendix I contains more information about these funding categories. The largest amount of COPS grant funds obligated—about $4.8 billion, or 64 percent of the total—was in the form of hiring grants. These grants required agencies to hire new officers and at the same time to indicate the types of community policing strategies that they intended to implement.
Hiring grants paid a maximum of $75,000 per officer over a 3-year period (or at most 75 percent of an officer’s salary) and generally required that local agencies cover the remaining salary and benefits with state or local funds. Hiring programs authorized under VCCLEA and administered by the COPS Office included the Phase I program, which funded qualified applicants who had applied for the Police Hiring Supplement but were denied because of the limited funds available; COPS AHEAD (Accelerated Hiring, Education, and Deployment) for municipalities with populations of 50,000 and above; and COPS FAST (Funding Accelerated for Smaller Towns) for towns with populations below 50,000. In June 1995, Phase I, COPS AHEAD, and COPS FAST were replaced by the Universal Hiring Program. The next largest grant category was the Making Officer Redeployment Effective (MORE) grant program, which provided funds to law enforcement agencies to purchase equipment and hire civilians, with the goal of expanding the amount of time spent on community policing. COPS obligated more than $1.3 billion—or about 17 percent of total obligations—as MORE grants. Additional COPS grant programs provided funds for specific innovations in policing. For example, the Distressed Neighborhoods Pilot Project grants provided funds to communities with high levels of crime or economic distress to hire officers and implement a variety of strategies to improve public safety, and the Methamphetamine Initiative provided funds to state and local agencies to support a variety of enforcement, intervention, and prevention efforts to combat the methamphetamine problem. About $418 million—or about 5.5 percent of the total—was obligated under these innovative grant programs.
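Assuming the 75 percent limit applies to an officer's 3-year salary and benefits, the hiring grant payment rule described above can be sketched as follows; the salary figures are hypothetical.

```python
# Minimal sketch of the hiring grant payment rule: the federal share is
# at most 75 percent of an officer's 3-year salary and benefits, capped
# at $75,000 per officer (an assumption about how the two limits
# interact; salary figures are hypothetical).

def federal_share(three_year_salary_and_benefits):
    return min(0.75 * three_year_salary_and_benefits, 75_000.0)

print(federal_share(90_000.0))   # 67500.0: the 75 percent limit binds
print(federal_share(150_000.0))  # 75000.0: the $75,000 cap binds
```

In either case, the local agency covers the remainder with state or local funds, as the report notes.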
The COPS Office also provided grants for a variety of other purposes, including funding to meet the community policing training needs of officers and representatives of communities and local governments (through a network of Regional Community Policing Institutes), and grants to law enforcement agencies to hire and train school resource officers to help prevent school violence and improve school and student safety (the COPS in Schools Program). Over $1 billion—or about 14 percent of total obligations—was obligated among these miscellaneous grant programs. In each year, the COPS Office was required to distribute half of the grant funds to agencies in communities whose populations exceeded 150,000 persons and half of the grant funds to agencies in communities with populations of 150,000 or fewer persons. During the 1990s, other federal law enforcement grant programs also provided funds to state and local law enforcement agencies for hiring officers and other crime prevention purposes. The Edward Byrne Memorial State and Local Law Enforcement Assistance (Byrne Formula Grant Program) was a variable pass-through grant program administered by the Bureau of Justice Assistance (BJA). According to our analysis of data that we obtained from OJP, from 1990 through 2001, the Byrne Formula Grant Program distributed between $415 million and $520 million in grants. States were required to pass through to local jurisdictions amounts of funding based upon a variable pass-through formula. Byrne Formula Grant funds could be used to provide for personnel, equipment, training, technical assistance, and information systems, among other purposes. According to an evaluation of the Byrne formula grant program, about 40 percent of Byrne subgrant funds—the amounts passed through the states to local jurisdictions—were for multijurisdictional task forces. In addition to the formula grant program, there also was a Byrne discretionary grant program. 
According to an official at the Bureau of Justice Statistics (BJS), a large percentage of the Byrne discretionary funds were targeted for specific programs. The Local Law Enforcement Block Grant (LLEBG) Program was also administered by BJA. LLEBG grant funds averaged about $475 million per year from 1996 through 2000. According to BJS officials, these funds were allocated by a formula based upon violent crimes as reported in the FBI’s crime index. LLEBG funds were available to local governments for hiring law enforcement officers, paying overtime, and purchasing equipment, as well as for several other purposes. According to the Urban Institute’s evaluation of the implementation of the COPS program, agencies that received COPS grants reported using both Byrne and LLEBG funds to support their transitions to community policing. Additional grant programs that provided funds to local law enforcement agencies included the Juvenile Accountability Incentive Block Grants, Weed and Seed Grants, and several Office on Violence Against Women grants, according to a BJS official.

Debates over whether the COPS Office Met Its Goals for Officers and Promoted Community Policing

The amount of COPS funding was more than sufficient to fund the federal portion for 100,000 officers. According to the Attorney General’s report, from 1994 through 2000, the COPS Office awarded more than 30,000 grants to over 12,000 law enforcement agencies and funded more than 105,000 community policing officers. However, a research report by the Heritage Foundation questioned how effective the COPS Office had been in putting 100,000 officers on the street. That study analyzed trends in the number of officers and concluded that the COPS program had not added 100,000 officers above historic trends.
In its review of the COPS Office’s performance for the fiscal year 2004 budget, the Office of Management and Budget (OMB) indicated that by 2002, COPS grant funding was sufficient for almost 117,000 officers, a number that exceeded the program’s original commitment to fund 100,000 officers. At the same time, OMB acknowledged that fewer than 90,000 officers had been hired or redeployed to the street. OMB reported that the COPS Office counted 88,028 COPS-funded officers on duty as of August 2002—or about 75 percent of funded officers. In their October 2002 report on the COPS program, researchers at the Urban Institute updated earlier estimates of COPS-funded officers. They projected that from 1994 through 2005, COPS funding would temporarily add between 93,400 and 102,700 officers to the nation’s communities, but that not all of these officers would be available for service at any one point in time. They further estimated that the permanent impact of COPS, after taking into account postgrant attrition of officers and civilians, would be between 69,100 and 92,200 officers. In addition to promoting the hiring of officers, the COPS Office sought to promote community policing. COPS hiring grant applications asked agencies to report the types of practices that they planned to implement with their grants, such as identifying crime problems by looking at records of crime trends and analyzing repeat calls for service, working with other public agencies to solve disorder problems, locating offices or stations within neighborhoods, and collaborating with community residents by increasing officer contact with citizens and improving citizen feedback. In 2000, the Attorney General reported that 87 percent of the country was served by departments that practiced community policing.
Studies that have addressed the extent to which COPS Office grants caused the spread of community policing suggest that COPS grants accelerated the adoption of these practices but did not launch the spread of community policing. The Police Foundation’s study of community policing practices during 1993—1 year before the COPS Office began making grants—indicated that the practice of community policing was already fairly widespread, especially in larger police departments. The Police Foundation researcher found that 47 percent of the agencies surveyed in 1993 reported that they either were in the process of adopting or had adopted community policing, while 86 percent of municipal agencies with more than 100 sworn personnel were either in the process of implementing or had implemented community policing. In their evaluation of the implementation of the COPS program, Urban Institute researchers credited COPS with promoting community policing, but they concluded that COPS funds seemed to have fueled movements that were already accelerating rather than having caused the acceleration. In a later report, they pointed out that for large agencies, the problem-solving practices that they examined were already widespread by 1995, and almost no COPS grantees reported adopting problem-solving practices for the first time between 1998 and 2000. Some of the types of practices that agencies planned to implement with their COPS grants correspond with approaches to policing that recent reviews of policing practice suggest are effective in preventing crime. For example, our review of policing practices indicates that problem-solving policing and place-oriented policing practices—such as those in which officers attempt to identify the locations where crime occurs repeatedly and to implement procedures to affect crime—are among the types of practices that research has demonstrated to be effective in preventing crime.
These practices were among the types that agencies could implement with their COPS grants.

Debates about COPS’ Contribution to the Decline in Crime in the 1990s

In 2000, the Attorney General reported that COPS-funded officers helped to reduce crime. The Attorney General’s report to Congress asserted that the drop in crime that occurred after 1994 was more than would have been expected in the absence of the passage of VCCLEA and the creation of the COPS Office. As evidence of the impact of COPS grants on crime, it proffered the inverse relationship between increases in the per agency number of police officers and decreases in the per agency levels of violent crimes. Studies of the impact of COPS grants on crime that attempted to take into account factors other than just the underlying trends in crime were released in 2001. A COPS Office-funded study examined the impact of COPS grants on local crime rates in over 6,000 communities from 1995 through 1999. Analyzing changes in crime rates in communities that had received COPS grants, the study concluded that COPS hiring grants were effective in reducing crime and that COPS grants for innovative policing practices had larger impacts on reducing violent and property crime than did other types of COPS grants. However, a study released by the Heritage Foundation, which was based upon the analysis of county-level data, was unable to replicate the findings of the COPS-funded study. Specifically, the Heritage study found no effect of COPS hiring grants on crime rates, but it found that grants for specific problems—such as gangs, domestic violence, and illegal use of firearms by youth—were associated with reductions in crime. In addition, our review of the COPS-funded study found that its methodological limitations were such that the study’s results should be viewed as inconclusive. The inconclusiveness of these findings was reflected in OMB’s assessment of the performance of the COPS program.
According to OMB, although the COPS Office used evaluation studies to assess whether its grants had an impact on crime, the results of the findings were inconclusive, and OMB rated the COPS program as “Results Not Demonstrated” in 2004 using its Program Assessment Rating Tool (PART).

Issues in Assessing the Contribution of COPS Grants to the Decline in Crime in the 1990s

Assessing whether COPS funds contributed to the decline in crime during the 1990s is complicated by many factors. Nationwide, the decline in crime began before 1993, which was before the COPS program made its first grants. According to the FBI’s data on index crimes—the violent crimes of murder, rape, aggravated assault, and robbery and the property crimes of burglary, larceny, and motor vehicle theft—the decline in the overall index crime rate, as well as in the property and violent crime rates, started as early as 1991 or 1992 (fig. 6). Because COPS grants cannot have caused the start of the decline in crime rates, the other factors that initiated the decline could also have affected crime during the period in which the COPS Office made its grants. Factors such as a downturn in handgun violence, the expansion of imprisonment, a steady decline in adult violence, changes in drug markets, and expanding economic opportunities are among those suggested as related to the decline in crime—especially violent crime—in the 1990s. To the extent that these factors are also correlated with the disbursement of COPS funds, isolating the effects of COPS grants becomes more challenging. Other federal funds for local law enforcement could also have contributed to expanding the number of police officers and to declines in crime.
If the distribution of non-COPS funds such as LLEBG and Byrne grants is correlated with that of COPS funds, and if research does not take these funds into account, a study could attribute some of the effect of these other grant funds on crime to COPS grants. COPS grants were also distributed in ways that make rigorous evaluations of their causal impacts difficult to implement. Receipt of a COPS grant was not randomly assigned; therefore, it is difficult to determine whether the agencies that received grants would have experienced reductions in crime even in the absence of the grants. The amount of funding an agency receives may also relate to its ability to combat crime. For example, certain police chiefs may be more capable than others at acquiring funds and also more up-to-date on policing methods. An agency’s underlying capacity to organize policing, rather than the receipt of a particular grant, would then be the cause of a crime decline. Additionally, COPS grants were fairly widespread among police departments and the nation as a whole. This distribution of grants leaves relatively few unfunded agencies to serve as comparison groups against which to assess the performance of the agencies that received COPS grants. The roughly 12,000 agencies that the former Attorney General reported received COPS grants by 2000 represent about 61 percent of the agencies that reported crime to the Uniform Crime Reports. Finally, the mechanisms by which COPS funds could affect crime have not been explicitly examined. For example, the two prior studies that we cited did not examine whether COPS grants affect crime through changes in the number of police officers or through changes in policing practices, both of which may have been affected by COPS funds.
Additional officers may affect crime by increasing police presence, by increasing arrests that lead to incapacitation of offenders, or by deterring offenders through an increased likelihood of capture. Changes in policing practices toward problem-solving or place-oriented practices that focus police resources on recurring crime problems could also lead to reductions in crime. Appropriate methodologies from research on crime have been developed to address issues that could confound efforts to assess the impacts of COPS grants on crime rates. For example, if COPS grants affect crime through their impacts on the number of officers, then isolating the effect of additional officers on crime requires determining the direction of the relationship between officers and crime. If additional officers are hired in response to increases in crime rates, crime could appear to cause increases in officers. Alternatively, if additional officers lead to reductions in crime below the levels that would have prevailed without them, officers would appear to cause changes in crime. To isolate the causal effect of COPS grants, researchers use instruments for the causal variables. One suggestion in the research literature for an instrument for police officers is COPS hiring grants. To the extent that COPS hiring grants buy only officers, they can be used as an instrument for the actual number of police officers and therefore be used to estimate the relationship between crime and police officers in a way that takes into account the possibility of this simultaneous relationship. Second, particular forms of statistical models take advantage of information about the variation in the amount and timing of COPS grants among agencies to assess how changes in the number of sworn officers and crime rates are associated with these two sources of variation.
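The instrumental-variables logic described above can be sketched with simulated data. Everything below is hypothetical and purely illustrative (the coefficients, noise, and variable names are assumptions, not the report's data): an instrument (grant dollars) that shifts officers but affects crime only through officers lets two-stage least squares recover the officer effect that a naive regression misses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: grant dollars shift officers but affect crime only
# through officers; unobserved "demand" confounds officers and crime.
grants = rng.uniform(0, 10, n)           # instrument (grant dollars, scaled)
demand = rng.normal(0, 1, n)             # unobserved local crime pressure
officers = 2.0 * grants + 1.5 * demand + rng.normal(0, 1, n)
crime = -0.5 * officers + 3.0 * demand + rng.normal(0, 1, n)  # true effect: -0.5

def ols_slope(x, y):
    """Slope from a simple one-regressor OLS fit with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(officers, crime)       # biased toward zero by 'demand'
stage1 = ols_slope(grants, officers)     # first stage: grants -> officers
fitted = stage1 * grants                 # officers predicted from instrument
iv = ols_slope(fitted, crime)            # second stage: IV estimate

print(f"naive OLS: {naive:.2f}, IV (2SLS): {iv:.2f}, truth: -0.50")
```

Because the unobserved crime pressure raises both officers and crime, the naive slope understates the crime-reducing effect of officers, while the instrumented estimate recovers it.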
These fixed-effects regression models use a panel of data—repeated observations on the same units (in this case, police agencies) over several time periods—to assess the effects of changes in the number of sworn officers and crime rates that are associated with variation in the timing and amount of COPS grant expenditures. These regression methods also allow for the introduction of controls for unobserved preexisting differences between units (agencies) and differences over time within units. Incorporating each agency’s underlying trajectory (or growth rate trend) in crime rates and sworn officers into the modeling of the effects of COPS funds allows for explicit comparisons within groups of agencies sharing similar trajectories, which helps to control for potential biases associated with preexisting trends. By identifying and explicitly modeling the mechanisms through which a program could have its effects—such as COPS funds leading to increases in the number of officers and the officers’ effects on crime—the possibility of a spurious relationship between inputs (such as COPS funds) and outcomes (such as crime) can be minimized.

Appendix III: COPS Grant Obligation and Expenditure Patterns

This appendix addresses how COPS obligations were distributed among local law enforcement agencies in relation to the populations they served and the crimes in their jurisdictions. It also addresses how much of the obligated amounts agencies spent. Specifically, it covers (1) the amount of COPS obligations between 1994 and 2001, (2) the distribution of grant funds to larger and smaller agencies relative to total index and violent crimes, (3) the number of agencies in our sample that received COPS grants, (4) the amounts of COPS expenditures, and (5) the amount of these expenditures relative to total local law enforcement expenditures.
Smaller Agencies Received Larger Amounts of COPS Obligations per Crime than Did Larger Ones

Our analysis showed that from 1994 through 2001, COPS obligated more than $7.32 billion to the 10,680 agencies for which we were able to link OJP financial data on COPS obligations to the records of law enforcement agencies. As shown in table 4, about $4.7 billion (or 64 percent) of these obligations were for hiring grants. Equipment and redeployment grants made under the MORE category amounted to about $1.2 billion (or about 17 percent) of total obligations. As shown in table 5, from 1994 through 2001, among the agencies for which we were able to link OJP financial data to the records of agencies that reported crime and population to the FBI’s Uniform Crime Reporting Program, slightly more than half of COPS obligations went to agencies serving populations of 150,000 or fewer persons and slightly less than half went to agencies serving populations of more than 150,000 persons, roughly consistent with the requirements of the COPS authorizing legislation. The largest agencies—those serving populations of more than 150,000 persons—accounted for more than half of all violent crimes reported to the UCR. Specifically, in our sample, these agencies accounted for about 58 percent of all violent crimes reported in the UCR from 1994 through 2001. Their share of all violent crimes declined slightly, from 60 percent from 1994 through 1997 to 57 percent from 1998 through 2001. These agencies received about 47 percent of all COPS obligations, a share that is disproportionately small relative to their contribution to all violent crimes. However, as shown in table 5, the distribution of COPS obligations between agencies serving populations of 150,000 or fewer persons and those serving populations of more than 150,000 persons was about equal to the distribution of all index crimes occurring within these agencies.
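Per-crime obligation figures of this kind reduce to dividing a group's total obligations by its reported violent crimes. The group totals below are hypothetical placeholders (not the report's underlying data), chosen only so the quotients land near the report's cited per-crime figures:

```python
# Hypothetical group totals, used only to illustrate the per-crime computation;
# these are NOT the report's underlying data.
groups = {
    "populations under 10,000": {"obligations": 1_100_000_000, "violent_crimes": 700_000},
    "populations over 150,000": {"obligations": 3_400_000_000, "violent_crimes": 8_100_000},
}

# Dollars of COPS obligations per reported violent crime, by group.
per_crime = {name: g["obligations"] / g["violent_crimes"] for name, g in groups.items()}
for name, dollars in per_crime.items():
    print(f"{name}: ${dollars:,.0f} per reported violent crime")
```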
Table 6 shows that law enforcement agencies serving the smallest populations received the largest amounts of COPS obligations on a per crime basis. For example, agencies serving populations of fewer than 10,000 persons received, on average, $1,573 per violent crime reported from 1994 through 2001. By comparison, agencies serving populations of more than 150,000 persons received $418 per reported violent crime.

Most Agencies Had Received Their First COPS Grant by 1996

As shown in table 7, of the 10,680 agencies included in our analysis, just under half (49 percent) had received at least their first COPS grant by 1995, and 71 percent had received at least their first grant by 1996. Of the 9,845 agencies that received at least one COPS hiring grant, 53 percent had received their first hiring grant by 1995, and 73 percent had done so by 1996. We estimated that about 67 percent of the agencies that reported complete crime data to the UCR for at least 1 year from 1990 through 2001 received a COPS grant by 2001. The percentages of agencies that received COPS grants varied by the size of agencies, as measured by the size of the population in the jurisdictions served by the agencies. As table 8 shows, as the population served by the agencies increased, the percentage of agencies that received a COPS grant also increased. Among the largest agencies—those serving populations of more than 150,000 persons—about 95 percent received a COPS grant. By comparison, among agencies serving populations of fewer than 10,000 persons, about 61 percent in our sample of agencies received at least one COPS grant.

Total COPS Expenditures and Per Capita Expenditures Peaked in 2000, and Smaller Agencies Spent More than Larger Ones on a Per Capita Basis

By 2001, agencies had drawn down about $5 billion in COPS funds (or roughly 68 percent of all obligations awarded from 1994 through 2001). As figure 7 shows, total COPS expenditures increased annually from 1994 to 2000.
Total expenditures exceeded $900 million per year in each year from 1998 through 2001, and in 2000, they exceeded $1 billion. COPS hiring grant expenditures totaled $3.5 billion (or roughly 70 percent of the roughly $5 billion in hiring grant obligations made from 1994 through 2001). Hiring grant expenditures peaked in 1998—exceeding $690 million—and declined slightly in 1999 and 2000. The number of agencies that spent COPS funds peaked in 1998 and declined thereafter, as figure 8 shows. In 1998, more than 7,500 agencies were spending COPS funds. However, by 2001, the number had fallen to about 6,000. COPS expenditures per population in the jurisdictions that spent funds—per capita expenditures—also increased as the total amount of COPS expenditures increased. Total per capita COPS expenditures peaked in 2000 at $5.60 per person. Hiring grant expenditures per capita similarly peaked at $4.80 per person in 2000. The per capita expenditure amounts varied by size of agency, as smaller agencies generally spent more on a per capita basis than did larger agencies. Agencies serving populations of fewer than 10,000 persons spent about twice as much in COPS grant monies on a per capita basis as did the larger agencies. For example, per capita COPS expenditures for agencies serving fewer than 10,000 persons averaged $6.60, compared with about $3.40 for agencies serving populations of more than 150,000 persons.

COPS Expenditures Amounted to about 1 Percent of All Local Law Enforcement Expenditures

From 1994 through 2001, COPS expenditures amounted to about 1 percent of total local expenditures for nationwide police services, based upon BJS data on criminal justice expenditures and our analysis of OJP data on COPS grant expenditures. From 1994 through 2001, total local expenditures for police services increased from about $46 billion to $72 billion.
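The expenditure-share comparison is a one-line calculation. Both figures below are assumptions for illustration: local police spending of $70 billion is an interpolation between the report's 1994 and 2001 endpoints, and $1.05 billion approximates peak-year COPS spending.

```python
# Hypothetical single-year figures, used only to illustrate the share math.
cops_expenditures = 1.05e9            # assumed peak-year COPS spending
local_police_expenditures = 70e9      # assumed nationwide local police spending

# COPS spending as a percentage of total local spending on police services.
share_pct = 100 * cops_expenditures / local_police_expenditures
print(f"COPS share of local police expenditures: {share_pct:.1f}%")
```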
During the years from 1998 through 2000, when COPS expenditures neared and then exceeded $1 billion per year, the contribution of COPS expenditures to local police expenditures increased to about 1.5 percent of total local expenditures for police services.

Appendix IV: COPS Expenditures Led to Increases in Sworn Officers and Declines in Crime

This appendix addresses our second reporting objective, which has two parts: determining the extent to which COPS grant expenditures contributed to increases in the number of sworn officers in police agencies, and determining the extent to which COPS grant expenditures led to reductions in crime through their effects on sworn officers.

COPS Expenditures Led to Increases in Sworn Officers above Levels That Would Have Been Expected without Them and Were Responsible for about 88,000 Officer-Years

We found that COPS hiring grants were significantly related to increases in sworn officers above levels that would have been expected without the expenditures, after controlling for economic conditions in the counties in which agencies were located, population composition, and preexisting trends in agencies’ growth rates of sworn officers. Further, the effects of COPS hiring grants were consistent across several different regression models, including those that controlled for state-level factors that could affect the size of local police forces—such as state-level differences in the amount of funding provided to local departments. Overall, the parameter estimates from our models indicate that each $25,000 in COPS hiring grant expenditures was associated with roughly an additional 0.6 officers in any given year. With the exception of MORE grants, no other types of COPS grant expenditures were associated with increases in officers.
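A minimal sketch of the fixed-effects panel approach behind estimates like these, on simulated data: everything below is hypothetical (grant spending is in assumed $25,000 units with the true effect set at 0.6 officers per unit), and agency and year means are removed via the two-way within transform so that agency baselines and common time trends do not bias the slope, unlike a pooled regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agencies, n_years = 200, 8

# Simulated panel: agency baselines and a common time trend confound a
# pooled regression; grant spending is in hypothetical units of $25,000.
agency_fe = rng.normal(50, 10, n_agencies)[:, None]   # baseline sworn officers
year_fe = np.linspace(0, 3, n_years)[None, :]         # common growth over time
cops = rng.uniform(0, 5, (n_agencies, n_years)) + 0.3 * agency_fe
officers = agency_fe + year_fe + 0.6 * cops + rng.normal(0, 1, (n_agencies, n_years))

def demean(x):
    """Two-way fixed-effects transform: remove agency and year means."""
    return x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()

x, y = demean(cops).ravel(), demean(officers).ravel()
fe_slope = (x @ y) / (x @ x)                          # within estimator
pooled = np.polyfit(cops.ravel(), officers.ravel(), 1)[0]

print(f"pooled slope: {pooled:.2f}, fixed-effects slope: {fe_slope:.2f} (truth: 0.60)")
```

Because larger agencies receive more grant dollars in this simulation, the pooled slope is badly inflated; demeaning recovers the within-agency effect.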
Using the results from our regression models, we calculated for each year from 1994 through 2001 the number of sworn officers nationwide that would have been on the street absent the COPS expenditures in that year. The difference between this amount and the actual level of sworn officers yielded the number of officers due to COPS expenditures in a given year. The number of officers due to COPS increased from 84 in 1994 to 17,387 in 2000, and then declined to 12,226 in 2001 (table 9). The increase and decrease in the number of officers due to COPS followed the pattern of COPS expenditures, which peaked in 2000 and then declined (see fig. 7 in app. III). Adding up the number of officers due to COPS in each year from 1994 through 2001, we arrive at a total of about 88,000 sworn officer-years due to COPS expenditures. From 1997 through 2000, when COPS expenditures neared or exceeded $1 billion per year, we estimated that the expenditures led to increases in sworn officers of between 2.4 percent and 2.9 percent above levels expected without them. In years prior to 1997, and in 2001, when COPS expenditures were lower, the percentage of officers due to COPS expenditures was also lower than from 1997 through 2000. An officer-year represents one officer serving for one year; by this measure, an individual officer—or person—might be included in our counts of officers due to COPS in several years. Therefore, our estimate of the total number of officer-years arising from COPS expenditures is not equivalent to the number of officers that the COPS Office reportedly funded, nor does it represent an estimate of the total number of officers added as a result of COPS grants. For a given year, however, our estimate represents the number of COPS-funded officers on the street. (For additional details on the methods we used to estimate the effects of COPS expenditures on officers, see app. VI.)
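The officer-year tally works by summing, across years, the estimated number of COPS-attributable officers in each year. In the sketch below, only the 1994, 2000, and 2001 values come from table 9; the intervening years are hypothetical placeholders, so the total is merely in the neighborhood of the roughly 88,000 officer-years reported.

```python
# Yearly estimates of officers attributable to COPS expenditures. Only the
# 1994, 2000, and 2001 values are from the report; the rest are hypothetical.
officers_due_to_cops = {
    1994: 84, 1995: 4_000, 1996: 8_500, 1997: 12_000,
    1998: 16_000, 1999: 17_000, 2000: 17_387, 2001: 12_226,
}

# An officer-year counts one officer serving in one year, so the same person
# funded for several years contributes several officer-years to the total.
officer_years = sum(officers_due_to_cops.values())
peak_year = max(officers_due_to_cops, key=officers_due_to_cops.get)
print(f"total officer-years: {officer_years:,}; peak year: {peak_year}")
```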
LLEBG Funds Also Contributed to Increases in Officer Strength

In addition to our findings of the effects of COPS expenditures on the level of sworn officers, we found that Local Law Enforcement Block Grant expenditures also contributed to increases in officers above levels expected without them. Our finding about LLEBG grants’ effects on sworn officers is consistent with interview and survey responses reported by Urban Institute researchers in their evaluation of the implementation of the COPS program: in their interviews, police chiefs reported that they used LLEBG funds to supplement COPS funds. LLEBG grants could be used for a variety of purposes in addition to funding officers.

COPS Expenditures Led to Reductions in Crime through Increases in Officers

Estimating the impact of COPS expenditures on changes in crime rates through their effects on the number of sworn officers, we found that COPS expenditures were associated with declines in crime rates for total, violent, and property crimes, as compared with their baseline levels in 1993, the year prior to the distribution of COPS grants. The amounts of decline in crime rates varied among crime types and across years. The variation among crime types arose from our estimates of the effects of changes in officers on crime rates, and the variation over time within crime types arose from the variation in COPS expenditures. For example, for the total crime rate, we found that the impact of COPS peaked in 1998: for that year, we estimated that COPS led to a reduction in the total crime rate of almost 1.4 percent from the level of crime in 1993. In 1999 and 2000, COPS expenditures of between $920 million and about $1 billion led to reductions in the total crime rate of about 1.3 percent, again as compared with the 1993 level.
In years prior to 1998 and in 2001, when COPS expenditures were lower than their levels in 1998 through 2000, the declines in total crime arising from COPS expenditures also were less than 1.3 percent (table 10). Similarly, for violent and property crimes, we found that the amount of decline associated with COPS expenditures varied from year to year, and for both of these crime categories, the largest decline occurred during 1998. COPS expenditures led to a decline in violent crime of almost 2.6 percent in 1998, compared with violent crime levels in 1993. For 1999 and 2000, COPS expenditures led to a reduction of about 2.4 percent in violent crime from the 1993 level. For property crimes, the impact of COPS expenditures from 1998 through 2000 was between 1.1 percent and 1.2 percent, as compared with the 1993 level (table 10). Our estimates of the impact of COPS expenditures on crime through their effects on the number of officers represent the effects of COPS expenditures on crime net of the effects of other factors that we controlled for in our model—including changes in economic conditions, population composition, and pre-COPS program trends in police agencies’ growth rates of sworn officers and crime. By controlling for pre-COPS program growth rates in officers and crime, we made comparisons between agencies within population size categories that had similar growth rates in officers and crime but that differed in the timing and amount of COPS expenditures. In addition, through the use of state-by-year fixed effects, we controlled for state-level factors that could affect crime rates, such as changes in sentencing policy or state incarceration. Because our estimates of the impact of COPS expenditures on crime derive, in part, from our estimates of the effects of changes in officers on crime, we compared those estimates of the officer-crime relationship with estimates that appear in recent research.
We found that each 1 percent increase in sworn officers was associated with about a 0.4 percent decline in total crime, about a 0.8 percent decline in violent crime, and a slightly less than 0.4 percent decline in property crime. Our estimates of this relationship—the elasticity of crime with respect to officers—are consistent with estimates that appear in recent literature on the effects of changes in police officers on changes in crime rates. For example, in a study that used COPS-granted officers to estimate the effect of increases in officers on crime, the authors reported an estimated elasticity for violent crime of –0.99 (a 1 percent increase in officers led to a 0.99 percent decline in violent crimes) and a property crime elasticity of –0.26. In another paper that used electoral cycles to estimate the effect of increases in officers on crime, the author provides a set of elasticities under different model specifications. The elasticity for property crimes was calculated to be about –0.3, and the elasticity for violent crimes was about –1.0. (See app. VI for more information on the methods that we used to calculate our elasticities and to estimate the impact of COPS expenditures on crime.)

Various Specifications of Our Regressions Yielded Consistent Findings about the Effect of COPS Expenditures on Crime

While we found that COPS expenditures were associated with reductions in total crime and in the violent and property crime categories, when we examined the effects of COPS expenditures on specific types of index crimes, we found significant reductions in murder, robbery, aggravated assault, burglary, and motor vehicle theft. We found a negative association between COPS expenditures and larceny, but this effect was not statistically significant. Finally, we found a positive but statistically insignificant association between COPS expenditures and rape. (See table 17 in app. VI.)
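The elasticities reported in this appendix act as simple multipliers from a percent change in officers to a percent change in crime. The sketch below is illustrative only: the property elasticity ("slightly less than 0.4") is rounded to 0.4, and the 2.9 percent officer increase is the upper end of the report's 1997-2000 range of COPS-attributable officer growth; pairing a single year's increase with these elasticities is an assumption made for the example.

```python
# Elasticities reported in this appendix (property rounded to 0.4 here).
elasticities = {"total": -0.4, "violent": -0.8, "property": -0.4}
officer_pct_increase = 2.9  # upper end of the 1997-2000 officer-increase range

# Percent change in each crime rate implied by the officer increase.
impacts = {kind: e * officer_pct_increase for kind, e in elasticities.items()}
for kind, pct in impacts.items():
    print(f"{kind} crime: {pct:+.2f}% for a {officer_pct_increase}% officer increase")
```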
Additionally, for agencies that served populations of 10,000 or more persons, we found that the effects of COPS expenditures on the total crime rate were consistent across agencies serving populations of varying sizes, with the exception of agencies serving populations of between 25,000 and 50,000 persons. In general, the magnitude of the effects tended to increase with agency size, where agency size refers to the population served by the agency. For agencies serving populations between 25,000 and 50,000, we observed a negative relationship between COPS expenditures and crime, but the estimated effect was not statistically significant. (See table 18 in app. VI.) Because any single regression model involves uncertainty, and point estimates derived from a single model can give misleading information, we estimated our regressions under different assumptions about how COPS expenditures could affect crime. Under the various models, we introduced lagged effects, nonlinear effects for COPS hiring grants, and effects for the year of receipt of COPS grants—to test whether the impact of COPS occurred in the years in which the money was spent. From the various specifications, we estimated the elasticity of crime with respect to officers. We found that the elasticity for total crimes ranged from –0.41 to –0.95. The elasticity that we used to calculate the impact of COPS on the decline in index crimes was –0.42, which is at the lower end of this range. Therefore, under assumptions different from the preferred specification about how COPS expenditures are related to officers and crime, we would arrive at a larger estimated impact of COPS on the decline in crime than we report above.
Also, under the varying assumptions about how COPS expenditures are related to crime, we estimated elasticities of violent crimes with respect to officers and elasticities of property crimes with respect to officers. For violent crimes, the elasticities derived from these regressions ranged from –0.76 to –1.8. The elasticity that we used to estimate the impact of COPS on the decline in violent crimes was –0.8. This elasticity is at the lower end of the range of elasticities that we estimated, which implies that the impacts of COPS on violent crimes could be larger than the impacts that we reported. For property crimes, the range of estimated elasticities was from –0.35 to –0.80. (See table 20 in app. VI.) In addition to our findings of the effects of COPS expenditures on crime, we found that LLEBG expenditures were consistently associated with declines in total crime rates and declines in the murder, rape, robbery, aggravated assault, burglary, and larceny crime rates. Only for motor vehicle theft did we not find a significant effect of LLEBG expenditures. However, because LLEBG grant funds are related to the levels of violent crime occurring within a jurisdiction, the relationship between LLEBG expenditures and crime may be one of bidirectional causality. By this, we mean that because LLEBG grant amounts were determined in part by the levels of violent crime, violent crime in a community can be construed as a cause of LLEBG grants in addition to an effect of having received them. (See table 17 in app. VI.) Factors other than COPS Expenditures Contributed Larger Amounts to the Reduction in Crimes, but COPS' Contribution Was in Line with COPS Expenditures The decline in crimes attributable to COPS expenditures accounted for at most about 10 percent of the total drop in crime from 1993 to 1998, and about 5 percent of the drop from 1993 to 2000.
Therefore, various factors other than COPS expenditures were responsible for the majority of the total decline in crime during the 1990s. While our regression models of the effects of COPS funds on crime controlled for the effects of many factors that could be related to the decline in crime, we did not attempt to estimate the amount that each of these factors individually had contributed to the overall drop in crime. Rather, by isolating the amount by which crime rates declined because of COPS and comparing that amount with the total decline in crime from our 1993 baseline year, we calculated COPS' contribution to the overall decline in crime. The amount of the total drop in crime not associated with COPS expenditures reflects the amount due to factors other than COPS. While COPS' contributions to the decline in crime rates did not account for the majority of the total drop in crime rates, the declines in crime rates attributable to COPS were on the same order of magnitude as COPS expenditures' share of local law enforcement expenditures for police. From 1994 through 2001, COPS expenditures amounted to slightly more than 1 percent of total local expenditures for police services nationwide. As we found and reported, COPS expenditures were responsible for about a 1.4 percent decline in the total crime rate.

Appendix V: COPS Expenditures Associated with Policing Practices That Crime Literature Indicates Are Effective in Preventing Crime

This appendix addresses our third reporting objective: determining the extent to which COPS grant expenditures during the 1990s were associated with police departments adopting policing activities or practices that the crime literature indicates could contribute to reductions in crime.
Specifically, it describes the results of our analyses of the relationships between COPS grant expenditures and changes in policing practices reported in two surveys of local law enforcement agencies, and it summarizes our assessment of studies that conducted systematic reviews of research on the effectiveness of various policing practices. Our analysis of the first of the two surveys of policing practices compares changes in reported policing practices between 1993 and 1997, that is, prior to the distribution of COPS grants and after many COPS grants had been distributed. In our analysis of the second survey, we compare changes from 1996 to 2000, or during the implementation of the COPS program. In addition, we provide a limited summary of our analysis of systematic reviews of evaluations of policing practices that could contribute to reductions in crime. (See app. VII for the details related to our methodology for analyzing policing practices.) Comparisons of Pre- and Within-COPS Grant Program Levels of Reported Policing Practices Show That COPS Grantee Agencies Reported Larger Increases than Non-COPS Agencies Prior to the implementation of COPS grants, many local law enforcement agencies had adopted a number of problem-solving, place-oriented, crime analysis, and community collaboration policing practices. Problem-solving practices refer to efforts by the police to focus on specific problems and tailor their strategies to the identified problems. Place-oriented practices include attempts to identify the locations where crime repeatedly occurs and to implement procedures to disrupt these recurrences of crime. Crime analysis includes the use of tools such as geographic information systems to identify crime patterns. Community collaboration includes attempts to improve or enhance citizen feedback about crime problems and the effectiveness of policing efforts to address them.
Our analysis of the Policing Strategies Survey data for 1993—the year before COPS grants were distributed—indicates that surveyed agencies that received a COPS grant between 1994 and 1997 reported higher mean levels of the above policing practices than agencies that did not receive a COPS grant between 1994 and 1997. For example, in 1993, the mean number of all practices reported by grantee agencies was about 13 out of a possible 38 practices, while the mean number of all practices reported by nongrantee agencies was about 11 practices. Moreover, the agencies that received a COPS grant between 1994 and 1997 also reported larger increases than nongrantee agencies in the mean levels of all categories of practices between 1993 and 1997, except those related to crime analysis. COPS grantee agencies reported in 1997 an increase of about 3.5 practices overall, as compared with a mean increase of less than 2 practices by the agencies that did not receive COPS grants during this period. The largest differences between COPS grantees and nongrantee agencies in the reported increase in practices occurred for the problem-solving and place-oriented practices (table 11). From a series of regression models of the effects of COPS grants on changes in policing practices, we found that both the receipt of a COPS grant and the amount of per capita COPS expenditures by agencies were associated with increases in the levels of reported policing practices between 1993 and 1997. Our regressions control for the underlying trend in the reported use of policing practices, for differences in agency characteristics that could be associated with increases in reported levels of policing practices—such as the size of the jurisdiction—and changes in the economic and social characteristics of the county in which the agency was located. We estimated separate regressions of the effect of the receipt of a COPS grant and of the cumulative per capita amount of COPS expenditures on the levels of reported policing practices.
Our regression models for estimating the effects of receipt of a COPS grant on the change in police practices between 1993 and 1997 show that agencies that received at least one COPS grant had significantly larger changes in the overall number of practices than did agencies that did not receive a COPS grant during this period. Specifically, according to our analysis of the survey data, the average number of practices increased by 2.9 over this period, and the receipt of a COPS grant accounted for 1.8 of this reported increase. Further, when we examined our results from separate regressions for the different categories of practices, we found that receipt of a COPS grant was associated with significant increases in reported levels of problem-solving and place-oriented practices, but was not related to changes in community collaboration or crime analysis practices. (See app. VII for details.) Our regression models further show that changes in practices were also associated with the cumulative amount of per capita spending on COPS grants. All other things being equal, a $1 increase in per capita spending was associated with an increase of 0.23 policing practices. As we found for the effects of the receipt of a grant on changes in police practices, these regressions also showed that the level of per capita spending on COPS grants was significantly associated with increases in problem-solving and place-oriented practices. However, per capita spending on COPS grants was also associated with increases in crime analysis practices. (See app. VII for details.) The Effects of COPS Grants on Agencies’ Reported Increases in Policing Practices Differed across Agencies Serving Populations of Different Sizes Receipt of a COPS grant was associated with increases in the overall adoption of policing practices among agencies serving populations of different sizes. 
Regardless of the size of populations served, agencies that received COPS grants adopted almost twice as many practices between 1993 and 1997 as agencies that did not receive COPS grants. However, in both years, agencies serving larger populations also reported higher mean levels of policing practices (table 12 and fig. 9). Our regressions of the effect of COPS expenditures on changes in reported levels of policing practices between 1993 and 1997 indicate, however, that the effects of receiving a COPS grant were larger in agencies serving fewer than 50,000 persons and in agencies serving more than 150,000 persons than in agencies serving populations of between 50,000 and 150,000 persons. Reported Levels of Policing Practices among COPS Grantees Did Not Increase Overall from 1996 to 2000 Our analysis of the National Evaluation of COPS Survey data on policing practices in 1996 and in 2000 also showed that agencies that received COPS grants reported larger increases in the mean level of policing practices than did non-COPS grantee agencies, but that the effects were not statistically significant. The findings suggest that there was no continued overall increase in reported policing practices in the period from 1996 to 2000. Regardless of when agencies received COPS grants and made COPS expenditures, we found that COPS grantee agencies reported larger increases in policing practices between 1996 and 2000 than did the agencies that did not have COPS grants in these years. For example, for the agencies that received their first COPS grant in 1996 or before, the average increase in reported use of policing practices from 1996 to 2000 was about 21 percent, and for the agencies that made COPS grant expenditures after 1996, the average increase in reported use of policing practices was about 17 percent.
By contrast, for the agencies that had not made any COPS grant expenditures by 2000, there was about a 0.2 percent decrease in the reported use of policing practices from 1996 to 2000, and for the agencies that did not make any COPS grant expenditures after 1996, there was about a 3 percent increase in the reported use of policing practices from 1996 to 2000 (table 13). Although we observed larger average increases in reported policing practices among agencies that spent COPS grant funds than among agencies that did not spend COPS grant funds, when we controlled for underlying trends in the reported adoption of policing practices and agency characteristics, we found that changes in per capita COPS expenditures made between the period preceding wave 1 of the survey (1994 through 1996) and the period following wave 1 of the survey (1997 through 2000) were not associated with changes in reported overall policing practices between 1996 and 2000 (app. VII). This suggests that there was no continued overall increase in reported policing practices in the period from 1996 to 2000, as a function of COPS grant expenditures. Crime Literature Provides Evidence for Effectiveness of Some Policing Practices Our analysis of six systematic reviews of evaluations of the effectiveness of various policing practices in preventing crime indicates that the current evidence ranges from moderate to strong that problem-oriented policing practices and place-oriented practices are either effective or promising as strategies for addressing crime problems. For example, problem-oriented approaches that focus on criminogenic substances such as guns and drugs appear to be effective in reducing both violent and property crimes. And hot spots approaches—place-oriented approaches that temporarily apply police resources to discrete locations where crime is concentrated at much higher rates than occur jurisdictionwide—have also been found to be effective in reducing crime. 
However, the magnitudes of the effects of these interventions are difficult to estimate, especially on citywide crime rates, as the interventions that were reviewed as effective generally were concentrated in comparatively small places. Further, the enduring nature of these interventions is not fully understood. It is not known, for example, how long the effects of a problem- or place-oriented intervention persist. In addition, some of the reviews point out that research designs undertaken to date make it difficult to disentangle the effects of problem-oriented policing from hot spots policing. There is suggestive, but limited, evidence that the combination of these practices may be more effective in preventing or reducing crime than any one strategy alone. In contrast to the findings on problem-oriented and place-oriented policing practices, there is little evidence in the literature for the effectiveness of community collaboration practices—such as increasing foot patrol, establishing community partnerships, and encouraging citizen involvement—in reducing or preventing crime.

Appendix VI: Methods Used to Estimate the Effects of COPS Funds on Officers and Crime

In this appendix, we describe the methods we used to address our reporting objective regarding the impacts of the COPS funds on officers and crime: determining (1) the extent to which COPS grant expenditures contributed to increases in the number of sworn officers in police agencies in the 1990s and (2) the extent to which COPS expenditures contributed to declines in crime in the 1990s through their effects, if any, on officers. Prior Literature on the Relationship between Officers and Crime Addresses Issues Relating to Estimating the Effects of COPS Funds on Crime In examining the effect of COPS funds on crime, we estimate the impacts of the funds on crime through their impacts on officers. The effect of police on crime has a theoretical basis in the economics literature.
Economic models posit that criminals weigh the gains from criminal activity against its costs—the possibility of arrest and incarceration. Anything that increases the probability of arrest, such as additional police, will thus deter criminal activity; we might call this the deterrence effect. A second effect stems from arrests directly. If criminals are arrested and incarcerated, they will not be able to commit street crimes; we might call this the incapacitation effect. The relationship between police and crime has been studied empirically, with mixed results. Several reviews of research that investigated this relationship have reported that a minority of papers find a significant negative relationship between increases in the number of officers and crime. However, these reviews also point out that many of the studies have methodological flaws. In a report to Congress on what works in crime prevention, Lawrence Sherman and others drew upon a limited body of research that addressed the methodological concerns and concluded that increases in the number of police officers work to prevent crime. One of the major methodological issues associated with estimating the relationship between police officers and crime is the issue of reverse causality. This issue revolves around disentangling the relationship between the number of police officers and crime, as municipalities having higher crime rates generally also have more officers. For example, Detroit has twice as many police per capita as Omaha and four times the violent crime rate; by simply comparing one municipality's police force and crime rate with another's, one would incorrectly infer that Detroit's additional officers were the cause of its higher crime rate.
Repeated observations on crime and police in a locality lead to a more robust research design by controlling for the time-invariant differences in rates of crime and police between areas. This is done by introducing fixed effects into regression models. Using this approach, the question that the analysis attempts to address becomes: Do we see the crime rate fall as the number of police rises? By controlling for the “baseline” crime rates in different areas, some researchers have estimated a negative relationship between police and crime. However, if the rise in the number of police in a locality is a response to increasing crime rates, including fixed effects does not resolve the issue of reverse causality raised by the Detroit example. A next step is to introduce an instrument—for example, a variable that affects the size of the police force but that, given this size, does not affect crime. In one study, the researcher made use of the fact that the size of a police force increases before an election. If the only way that crime is affected by the election is through the number of police, then this approach can be used to estimate the relationship between crime and police. In this study, the researcher found that crime fell in several index categories before an election. A series of more recent papers that used instruments found a negative relationship between police and crime. Two studies used an increase in police presence because of a terrorist alert and showed declines in nonterrorist-related crimes within a single city. In a study of Buenos Aires, the researchers found that police stationed in response to a terrorist threat on Jewish centers caused a decline in automobile theft. In another paper, the researchers showed that crime fell in Washington, D.C., on days when the Department of Homeland Security increased the terror alert level. 
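The fixed-effects logic described above can be sketched with synthetic data: demeaning each series within locality and within year absorbs the time-invariant "baseline" differences between areas and the nationwide year shocks, so the police coefficient is identified from within-locality changes over time. This is a toy illustration under invented numbers, not the report's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_years = 50, 8
unit = np.repeat(np.arange(n_units), n_years)   # locality index
year = np.tile(np.arange(n_years), n_units)     # year index

baseline = rng.normal(0, 5, n_units)[unit]      # time-invariant area differences
shock = rng.normal(0, 1, n_years)[year]         # nationwide year shocks
police = rng.normal(20, 3, unit.size)           # officers per 10,000 (synthetic)
crime = 500 + baseline + shock - 2.0 * police + rng.normal(0, 2, unit.size)

def demean(x, groups):
    """Within transformation: subtract each group's mean."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        out[m] -= out[m].mean()
    return out

# Demeaning by locality, then by year, absorbs both sets of fixed effects
# (exact for a balanced panel such as this one).
y = demean(demean(crime, unit), year)
x = demean(demean(police, unit), year)
beta = (x @ y) / (x @ x)    # within estimator of the police coefficient
print(round(beta, 1))       # recovers a value near the true -2.0
```

Note that if police levels also respond to crime (the reverse-causality problem above), this within estimator is still biased, which is what motivates the instrumental-variable designs described next.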
At the national level, researchers at the University of Maryland used the number of police officers granted through the COPS program as an instrument for the actual number of police and estimated negative relationships between increases in police officers and crime. Our Approach to Estimating the Effects of COPS Expenditures on Officers and Crime We adopted a two-stage approach to estimating the effects of COPS expenditures on crime. Much as the University of Maryland researchers did, we used COPS funds as a source of variation to explain officers. However, while the University of Maryland researchers used officers granted by COPS funds, we used COPS expenditure amounts—the actual COPS dollars spent by agencies in given years—as the source of variation. We began with an analysis of the “first stage” and tested whether COPS funds had an effect on the number of officers. To the extent that hiring funds affected the number of police but did not affect crime in any other way, these funds would be a valid instrument for estimating the effect of officers on crime. We then estimated the “reduced form,” or the relationship between COPS expenditures and crime. Using parameters estimated from these regressions, we are able to calculate the relationship between police and crime. This approach has limitations, however. For example, we learn very little about how agencies operate. If agencies were to use the additional officers to employ different police tactics, and were able to reduce crime, we would be unable to say whether it was the increase in officer numbers or tactics that was the true cause of the decrease. Thus, we would be unable to contribute to the question of whether increases in officer strength are either necessary or sufficient to reduce crime, without a change in police tactics. A second concern is that agencies that were more likely to take initiative in applying for and receiving COPS grants might be those that were also more effective in preventing crime. 
These agencies might also be those that achieved larger or more rapid declines in crime. If this were the case, we might incorrectly associate declines in crime with COPS grant expenditures when the declines were in fact due to these other agency characteristics. To assess this potential, we estimated a regression that predicted whether an agency spent COPS funds in a given year from 1994 through 2001 based on demographic characteristics, economic conditions, and lagged property and violent crime rates. From the regressions, we predicted the probability of spending COPS grant funds—or the propensity of agencies to spend COPS funds. Whether or not an agency actually spent COPS funds, it received a propensity score, based upon the values of its characteristics in the model that predicted the probability of spending COPS funds. Agencies that actually spent COPS funds can then be compared to similar agencies—those with similar propensity scores—that did not spend COPS funds. We grouped agencies into five categories based on their propensity scores. Within each of these five categories, we compared the patterns of violent crime rates and property crime rates between the agencies that spent COPS funds and those that did not spend them. Our analysis showed that within these groupings of agencies having similar propensity scores, the agencies that actually spent COPS funds generally had larger declines in crime rates than did those that did not spend COPS funds. Another question is whether a drop in a specific crime type, such as automobile theft, in a certain locality is a net gain for society as a whole. For example, the rationality of criminals may lead them to respond to an increase in the number of police by moving to an area with fewer police or switching to a different type of crime. In addition, there is the possibility that an increase in the number of police increases the reporting rate of crimes, and not the crimes themselves.
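The propensity-score grouping described above can be sketched as follows. This is a hypothetical illustration on synthetic data, using a hand-rolled logistic regression rather than whatever estimator was actually used; all variable names and magnitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Synthetic agency characteristics (e.g., demographics, economic conditions,
# lagged crime rates), standardized.
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, -0.5, 0.3])
p_true = 1 / (1 + np.exp(-(X @ true_w)))
spent = (rng.random(n) < p_true).astype(float)   # 1 = agency spent COPS funds

# Fit a logistic regression by gradient ascent on the log-likelihood.
Xb = np.hstack([np.ones((n, 1)), X])
w = np.zeros(Xb.shape[1])
for _ in range(3000):
    p = 1 / (1 + np.exp(-(Xb @ w)))
    w += 0.5 * Xb.T @ (spent - p) / n

propensity = 1 / (1 + np.exp(-(Xb @ w)))         # predicted probability of spending

# Assign each agency to one of five propensity-score groups (quintiles).
group = np.digitize(propensity, np.quantile(propensity, [0.2, 0.4, 0.6, 0.8]))

# Within each group, spenders can be compared with similar non-spenders,
# e.g., by contrasting their crime-rate trends (not simulated here).
for g in range(5):
    m = group == g
    print(g, int(m.sum()), int(spent[m].sum()))
```

Every agency receives a score whether or not it spent funds, so each quintile contains both spenders and otherwise-similar non-spenders to compare.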
The possibility of increased reporting, however, would lead us to underestimate the effects of COPS funds on crime, as discussed in appendix I. Model of the Effect of COPS Expenditures on the Number of Police Officers Our main specification estimated the effect of COPS funds on officers, using the following control variables:

(1) POLICEit = β1HIREit + β2MOREit + β3INNOVit + β4MISCit + β5BYRNEDISit + β6LLEBGit + β7NONCOPSit + γXit + αi + αt + αst + (quartile of prior growth rates) * (population stratification) * year

POLICEit is the dependent variable, the sworn officers per 10,000 in population in agency i in year t; HIREit is the amount of money paid in Hiring grants; MOREit are COPS MORE grants; INNOVit are COPS grants for innovative policing; and MISCit refers to the remaining types of COPS grants; all are expressed as expenditures in per capita amounts. BYRNEDISit are Byrne discretionary grant expenditures, LLEBGit are LLEBG grant amounts, and NONCOPSit are all other federal non-COPS law enforcement grants; all are expressed in per capita amounts. We introduce these variables to control for other federal funds. Xit contains a number of demographic and economic control variables, including local employment rates, per capita income, and population composition variables that measured the percentage of the population 15 to 24 years old and the percentage of the population that was nonwhite. The economic and demographic controls were measured at the level of the county within which a particular agency was located. The parameters for these variables are represented by γ. We included state-by-year fixed effects—represented by αst—to correct for changes in crime policy at the state level, such as changes in the number of persons incarcerated and changes in sentencing policy. We included agency fixed effects—represented by αi—to capture time-invariant differences across agencies, and time fixed effects—represented by αt—to capture changes affecting the entire nation.
Because of how the money was distributed, there may be some concern that our estimate of the effect of the COPS money on officers is biased. For example, it might be that agencies that received a disproportionate share of the money relative to their populations had the benefit of preexisting positive growth in numbers of officers, in addition to possible declines in crime. If these preexisting trends continued, we might incorrectly associate increases in officers or decreases in crime with the amount of COPS money received. To address this concern, we separated the agencies into four groups, based on the growth rate in both officers and crime during 1990–1993, when the COPS program was introduced. We constructed each combination of these groups, producing 16 cells. These cells were then "interacted" with each year and four population categories, for a total of 768 effects. In essence, each agency was compared with another agency that had a similar "trajectory" of crime and officers in the pre-COPS period. These growth trends are represented by the (quartile of prior growth rates) expression in equation (1). Finally, to obtain estimates of the effects of COPS expenditures on officers relative to the average person in the United States, we estimated weighted regressions where the weights were the population served by an agency. Model of the Effect of COPS Expenditures on Crime Our methodology for estimating the effect of COPS funds on crime parallels the one we used to estimate the effect of COPS funds on officers. Our main specification used the following controls in the following equation:

(2) CRIMEit = µ1HIREit + µ2MOREit + µ3INNOVit + µ4MISCit + µ5BYRNEDISit + µ6LLEBGit + µ7NONCOPSit + γXit + δi + δt + δst + (quartile of prior growth rates) * (population stratification) * year

The independent variables are identical to those defined for equation (1). The dependent variable (CRIMEit) is the UCR total—or index—crime rate.
We also estimate separate equations for the crime rates of the components of the crime index: murder and non-negligent manslaughter, forcible rape, robbery, aggravated assault, burglary, larceny theft, and motor vehicle theft. Again, the parameters of interest are the µ coefficients; as in equation (1), the specification includes agency, year, and state-by-year fixed effects, and we also include the pre-1993 growth rate variables. The Implied Relationship between Police Officers and Crime Unlike the other COPS grant types, COPS hiring grants were to be used specifically for hiring officers. Consequently, variation in the number of officers coming from COPS hiring grants should be unrelated to other changes in police expenditures. In this sense, it may be a valid instrument for officers. Using the coefficients on COPS hiring grant expenditures in equations (1) and (2), we calculated an estimate of the change in crime with respect to a change in officers as the ratio µ1/β1; multiplying this ratio by the ratio of the mean officer rate to the mean crime rate yields the elasticity of crime with respect to officers. In an additional specification, a quadratic term for COPS hiring grant expenditures provides a test for nonlinear effects of COPS hiring grants on crime. This specification examines whether the effects of officers on crime diminish as the number of officers rises above certain levels. Data Used in Our Analysis We use data on 4,247 police agencies that reported complete crime data (12 months of crime reports) in any year and that served populations of 10,000 or more persons. These agencies represented about 23 percent of the agencies that appeared in the UCR data that we received from the FBI. However, they also covered more than 86 percent of the crimes, and they represented about 77 percent of the population in the UCR data that we received. Because of concerns about data quality, we restricted our sample to agencies that met these criteria of complete crime reporting and serving populations larger than 10,000 persons. Across years, the number of agencies that met these conditions varies, so our panel of data is unbalanced.
We used grant expenditure data from the OJP financial data, which we linked to the crime and officer records of agencies. We included county level demographic and economic data from the Census Bureau, the National Center for Health Statistics, and the Bureau of Economic Analysis. (See app. I for more information regarding the construction of the dataset.) Table 15 provides the means and standard deviations of the variables included in the regression models. As shown in the table, the per capita expenditures derived from COPS hiring grants exceeded the per capita amounts from other federal grants. Explanation of the Results of Our Analysis In this section, we discuss our regression analyses and describe how we arrived at the results that are discussed in this report. The Effect of COPS Expenditures on the Number of Police Officers To arrive at the effects of COPS expenditures on officers, we estimated specifications for equation (1), as shown in table 16. With only the fixed effects, the models explain more than 90 percent of the variation in officer strength. In specification 1, we added only the COPS hiring grant expenditures per capita to the model that contained only the fixed effects. The effects of hiring grants are significant at the 1 percent level, and the coefficient indicates that an additional dollar of hiring grant expenditures per capita changes the officer rate (measured per 10,000 persons) by 0.317. In specifications 2 through 5, we introduce various combinations of the growth rate cells, demographic and economic conditions, and the other grant types. Across specifications 2 through 5, the estimated coefficient on the hiring grant variable remains fairly consistent, ranging from 0.227 in specification 5 to 0.261 in specification 3, where the interpretation of the coefficient is the effect of a $1 increase in per capita COPS hiring grant on the per 10,000 person rate of officers. 
Specification 5 presents our preferred specification, in that it includes all of the relevant controls. Using the coefficient on COPS hiring grant expenditures from specification 5, we calculate that $25,000 in COPS hiring grant expenditures in a given year produced roughly 0.6 additional officers in that year. Finally, in addition to the COPS hiring grant expenditures, COPS MORE and LLEBG grant expenditures also consistently predict officer strength, as indicated by the MORE and LLEBG parameter estimates in specifications 2 through 5. Effect of COPS Expenditures on Crime Our reduced-form estimates of the effects of COPS expenditures on crime, the results of our estimating equation (2), appear in table 17. The first column (labeled "Officers") repeats the results from specification 5 of table 16. The other columns of table 17 show the parameter estimates for the effects of hiring grants and outside funds on the crime rate for index crimes and separately for each type of index crime (except arson). With the exception of rape, COPS hiring grant expenditures per capita have a negative effect on index crime rates and the crime rate for each type of index crime. Further, while the direction of the effect of the hiring grant variable on the larceny rate is negative, the effect is not significant at the 5 percent level. LLEBG expenditures have a negative and significant effect on all crime types. The other grant fund types have a negative effect on some crime types. We estimated the effect of COPS hiring grant expenditures on index crimes to be –29.19. In other words, $1 in COPS hiring grant expenditures per capita translates into a reduction of almost 30 index crimes per 100,000 people.
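The $25,000 officer calculation above follows directly from the coefficient of 0.227 (officers per 10,000 persons for each $1 of per capita hiring grant expenditures). A short sketch of the arithmetic; the population figures are arbitrary assumptions, and in fact the population cancels out:

```python
def officers_added(grant_dollars: float, population: float,
                   coef: float = 0.227) -> float:
    """Additional officers implied by a hiring grant, using the preferred
    specification's coefficient (officers per 10,000 persons per $1 per capita)."""
    per_capita = grant_dollars / population           # dollars per person
    rate_change = coef * per_capita                   # officers per 10,000 persons
    return rate_change * population / 10_000          # total additional officers

# $25,000 of hiring grant expenditures yields roughly 0.6 officers,
# regardless of the (assumed) population served.
print(round(officers_added(25_000, 100_000), 2))   # 0.57
print(round(officers_added(25_000, 30_000), 2))    # 0.57
```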
The Effects of Different Population Sizes across Agencies Given the variation in per capita COPS expenditures that occurred across agencies serving populations of different sizes, we explored whether COPS hiring grants had different effects on crime rates based on the size of the population served by agencies. We stratified agencies into four population size groups: those serving populations of between 10,000 and 25,000 persons; between 25,000 and 50,000 persons; between 50,000 and 150,000 persons; and more than 150,000 persons. We found that the effect of the hiring grant was consistent across all population categories of less than 150,000 persons, but insignificant in the category of more than 150,000 persons. The negative effect of COPS hiring grants on index crime rates ran across all population size categories. However, the effects of hiring grants were largest in the 50,000 to 150,000 population category, and insignificant in the 25,000 to 50,000 population category (table 18). Calculations of the Elasticity of Crime with Respect to Officers As COPS hiring grants were to be used only to hire officers, we explored their use as an instrument to predict the effect of officers on crime. Assuming that COPS grants were used in that way, our preferred specification from our regressions of crime on COPS hiring grants and other outside funds produces the estimates of the elasticity of crime with respect to officers that are shown in table 19. To assess the degree to which the elasticities that we calculated were in line with those appearing in the economics of crime literature, we compared our elasticities with those estimated by Evans and Owens (2004), Levitt (1997), Levitt (2002), and Klick and Tabarrok (2005). Our estimates are in line with those in the literature (table 19).
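The elasticity calculation follows from scaling the reduced-form crime effect by the first-stage officer effect (indirect least squares). The sample means below are placeholders, not the values from table 15, so the resulting number is purely illustrative:

```python
# Hypothetical sample means (the actual means appear in table 15, which
# is not reproduced here -- these values are assumptions):
mean_crime_rate = 5_000    # index crimes per 100,000 persons (assumed)
mean_officer_rate = 20     # officers per 10,000 persons (assumed)

# Coefficients reported in the text:
crime_effect = -29.19      # index crimes per 100,000, per $1 per capita (reduced form)
officer_effect = 0.227     # officers per 10,000, per $1 per capita (first stage)

# Indirect least squares: elasticity = (%Δ crime) / (%Δ officers).
elasticity = (crime_effect / mean_crime_rate) / (officer_effect / mean_officer_rate)
print(round(elasticity, 2))  # ≈ -0.51 under these assumed means
```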
In addition, Evans and Owens report aggregate point elasticities for violent and property crimes of –0.99 and –0.26, respectively, and Levitt reports aggregate point elasticities for violent and property crimes of –0.44 and –0.50, respectively. Our aggregate elasticities for violent and property crimes fall between these two sets of estimated point elasticities. Equations (1) and (2) depend on certain assumptions about the way that COPS hiring grant expenditures and other outside funds affect officers and crime. For example, the specifications reported previously only allow the federal funds to affect crime contemporaneously. However, it may take time for the expenditures to have an effect on either officers or crime, as new officers may need time to become fully acclimated to a department or to become proficient in their duties. To explore the robustness of our findings under varying assumptions about how COPS hiring grant expenditures could affect officers and crime, we recalculated our elasticities after estimating our regressions under the alternative specifications outlined in table 20. We report the elasticities that we calculated from these various regression models in the last three rows of the table. The elasticities for index crimes range from –0.41 to –0.95; those for violent crimes range from –0.76 to –1.8; and those for property crimes range from –0.35 to –0.8. The elasticities that we report in our results all fall at the lower end of the range of elasticities that we estimated. Estimating the Net Number of Officers Paid for by COPS Expenditures We used our regression results to derive estimates of the net number of officers paid for by COPS grant expenditures separately for each year. By net number of officers, we refer to the increase in the number of officers on the street attributable to COPS, net of attrition.
For example, if at the beginning of a year there were 100 officers on the street, and during the year COPS grants were responsible for hiring 10 officers while 5 officers left the force, the net number of officers due to COPS would be 5. To obtain the total number of officer-years due to COPS expenditures, we summed the number of officers across years. Table 21 presents the estimated number of officers that COPS expenditures paid for in each year. In column 1 we present the actual number of per capita officers used in our regressions. Not shown in the table, but used in the calculation of the number of officers due to COPS expenditures, are the per capita amounts of COPS expenditures, including COPS hiring, MORE, innovative, and miscellaneous grant expenditures. Column 2 presents our estimate of what the per capita number of officers would have been absent the COPS expenditures. Columns 3 and 4 show the number of officers per capita and the percentage of officers per capita explained by COPS expenditures. Column 5 presents our estimates of the number of officers in each year in the sample of agencies that we analyzed that were explained by COPS expenditures. To arrive at the number of officers in the United States due to COPS expenditures, we weighted the numbers in column 5 up to the U.S. population total (in column 6). On the basis of this analysis, in year 2000, for example, when they peaked, COPS expenditures per capita were responsible for about 2.9 percent of officers in the United States, or more than 17,000 officers. Across all years, we estimate that COPS was responsible for an increase of about 88,000 officer-years during the years from 1994 through 2001.
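The net-officer bookkeeping in this example, and the summation to officer-years, can be expressed as follows; the yearly totals in the list are invented and are chosen only so that they sum to the 88,000 officer-years reported above:

```python
def net_officers_due_to_cops(hired_with_cops, attrition):
    """Net change in street strength attributable to COPS in a year:
    officers hired with COPS funds minus officers lost to attrition."""
    return hired_with_cops - attrition

# The example from the text: 10 COPS-funded hires and 5 departures in a
# year yield a net gain of 5 officers due to COPS.
net = net_officers_due_to_cops(10, 5)  # → 5

# Officer-years across the program are the sum of the yearly net figures.
# Hypothetical yearly totals for 1994-2001 (invented for illustration,
# peaking in 2000 and summing to the reported 88,000 officer-years):
yearly_net = [2_000, 5_000, 9_000, 12_000, 15_000, 16_000, 17_000, 12_000]
officer_years = sum(yearly_net)  # → 88,000
```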
Estimating the Number of Crimes Reduced by COPS Expenditures On the basis of our analysis of the number of officers due to COPS expenditures and our estimated elasticities of crime with respect to officers, we estimated the number of crimes associated with COPS expenditures through the increase in officers attributable to these expenditures. In table 22, we show our calculations of the decline in crime attributable to COPS for each year, compared with the 1993 levels of crime, the pre-COPS baseline year. Columns 1 through 3 of table 22 give the average crime rates per 100,000 persons in the agencies in our sample. Columns 4 through 6 give the percentage change from 1993 in crime rates for each category of crime. Columns 7 and 8 report data on officers. Column 7 reports the growth in the officer rate from 1993 due to the change in COPS expenditures. Column 8 presents the growth (from column 7) as a percentage change from 1993. Columns 9 through 11 provide estimates of the percentage change in crime rates from 1993 using our estimated elasticities. Finally, columns 12 through 14 provide the estimated amount of change in crime rates from 1993 that arises from COPS expenditures. Appendix VII: Methods Used to Assess Policing Practices Our objective in assessing policing practices was to determine the extent to which COPS grant expenditures were associated with police departments’ adoption of policing activities or practices that may have contributed to the reduction in crime during the 1990s.
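The translation from officer growth to crime decline applies the elasticity directly. The inputs below (baseline crime rate, officer growth, and elasticity) are placeholder values, not figures from table 22:

```python
def crime_change_from_officers(baseline_crime_rate, pct_change_officers, elasticity):
    """Percentage and absolute change in the crime rate implied by an
    elasticity of crime with respect to officers."""
    pct_change_crime = elasticity * pct_change_officers
    return pct_change_crime, baseline_crime_rate * pct_change_crime

# Hypothetical inputs: a 3 percent growth in the officer rate over 1993 and
# an elasticity of -0.51, applied to a baseline of 5,000 index crimes
# per 100,000 persons (all three values are assumptions).
pct, absolute = crime_change_from_officers(5_000, 0.03, -0.51)
# pct → -0.0153 (a 1.53 percent decline);
# absolute → -76.5 index crimes per 100,000 persons
```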
To determine whether COPS grants were associated with changes in policing practices, we analyzed data from two national surveys of local law enforcement agencies on the policing practices that they reportedly implemented in various years from 1993 to 2000. In addition, we analyzed systematic reviews of research on the effectiveness of policing practices in preventing crime. Methods to Address Changes in Policing Practices To address whether COPS grants were associated with changes in policing practices that may be associated with preventing crime, we analyzed data from the two administrations of the Policing Strategies Survey (in 1993 and 1997) and two of the four administrations of the National Evaluation of COPS Program Survey (in 1996 and 2000). Because the purposes of the surveys differed, each used different samples of agencies (with some agencies appearing in both surveys). The Policing Strategies Survey drew a sample representative of all municipal police, county police, and county sheriff agencies in the United States with patrol functions and with more than five sworn officers in 1992, and the National Evaluation of COPS Program Survey drew a sample that was representative of all law enforcement agencies believed to be in existence in the United States that had received, or were eligible to receive a COPS grant. Each survey provided respondents in police agencies with lists of items that identified specific types of policing practices, and respondents were asked whether they had implemented each of the practices on the list. Survey responses were obtained from knowledgeable officials within each agency, such as the police chief or the chief’s designee. The number of items related to policing practices differed between the two surveys. 
We classified items in the surveys into four categories of policing practices corresponding to general approaches to policing identified in the criminal justice literature: problem-solving practices, place-oriented practices, community collaboration activities, and crime analysis activities. Problem-solving practices call for police to focus on specific problems and tailor their strategies to the identified problems. Place-oriented practices include attempts to identify the locations where crime occurs repeatedly and to implement procedures to disrupt these recurrences of crime. Community collaboration practices include improving citizen feedback about crime problems and the effectiveness of policing efforts to address these problems. Crime analysis includes the use of tools such as geographic information systems to identify crime patterns. These tools may help an agency support other practices for preventing crime, such as problem-solving and place-oriented practices. Three social science analysts with research experience in criminal justice independently reviewed the list of policing practice items in each survey and placed each item in one of the four categories or determined that the item did not fit in any of the four categories. Following initial classification, the analysts met to discuss and address any inconsistencies in their classification of items. After classifying practices, we created an index to represent the total number of problem-solving, place-oriented, community collaboration, and crime analysis practices, and we gave each agency that responded to both waves of a survey a score equal to the number of these practices that the agency reportedly implemented in the survey years. We also identified, for each agency, the number of practices in each of the four categories. We then analyzed the levels and changes in reported practices within each survey.
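The index construction described above amounts to a simple count over classified items. The item wordings and classifications below are invented for illustration and do not reproduce the survey instruments:

```python
# Hypothetical survey items mapped to the four practice categories
# (item names are invented for illustration).
classification = {
    "targets repeat crime locations": "place",
    "uses GIS crime mapping": "crime_analysis",
    "holds community meetings": "community",
    "tailors responses to identified problems": "problem_solving",
}

def practice_scores(responses, classification):
    """Count reportedly implemented practices overall and by category."""
    by_category = {c: 0 for c in set(classification.values())}
    for item, implemented in responses.items():
        if implemented and item in classification:
            by_category[classification[item]] += 1
    return sum(by_category.values()), by_category

# One hypothetical agency's responses to a survey wave.
responses = {
    "targets repeat crime locations": True,
    "uses GIS crime mapping": True,
    "holds community meetings": False,
    "tailors responses to identified problems": True,
}
total, per_category = practice_scores(responses, classification)
# total → 3; place, crime_analysis, and problem_solving each score 1
```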
Our analysis focused on the differences in levels of practices reported by agencies that received COPS grants and those that did not. To assess the influence of COPS grant expenditures on reported practices, we analyzed changes in reported practices as a function of the per capita amounts of COPS dollars spent by agencies. For agencies that did not receive COPS grants, we set their per capita COPS expenditure amounts to zero. A limitation of our analysis is that the surveys did not ask explicitly about the extent to which each listed practice was implemented by law enforcement agencies. Thus, agencies that reported implementing a specific practice may vary considerably in their use of it, from sporadic use of the practice among a subset of officers to more frequent use throughout the agency. Characteristics and Analysis of the Policing Strategies Survey The Policing Strategies Survey was administered in 1993 and again in 1997. The Police Foundation administered the 1993 wave of the survey, and ORC Macro International, Inc. and the Police Executive Research Forum administered the 1997 wave. The sampling frame for both the 1993 and 1997 waves consisted of 11,824 local police and sheriffs’ departments listed in the Law Enforcement Sector portion of the 1992 Justice Agency list developed by the U.S. Bureau of the Census. In constructing the sampling frame, state police departments, special police agencies, agencies that did not perform patrol functions, and agencies with fewer than five sworn personnel were excluded from the larger list of all law enforcement agencies. A total of 2,337 police and sheriffs’ departments were selected for the main sample for the 1993 survey, and surveys were mailed to 2,314 of them after 23 agencies were found to be out of scope. Follow-up mailings and facsimile reminders were sent to nonrespondents.
The overall response rate for the 1993 survey was 71.3 percent. All of the agencies in the first sample were then selected for participation in the 1997 survey. The survey employed a multiphased data collection approach, using postal mail for the first phase, followed by facsimile reminders, a second mailing, and computer-assisted telephone interviewing for nonrespondents. The response rate for the 1997 survey was 74.7 percent. A total of 1,269 agencies were present in both the 1993 and 1997 surveys. The sample was a stratified random sample with probability of inclusion varying by the number of sworn personnel (5-9; 10-49; 50-99; and 100 or more sworn personnel). We identified agencies in the Policing Strategies Survey that responded to both waves of the survey and had complete data on each of the policing practices items, and of these, we were able to link the data from 1,188 agencies to our larger database on crime, officers, money, and economic conditions. For comparability with the analyses of the effects of funding on officers and crime, we limited our analysis to those agencies serving jurisdictions with populations of 10,000 or more persons. This resulted in usable data on 1,003 agencies. We used the Policing Strategies Survey data to compare reported changes in the types and levels of policing practices that occurred during the COPS program with pre-COPS levels of practices. The analyses reported in this appendix are weighted to adjust for the sample design effects. The findings are generalizable to all municipal police agencies, county police agencies, and county sheriff agencies in the United States with patrol functions and serving jurisdictions with populations of 10,000 or more persons. We used 38 items on policing practices from the Policing Strategies Survey. We combined 12 practices pertaining to increasing officer contact with citizens and improving citizen feedback into a community collaboration index. 
We used 6 items on the crime analysis units within police departments to create our index of crime analysis. We combined 8 practices pertaining to increasing enforcement activity or place management in buildings, neighborhoods, or other specific places into an index of place-oriented practices. And we compiled the data on 12 items that reflected organizational efforts to reduce or interrupt recurring mechanisms that may encourage crime into a problem-solving practices index. The classification of items from the Policing Strategies Survey into our four indexes of types of policing practices is shown in table 23. The Policing Strategies Survey provided us with an opportunity to assess changes in reported policing practices using a pre-COPS grant and within-COPS grant program framework. The 1993 administration of this survey occurred several months prior to the distribution of the first COPS grants, while the 1997 administration occurred after COPS grants had been made to about 75 percent of the agencies in the sample. To implement this pre-COPS versus within-COPS examination of the effects of COPS grants on policing practices, we first compared the levels of practices in 1993 and 1997 between the group of agencies that had received a COPS grant by 1997 and the group that had not. Second, we estimated separate regressions of the effect of the receipt of a COPS grant and of the cumulative per capita amount of COPS expenditures on the levels of reported policing practices. To assess the extent to which COPS grant expenditures were associated with changes in reported policing practices, we estimated regressions of the changes in reported policing practices that occurred within agencies as a function of the cumulative per capita amount of COPS grant expenditures that they made during the years from 1994 through 1997.
We used two-factor fixed-effects regression techniques, which allowed us to control for unobserved characteristics of agencies and underlying trends in the adoption of policing practices. We also controlled for economic conditions and population changes in the localities in which the agencies were located. In addition, we used weighted regressions to address nonresponse patterns and the probability with which the original sampling units were drawn. Our regression equations show that both the receipt of a COPS grant and the amount of per capita COPS expenditures by agencies were associated with increases in the levels of reported policing practices between 1993 and 1997. Agencies that received at least one COPS grant had significantly larger changes in the overall number of practices than did agencies that did not receive a COPS grant during this period. Specifically, of the roughly 2.9 average increase in the number of practices reported by agencies over this period, the receipt of a COPS grant accounted for 1.8 practices. Further, when we examined our results from separate regressions for the different categories of practices, we found that receipt of a COPS grant was associated with significant increases in reported levels of problem-solving and place-oriented practices, but was not related to changes in community collaboration or crime-analysis practices (table 24). Our regression models further show that changes in practices were also associated with the cumulative amount of per capita spending on COPS grants. All other things being equal, a $1 increase in per capita spending was associated with an increase of 0.23 policing practices. As we found for the effects of the receipt of a grant on changes in police practices, these regressions also showed that the level of per capita spending on COPS grants was significantly associated with increases in problem-solving and place-oriented practices.
However, per capita spending on COPS grants was also significantly associated with increases in crime analysis practices. Characteristics and Analysis of the National Evaluation of COPS Survey The National Evaluation of COPS Survey was conducted by the National Opinion Research Center for the Urban Institute in its national evaluation of the implementation of the COPS program. The sampling frame for the survey consisted of 20,894 law enforcement agencies believed to be in existence between June 1993 and June 1997 that had either received a COPS grant during 1995 or appeared to be potentially eligible for funding but remained unfunded through 1995. The list of COPS grantees was obtained from applicant records from the grants management database from the COPS Office, and included those agencies that had been funded under the following programs: FAST, AHEAD, Universal Hiring Program, and MORE. The list of potentially eligible grantees was derived from the FBI’s UCR and National Crime Information Center data files. The sampling frame was stratified by COPS grantee status (Not Funded; FAST or AHEAD; Universal Hiring Program (UHP); MORE) and by population (jurisdictions with populations of fewer than 50,000 persons and those with populations of 50,000 or more persons), and agencies in each stratum were sampled at a different rate in order to select a representative sample of law enforcement agencies. A total of 2,098 agencies were selected to be in the sample. Telephone interviews with agency representatives were conducted in 1996 (wave 1) and 2000 (wave 4). A total of 1,471 agencies responded to wave 1 of the survey in 1996, for a 77 percent response rate. In 2000, all wave 1 respondents were recontacted, and interviews were completed with 1,270, or 86 percent, of the target agencies. We were able to link the data from 1,067 of the agencies that responded to both of these waves of the survey to our larger database on crime, officers, money, and economic conditions.
For comparability with the analyses of the effects of funding on officers and crime, we excluded from our analysis state police agencies and other “special” police agencies, as well as law enforcement agencies serving jurisdictions with populations of fewer than 10,000 persons. This resulted in usable data on 724 agencies. We used the National Evaluation of COPS Survey to compare levels of practices in 1996 and 2000 between groups of agencies that received COPS grants and those agencies that were not funded by COPS over this period, and to assess changes in reported practices in relation to per capita COPS expenditures. The analyses reported in this appendix are weighted to adjust for nonresponse and the multiple counting of agencies that received more than one COPS grant. The findings are generalizable to all law enforcement agencies in the United States serving jurisdictions with populations of 10,000 or more persons. We used 19 items on policing practices from the National Evaluation of COPS Survey, and we classified these items into the same 4 categories of practices as we did with the Policing Strategies Survey data (table 25). However, because of the shortage of items covering place-oriented practices, for analysis purposes we combined these 3 items with the 7 problem-solving items into one index of problem-solving and place-oriented practices. Unlike the Policing Strategies Survey, which provided a pre-COPS and a within-COPS measure of policing practices, both observations (in 1996 and 2000) on policing practices in the National Evaluation of COPS Survey occurred while the COPS program was making grants. This complicates our analysis, as in 1996 there were agencies that had already received and spent COPS funds, and to the extent that COPS expenditures were associated with the adoption of policing practices, the level of such practices that they reported in 1996 would reflect their experiences with COPS grants.
Some of these agencies continued to spend COPS funds throughout the years from 1996 through 2000. However, some of the agencies that spent COPS funds in 1996 ceased to spend them during the intervening years before 2000. A third group of agencies consists of those that had not received their first COPS grant by 1996 but had received a grant before 2000. This third group is analogous to our group of agencies that received COPS grants in the Policing Strategies Survey, with the exception that while members of this group received their first COPS grant after the first administration of the National Evaluation of COPS Survey in 1996, their practices in 1996 could have been influenced by the COPS program indirectly. A final group of agencies consists of those that did not receive a COPS grant before the 1996 administration of the survey or during the years from 1997 through 2000. Because the effects of experience with COPS grants before and after 1996 could differ, we chose to make two types of comparisons. First, we examined the mean changes in policing practices from 1996 to 2000 for each of the following groups of agencies: (1) agencies that made expenditures on COPS grants in 1994 through 1996, (2) agencies that made expenditures on a COPS grant in 1997 through 2000, (3) agencies that made no expenditures on a COPS grant after 1996, and (4) agencies that made no expenditures on a COPS grant in 1994 through 2000. These mean comparisons allowed us to see whether changes in practices were associated with receipt of a grant in either the early period of the program (through 1996) or when the program was more fully implemented (1997 through 2000).
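The four-group comparison of mean changes in practices amounts to a group-by-mean computation. The records and the group-assignment rule below are hypothetical simplifications of the grouping described above (the actual groups in the text are defined by expenditure histories that this partition only approximates):

```python
# Hypothetical agency records: change in reported practices (2000 minus
# 1996) and indicators for when COPS funds were spent. Values invented.
records = [
    {"change": 3, "spent_94_96": True,  "spent_97_00": True},
    {"change": 1, "spent_94_96": True,  "spent_97_00": False},
    {"change": 2, "spent_94_96": False, "spent_97_00": True},
    {"change": 4, "spent_94_96": False, "spent_97_00": True},
    {"change": 0, "spent_94_96": False, "spent_97_00": False},
]

def group(r):
    """Partition agencies by when (if ever) they spent COPS funds."""
    if r["spent_94_96"] and r["spent_97_00"]:
        return "spent in both periods"
    if r["spent_94_96"]:
        return "no expenditures after 1996"
    if r["spent_97_00"]:
        return "first expenditures in 1997-2000"
    return "no expenditures in 1994-2000"

sums, counts = {}, {}
for r in records:
    g = group(r)
    sums[g] = sums.get(g, 0) + r["change"]
    counts[g] = counts.get(g, 0) + 1
mean_change = {g: sums[g] / counts[g] for g in sums}
# e.g., mean_change["first expenditures in 1997-2000"] → 3.0
```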
We then examined whether the level of COPS expenditures between the two administrations of the survey was associated with changes in practices between 1996 and 2000 by regressing the change in practices on the change in cumulative per capita COPS expenditures between the period preceding wave 1 of the survey (1994 through 1996) and the period following wave 1 of the survey (1997 through 2000). As with the Policing Strategies Survey, we used two-factor fixed-effects regression techniques, which allowed us to control for unobserved characteristics of agencies and underlying trends in the adoption of policing practices. We also controlled for economic conditions and population changes in the localities in which the agencies were located. In addition, we used weighted regression to address the complex design of the National Evaluation of COPS Survey. We estimated separate regressions of the effect of the receipt of a COPS grant and of the cumulative per capita amount of COPS expenditures on the levels of reported policing practices. There were no significant differences in the overall adoption of policing practices associated with changes in per capita spending on COPS grants (table 26). Methods to Review Policing Practices To determine whether certain types of policing practices may be effective in reducing crime, we analyzed systematic reviews of research studies on the effectiveness of policing practices. How We Selected Studies We identified six studies that provided summaries of research on the effectiveness of policing practices in reducing crime. We chose to review studies that reviewed research, rather than reviewing all of the original studies themselves, because of the volume of studies that have been conducted on the effectiveness of policing practices. We reviewed the following studies: Braga, Anthony. “Effects of Hot Spots Policing on Crime,” Annals, AAPSS, vol. 578 (November 2001), pp. 104-125. Eck, John.
“Preventing Crime at Places” in Sherman, L., et al. (eds.) Preventing Crime: What Works, What Doesn’t, What’s Promising: A Report to the United States Congress. Washington, D.C.: National Institute of Justice, 1998. Eck, John, and Edward Maguire. “Have Changes in Policing Reduced Violent Crime? An Assessment of the Evidence,” in Blumstein, A., and J. Wallman, eds., The Crime Drop in America. Cambridge, England: Cambridge University Press, 2000. Sherman, Lawrence. “Policing for Crime Prevention,” in Sherman, L., et al. (eds.) Preventing Crime: What Works, What Doesn’t, What’s Promising: A Report to the United States Congress. Washington, D.C.: National Institute of Justice, 1998. Skogan, Wesley, and Kathleen Frydl. “The Effectiveness of Police Activities in Reducing Crime, Disorder, and Fear,” in Skogan, W., and K. Frydl (eds.), Fairness and Effectiveness in Policing: The Evidence. Washington, D.C.: National Academies Press, pp. 217-251, 2004. Weisburd, David, and John Eck. “What Can Police Do to Reduce Crime, Disorder, and Fear?” Annals, AAPSS, vol. 593 (May 2004), pp. 42-65. A limitation of basing our work on reviews is that we did not assess the original studies, but rather we relied on the descriptions and assessments as provided by the authors of the reviews. Sometimes the reviews did not cite specific information about the strength of the methodology of the underlying studies that were included in the reviews. How We Reviewed Studies We developed a data collection instrument to capture systematically information about the methodologies of the reviews, the types of policing practices reviewed, findings about each type of practice, and the reviewers’ conclusions about the effectiveness of a particular practice or group of practices in reducing crime. Each research review was read and coded by a social science analyst who had training and experience in reviewing research methodologies.
This analyst recorded, for each practice discussed in the research review, (1) the types of crimes against which the practices were used (e.g., all crimes, violent crimes, property crimes, disorder); (2) whether the practice was generally effective in reducing crime, had no effect in reducing crime, or had an ambiguous impact; (3) whether there was displacement of crimes away from the areas where the practices were used; and (4) whether there were negative effects of the practices (e.g., complaints against the police or the diversion of resources from other policing activities). A second, similarly trained analyst then read the reviews and verified the accuracy of the information recorded by the first analyst. We then summarized the findings about each practice from the data collection instruments prepared for each of the six reviews. Some practices were discussed in only one review, while others were discussed in more than one review. The Research Literature Shows That Some Policing Practices May Be Effective in Reducing Crime Our analysis of six systematic reviews of evaluations of the effectiveness of various policing practices in preventing crime indicates that there is moderate to strong evidence that problem-oriented policing practices and place-oriented practices are either effective or promising as strategies for addressing crime problems. For example, problem-oriented approaches that focus on criminogenic substances such as guns and drugs appear to be effective in reducing both violent and property crimes. And hot spots approaches—place-oriented approaches that temporarily apply police resources to discrete locations where crime is concentrated at much higher rates than occurs jurisdictionwide—have also been found to be effective in reducing crime.
However, the magnitudes of the effects of these interventions are difficult to estimate, especially on citywide crime rates, as the interventions that were reviewed as effective generally were concentrated in comparatively small places. Further, the enduring nature of these interventions is not fully understood. It is not known, for example, how long the effects of a problem- or place-oriented intervention persist. In addition, some of the reviews point out that research designs undertaken to date make it difficult to disentangle the effects of problem-oriented policing from hot spots policing. There is suggestive, but limited, evidence that the combination of these practices may be more effective in preventing or reducing crime than any one strategy alone. In contrast to the findings on problem-oriented and place-oriented policing practices, there is little evidence in the literature for the effectiveness of community collaboration practices—such as increasing foot patrol, establishing community partnerships, and encouraging citizen involvement—in reducing or preventing crime. Appendix VIII: Comments from the Department of Justice Appendix IX: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, the following individuals made key contributions to this report: William J. Sabol, Tom Jessor, David R. Lilley, Benjamin A. Bolitzer, George H. Quinn, Jr., and Grant M. Mallie. Others contributing included David P. Alexander, Harold J. Brumm Jr., Scott Farrow, Kathryn E. Godfrey, Adam T. Hatton, Ronald La Due Lake, Terence C. Lam, and Robert Parker. Bibliography Attorney General of the United States. Report to Congress: Office of Community Oriented Policing Services. Washington, D.C.: U.S. Department of Justice, September 2000. Blumstein, Alfred, and Joel Wallman (eds.). The Crime Drop in America. Cambridge, England: Cambridge University Press, 2000. Braga, Anthony.
“Effects of Hot Spots Policing on Crime,” Annals, AAPSS, vol. 578 (November 2001), pp. 104-125. Cook, Philip. “The Clearance Rate as a Measure of Criminal Justice System Effectiveness,” Journal of Public Economics, vol. 11 (1979), pp. 135-142. Davis, Gareth, et al. “The Facts about COPS: A Performance Overview of the Community Oriented Policing Services Program.” Washington, D.C.: The Heritage Foundation, September 25, 2000. Di Tella, Rafael, and Ernesto Schargrodsky. “Do Police Reduce Crime? Estimates Using the Allocation of Police Forces after a Terrorist Attack,” American Economic Review, vol. 94, no. 1 (March 2004), pp. 115-133. Dunworth, Terence, Peter Haynes, and Aaron J. Saiger. National Assessment of the Byrne Formula Grant Program. Washington, D.C.: National Institute of Justice Research Report, June 1997. Eck, John. “Preventing Crime at Places” in Sherman, L., et al. (eds.) Preventing Crime: What Works, What Doesn’t, What’s Promising: A Report to the United States Congress. Washington, D.C.: National Institute of Justice, 1998. Eck, John, and Edward Maguire. “Have Changes in Policing Reduced Violent Crime? An Assessment of the Evidence,” in Blumstein, A., and J. Wallman, eds., The Crime Drop in America. Cambridge, England: Cambridge University Press, 2000. Ehrlich, Isaac. “Participation in Illegitimate Activities: A Theoretical and Empirical Investigation,” Journal of Political Economy, vol. 81, no. 3 (May/June 1973), pp. 521-65. Evans, William N., and Emily Owens. “Flypaper COPS.” College Park, Maryland: University of Maryland. Available online at www.bsos.umd.edu/econ/evans/wpapers/Flypaper%20COPS.pdf, 2005. Executive Office of the President. Performance and Management Assessments: Budget of the United States, Fiscal Year 2004. Washington, D.C.: The White House, 2003. Federal Bureau of Investigation. Crime in the United States, 2002, Uniform Crime Reports. Washington, D.C.: U.S. Department of Justice. Printed annually.
GAO. Technical Assessment of Zhao and Thurman's 2001 Evaluation of the Effects of COPS Grants on Crime, GAO-03-867R. Washington, D.C.: June 13, 2003.

Johnson, Calvin C., and Jeffrey A. Roth. The COPS Program and the Spread of Community Policing Practices, 1995-2000. Washington, D.C.: The Urban Institute, June 2003.

Klick, Jonathan, and Alexander Tabarrok. "Using Terror Alert Levels to Estimate the Effect of Police on Crime," Journal of Law and Economics, vol. XLVIII (April 2005).

Koper, Christopher S., et al. Putting 100,000 Officers on the Street: A Survey-Based Assessment of the Federal COPS Program. Washington, D.C.: The Urban Institute, October 2002.

Levitt, Steven D. "The Effect of Prison Population Size on Crime Rates: Evidence from Prison Overcrowding Litigation," Quarterly Journal of Economics, vol. 111, no. 2 (May 1996).

Levitt, Steven D. "Using Electoral Cycles in Police Hiring to Estimate the Effect of Police on Crime," American Economic Review, vol. 87 (1997), pp. 270-290.

Levitt, Steven D. "The Relationship between Crime Reporting and Police: Implications for the Use of Uniform Crime Reports," Journal of Quantitative Criminology, vol. 14, no. 1 (1998), pp. 61-81.

Levitt, Steven D. "Using Electoral Cycles in Police Hiring to Estimate the Effects of Police on Crime: Reply," American Economic Review, vol. 92, no. 4 (September 2002), pp. 1,244-1,250.

Lynch, James P. "Exploring the Sources of Non-response in the Uniform Crime Reports: Things to Do Before Multiple Imputation." Paper presented at the Annual Meetings of the American Society of Criminology Research Conference, November 19, 2003.

Marvell, Thomas, and Carlisle Moody. "Specification Problems, Police Levels, and Crime Rates," Criminology, vol. 34, no. 4 (1996), pp. 609-646.

McCrary, Justin. "Using Electoral Cycles in Police Hiring to Estimate the Effect of Police on Crime: Comment," American Economic Review, vol. 92, no. 4 (September 2002), pp. 1,236-1,243.

National Center for Health Statistics. Estimates of the July 1, 2000-July 1, 2003, United States resident population from the Vintage 2003 postcensal series by year, county, age, sex, race, and Hispanic origin, prepared under a collaborative arrangement with the U.S. Census Bureau. Available on the Internet at http://www.cdc.gov/nchs/about/major/dvs/popbridge/popbridge.htm. September 14, 2004.

Rosenthal, Arlen M., and Lorie Fridell. National Survey of Community Policing Strategies Update, 1997, and Modified 1992-1993 Data [Computer file]. Inter-university Consortium for Political and Social Research (ICPSR) version. Calverton, Maryland: ORC Macro International, Inc., 2002. Ann Arbor, Michigan: ICPSR, 2002.

Roth, Jeffrey, et al. National Evaluation of the COPS Program—Title I of the 1994 Crime Act. Washington, D.C.: National Institute of Justice, August 2000.

Sherman, Lawrence. "Policing for Crime Prevention," in Sherman, L., et al. (eds.), Preventing Crime: What Works, What Doesn't, What's Promising: A Report to the United States Congress. Washington, D.C.: National Institute of Justice, 1998.

Skogan, Wesley, and Kathleen Frydl. "The Effectiveness of Police Activities in Reducing Crime, Disorder, and Fear," in Skogan, W., and K. Frydl (eds.), Fairness and Effectiveness in Policing: The Evidence. Washington, D.C.: National Academies Press, 2004, pp. 217-251.

Swimmer, Eugene. "The Relationship of Police and Crime: Some Methodological and Empirical Results," Criminology, vol. 12 (1974), pp. 293-314.

Weisburd, David, and John Eck. "What Can Police Do to Reduce Crime, Disorder, and Fear?" Annals, AAPSS, vol. 593 (May 2004), pp. 42-65.

Wycoff, Mary Ann. Community Policing Strategies: A Comprehensive Analysis. Washington, D.C.: The Police Foundation, November 1994.

Zhao, J., and Q. Thurman. A National Evaluation of the Effect of COPS Grants on Crime from 1994 to 1999. Report submitted to the Office of Community Oriented Policing Services. Washington, D.C.: U.S. Department of Justice, December 2001.
Zhao, J., and Q. Thurman. Funding Community Policing to Reduce Crime: Have COPS Grants Made a Difference from 1994 to 2000? Report submitted to the Office of Community Oriented Policing Services. Washington, D.C.: U.S. Department of Justice, July 2004.
Between 1994 and 2001, the Office of Community Oriented Policing Services (COPS) provided more than $7.6 billion in grants to state and local communities to hire police officers and promote community policing as an effective strategy to prevent crime. Studies of the impact of the grants on crime have been inconclusive. GAO was asked to evaluate the effect of the COPS program on the decline in crime during the 1990s. GAO developed and analyzed a database containing annual observations on crime, police officers, COPS funds, and other factors related to crime, covering years prior to and during the COPS program, or from 1990 through 2001. GAO analyzed survey data on policing practices that agencies reportedly implemented and reviewed studies of policing practices. GAO assessed: (1) how COPS obligations were distributed and how much was spent; (2) the extent to which COPS expenditures contributed to increases in the number of police officers and declines in crime nationwide; and (3) the extent to which COPS grants during the 1990s were associated with policing practices that crime literature indicates could be effective. In commenting on a draft of this report, the COPS Office said that our findings are important and support conclusions reached by others. About half of the COPS funds distributed from 1994 through 2001 went to law enforcement agencies in localities of fewer than 150,000 persons and the remainder to agencies in larger communities. This distribution roughly corresponded to the distribution of major property crimes but less so to the distribution of violent crimes. For example, agencies in larger communities received about 47 percent of COPS funds but accounted for 58 percent of the violent crimes nationwide. From 1994 through 2001, COPS expenditures constituted about 1 percent of total local expenditures for police services. 
For the years 1994 through 2001, expenditures of COPS grants by grant recipients resulted in varying numbers of additional officers above the levels that would have been expected without the expenditures. For example, during 2000, the peak year of COPS expenditures by grant recipients, these expenditures led to an increase of about 3 percent in the level of sworn officers--or about 17,000 officers. Adding up the number of additional officers in each year from 1994 through 2001, GAO estimated that COPS expenditures yielded about 88,000 additional officer-years. GAO obtained its results from fixed-effects regression models that controlled for pre-1994 trends in the growth rate of officers, other federal expenditures, and local- and state-level factors that could affect officer levels. From its analysis of the effects of increases in officers on declines in crime, GAO estimated that COPS funds contributed to declines in the crime rate that, while modest in size, varied over time and among categories of crime. For example, between 1993 and 2000, COPS funds contributed to a 1.3 percent decline in the overall crime rate and a 2.5 percent decline in the violent crime rate from the 1993 levels. The effects of COPS funds on crime held when GAO controlled for other crime-related factors--such as local economic conditions and state-level policy changes--in its regression models, and the effects were commensurate with COPS funds' contribution to local spending on police protection. Factors other than COPS funds accounted for the majority of the decline in crime during this period. For example, between 1993 and 2000, the overall crime rate declined by 26 percent, and the 1.3 percent decline due to COPS amounted to about 5 percent of the overall decline. Similarly, COPS contributed about 7 percent of the 32 percent decline in violent crime from 1993 to 2000.
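The attribution arithmetic above follows from simple ratios of the quoted figures. A minimal sketch, using only the rounded numbers in the text (this is illustrative arithmetic, not GAO's fixed-effects model):

```python
# Share of the total 1993-2000 crime decline attributable to COPS funds,
# computed from the rounded percentages quoted in the report text.

def share_of_decline(cops_decline_pct, total_decline_pct):
    """Portion (in percent) of the total decline accounted for by the COPS-related decline."""
    return 100 * cops_decline_pct / total_decline_pct

overall = share_of_decline(1.3, 26)   # overall crime rate
violent = share_of_decline(2.5, 32)   # violent crime rate

print(f"overall: {overall:.1f}%")  # 5.0% -- "about 5 percent of the overall decline"
print(f"violent: {violent:.1f}%")  # 7.8% -- which the report rounds to "about 7 percent"
```

The small discrepancy on the violent-crime share (7.8 vs. "about 7") reflects rounding in the quoted inputs.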
From 1993 through 1997, agencies that received and spent COPS grants reported larger changes in policing practices and in the subsets of practices that focus on solving crime problems or focus on places where crime is concentrated than did agencies that did not receive the grants. The differences held after GAO controlled for underlying trends in the reported use of these policing practices. From 1996 to 2000, there was no overall increase in policing practices associated with COPS grants. In its review of studies on policing practices, GAO found that problem-solving and place-oriented practices can be effective in reducing crime.
Background

The nation’s major cash assistance program to poor families, Aid to Families with Dependent Children (AFDC), provides cash benefits to needy families with children who lack support from one or both of their parents because of unemployment, incapacity, absence, or death. Funded with federal and state dollars, the program operates as an individual entitlement—that is, everyone who meets the eligibility requirements is entitled to receive benefits. In fiscal year 1993, AFDC benefits supported 5 million families and more than 9.5 million children each month and cost over $25 billion in federal and state funds. The Family Support Act of 1988 created the Job Opportunities and Basic Skills Training (JOBS) program, which requires the states to enroll an increasing proportion of their adult AFDC recipients (primarily women) in the education, training, and employment-related activities they need to get jobs and avoid long-term welfare dependency. The states are permitted substantial flexibility in designing and implementing their JOBS programs, but they are required to provide participants with the support services deemed necessary, such as child care and transportation. Federal funds to match state JOBS expenditures are capped, but most states have not reached the limit of that cap. However, as we reported last December, the share of AFDC recipients active in JOBS is limited; only about one fourth of those required to participate were served in an average month in fiscal year 1993. Rapid growth in the AFDC caseload since 1989 and concern about program costs and beneficiaries’ long-term dependence have led to widespread dissatisfaction with the AFDC program and to several congressional proposals to reform it. Some provisions of current proposals represent continuity with previous legislative efforts to strengthen the employment focus of the program, such as requiring larger proportions of recipients to participate in a work program.
Other provisions propose dramatic changes in AFDC’s structure, such as imposing time limits on the receipt of benefits and replacing the individual entitlement to benefits with a block grant for which federal funding would be fixed. Concern about welfare dependency has spurred policy initiatives since the 1970s to encourage or assist welfare clients to get jobs. The states have obtained waivers from existing federal statutes and regulations to test a variety of welfare-to-work initiatives. One condition of the waivers is that the states rigorously evaluate the effects of these initiatives. Evaluations conducted under such waivers informed the formulation of the JOBS program; others completed since 1988 can similarly inform the current debate. This report presents the results of our evaluation synthesis of nine published high-quality studies, from eight states, of welfare-to-work experiments for adult AFDC recipients. We identified these studies by conducting a systematic search and methodological review of all evaluations published since the Family Support Act of 1988 that focused, at least in part, on moving clients from welfare to work. All nine studies used comparison groups, six of which were formed through random assignment, making it possible to estimate the effects of a program by comparing the outcomes for its participants with those for nonparticipants. To meet our first objective, we compared the approaches used in these experiments with provisions of the proposed welfare reforms being debated. Our list of provisions was derived primarily from the pending House welfare reform bill, H.R. 4, but we also included a few provisions from other bills introduced in the 104th Congress. To meet our second objective—to identify approaches that successfully moved AFDC recipients from welfare to work—we compared and contrasted the statistically significant effects of similar and dissimilar programs on participants’ earnings, employment, and welfare receipt. 
(See appendix I for details on our selection and analysis of these studies.) We conducted our work in accordance with generally accepted government auditing standards between December 1994 and April 1995. However, we did not independently verify the information in the evaluation reports.

Principal Findings

The Completed State Experiments Have Tested Only Some of the Proposed Welfare Reforms

The welfare-to-work experiments we reviewed tested many of the provisions in welfare reform proposals (including H.R. 4), such as conducting some form of work program that may provide support services such as child care and requiring adult AFDC recipients to participate in that work program and to cooperate with child support enforcement. (See table 1.) In addition, some states experimented with extending medical and child care benefits to families as they leave welfare for work and with increasing the disregard of earnings while on welfare, both of which are provisions in other current proposals. Of course, the states may not have implemented these features in quite the same form as they appear in the legislative proposals. Proposed provisions not in the state experiments we reviewed include limits on the length of welfare receipt, prohibition of additional benefits for additional children born to families on welfare, and requirements that unwed teenage mothers live with a parent or guardian. These are the subject of ongoing or planned experiments and have not yet been evaluated. Prohibiting aid to noncitizens, creating block grants with fixed funding, and ending requirements that the states match federal expenditures have not been options available to the states. Replacing the current AFDC program with a block grant would basically repeal current federal law prescribing state procedures for determining individuals’ eligibility for benefits and benefit levels.
This change aims to increase the states’ flexibility in managing their programs of assistance to needy families and would provide the states with a fixed amount of funds each year rather than matching (at federally specified rates) whatever their expenditures had been. The states have also tested several program features not explicitly addressed in some of the legislative proposals, such as enhancing employment and training activities and consolidating the AFDC and Food Stamp programs. Some of these experiments were begun before the JOBS program was enacted but tested features it currently requires, such as providing a broad range of employment-related and support services. Under H.R. 4, the states would be permitted but no longer required to provide as broad a range of employment-related services and supports. Indeed, the states might be discouraged from enrolling clients in some types of education and training because these activities would not count toward the bill’s work program participation requirements. The states would face financial sanctions if they failed to meet minimum participation levels. Thus, these state experiments are relevant to the question of whether the more inclusive provisions of current law should be retained.

A Range of Programs Had Positive Results

All but three of the experiments had a statistically significant positive effect on at least one of the following: participants’ employment, earnings, receipt of welfare, and welfare payment amounts. Four were successful on all four outcomes, three others on only one or two. Effects were positive more often on employment and earnings than on AFDC receipt, but a variety of approaches and their combinations had some success. Program outcomes were often measured 1, 2, or sometimes 3, and in one case up to 5, years after clients had been enrolled. We scored them as “positive” if a statistically significant effect in the intended direction was recorded at any of these time points.
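The scoring rule just described can be stated as a simple predicate; the sketch below is ours, not the report's (the function name and data layout are hypothetical):

```python
# Score a program outcome "positive" if a statistically significant effect
# in the intended direction was recorded at ANY measured time point.
# Each time point is a pair: (effect_in_intended_direction, statistically_significant).

def scored_positive(time_points):
    return any(in_direction and significant
               for in_direction, significant in time_points)

# Example: an effect that appears only at the year-3 follow-up still scores positive.
earnings_effects = [(True, False), (False, False), (True, True)]
print(scored_positive(earnings_effects))  # True
```

Note that under this rule a single significant time point suffices, which is why effects measured only in a late quarter (as in some programs discussed below) can still be counted.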
The more complex scores for Florida’s Project Independence (FPI) program are discussed below. Table 2 summarizes the major features being tested and does not include features that applied to both the experimental and comparison groups. For example, programs that did not test an employment and training program (the last four rows) offered similar levels and kinds of employment services to both the program and comparison groups, but only the program participants were offered an increase in the earned income disregard. Because the programs typically combined several features at once, individually they do not provide clear tests of the effectiveness of single program features. Therefore, we drew our conclusions about the success of program approaches (including clusters of these features) both by comparing the effects of programs that included and did not include the same feature and by comparing the features of the more and less successful programs. However, our sample of 10 programs is not large enough to provide conclusive answers, because there are many differences between the studies, some of which might have influenced their outcomes.

Combining a Broad Range of Employment-Related Services and Supports Yielded the Best, Though Modest, Results

The most successful welfare-to-work programs—those with the largest and most consistent effects—offered participants an expanded mix of education, training, and employment services; increased child care assistance; and mandated some form of client participation. Four programs using this same general approach—San Diego’s Saturation Work Initiative Model (SWIM), Massachusetts’ Employment and Training (ET) program, and California’s Greater Avenues for Independence (GAIN) program, both statewide and in Riverside County—were the only ones to record statistically significant effects on all four outcomes. These programs provided a mix of employment-related services, of which clients could receive one or more.
Education and training included assistance in basic education, preparation for the high-school equivalency examination (or GED), English-language training, and vocational classes. Intensive job search included program staff working with employers to develop job placements, assisting clients with their job search, or starting clients with job searches immediately. In addition, some offered community work experience (CWEP), which involves unpaid work in public or nonprofit agencies aimed at increasing clients’ employability. Their evaluations compared participants’ outcomes to those of AFDC clients who received whatever the standard level of employment services was at the time. Since some of these programs began operating before the JOBS program was enacted, they typically offered either a lower level of service than is currently required or nothing at all. Child care assistance was increased to allow participation in employment preparation activities and, during the first year of postwelfare employment, to facilitate the transition off public assistance. Participation mandates included requirements to register for job search and apply for work, participate for a specified number of hours per month, or enroll in a sequence of employment-related activities. However, this does not mean that all clients actually participated; some could be exempted for personal reasons, others for lack of program resources. There were, however, some significant differences in the four successful programs. Massachusetts’ ET allowed voluntary client participation and selection of activities after a mandatory work registration, while California’s SWIM enforced a fixed sequence of activities and GAIN allowed a variety of sequences. ET put more emphasis on education and training, while GAIN in Riverside put more emphasis on aggressive job search support. The statewide program emphasized basic education more than the other programs. 
Two other programs in Ohio and Florida that took the same general approach had mixed results, which could in part be explained by funding problems that delayed or cut short the full experiment. Ohio’s economy took a downward turn at the start of the Transitions to Independence—Fair Work (TI) program evaluation period, causing an influx of cases and lengthy backlogs. In fact, a majority of clients did not even receive their employment and training assignments. TI achieved effects on only one of the four outcomes. Florida’s FPI showed positive effects for first-year participants on two outcomes, but an economic downturn combined with a budget freeze led to program reductions in the second year. This provided the opportunity to test the effects of the changes—increases in caseloads and the elimination of child care assistance. However, the contribution of these features is unclear because both the early and later groups of participants achieved mixed results. The effects of even the most successful program were modest. The Riverside GAIN program is arguably the most successful of the welfare-to-work programs. It increased the proportion of clients ever employed in 3 years to 67 percent, or 14 percentage points over the comparison group, but this means that 33 percent of clients in the best program were never employed in 3 years. Of those who were employed at the end of 3 years, only 24 percent made more than $5,000 per year. Thus, Riverside GAIN participants averaged a 49-percent increase in earnings over 3 years compared to nonparticipants receiving only traditional AFDC, but this amounted to only $3,113, or about $1,000 per year. The Riverside program lowered average AFDC payments for all participants over 3 years by 15 percent, or $1,983, and reduced the percentage who were receiving AFDC payments after 3 years by 5 percent, compared to the nonparticipants. 
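The Riverside figures above are internally consistent; a quick check with the numbers quoted in the text (illustrative arithmetic only, not the evaluation's own calculations):

```python
# Back out the comparison-group values implied by the Riverside GAIN figures above.
ever_employed_pct = 67          # participants ever employed within 3 years
gain_over_comparison_pp = 14    # percentage-point advantage over the comparison group
earnings_gain_3yr = 3113        # extra earnings over 3 years, in dollars
earnings_gain_pct = 49          # stated as a 49-percent increase over nonparticipants

comparison_employed = ever_employed_pct - gain_over_comparison_pp
per_year_gain = earnings_gain_3yr / 3
implied_comparison_earnings = earnings_gain_3yr / (earnings_gain_pct / 100)

print(comparison_employed)                 # 53 -- percent of comparison group ever employed
print(round(per_year_gain))                # 1038 -- i.e., "about $1,000 per year"
print(round(implied_comparison_earnings))  # 6353 -- implied 3-year comparison-group earnings gain base
```

The implied comparison-group base (roughly $6,350 over 3 years) underscores how low absolute earnings were even among the comparison group, which helps explain why a 49-percent relative gain translates into only about $1,000 per year.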
However, after 3 years only one fourth of its participants had achieved self-sufficiency by being both employed and off welfare. That the successful programs only modestly reduced welfare dependency has, no doubt, a variety of causes. Even when participation was mandated, not all recipients were required to enroll in activities; some were exempt for ill health or to care for an infant, and others had to wait for assignments. In addition, some education and training programs had participation and attendance problems that diminished their success. These might reflect client problems that support services like those in these programs could address, or they might indicate that other interventions are needed. Researchers also point to other barriers to moving welfare recipients into self-supporting employment—in particular, their low skill levels and the low wages and short tenure of low-skill jobs. In 1992, 45 percent of the single mothers receiving AFDC lacked a high school diploma and another 38 percent had no schooling beyond high school. Yet occupations that accept limited schooling pay fairly low wages, have limited fringe benefits (such as health insurance), and are characterized by high job turnover. Thus, relatively short-term training and job search interventions may have a limited effect on recipients whose skill levels are low.

Increasing Work Incentives Also Succeeded When Reinforced by Employment Supports

Rather than enhancing work-related services, the New York Child Assistance Program (CAP) took a different approach, providing an incentive to work by increasing the amount of earned income working recipients could keep. The program supported this incentive by lowering barriers to reentering the job market; it provided child care stipends in advance for clients to use during job search and training. New York’s program successfully increased employment and earnings but did not reduce welfare receipt.
In contrast, two programs that increased work incentives and mandated work program participation without expanding employment-related services or child care assistance have not yet succeeded. Michigan’s “To Strengthen Michigan Families” (TSMF) program increased the amount of the income disregard and also required participation in some form of work program. AFDC clients were required to enter into “a social contract” in which they had to complete 20 hours a week of broadly defined “useful” activities of their own choice, such as education or job search. However, no additional child care assistance was provided to assist them in keeping this contract, and there were no significant effects in the first year. During the second year, some small effects were achieved for both earnings and welfare receipt for some subgroups but typically only in the final quarter or month for those with 2 years of data. Evaluation of the effects on the full sample and their stability will have to await future reports. Similarly, Alabama’s Avenues to Self-Sufficiency through Employment and Training Services (ASSETS) program increased work incentives and strengthened its work registration and child support cooperation requirements. In addition to raising the amount of the basic earnings disregard, ASSETS raised the limits on savings and other resources that families were allowed to have while remaining eligible for AFDC. However, it also reduced the amount that could be specifically deducted from earnings for child care expenses. The implementation of their planned employment and training component was delayed by 2 years, so available results do not fully reflect it. This program has had no significant effects on welfare receipt or average payment so far, although the evaluation is not yet complete. Finally, like New York’s CAP, Washington’s Family Independence Program (FIP) both provided economic incentives to encourage work and increased child care assistance. 
It also aimed to increase participation in education and training by offering small cash bonuses to the participants. However, FIP’s plans became difficult to implement under budget restrictions, and caseloads increased sharply without a corresponding increase in staff. Several features were implemented minimally, such as improving a client’s contact with a case manager and increasing resources to pay for education and training. In addition, the comparison group began getting very similar services in 1990, about a year and a half into the program, when JOBS was implemented in Washington state. Thus, it is difficult to know how to attribute the significant increase in AFDC receipt and payments experienced by this program’s participants.

Conclusions

Our review of state experiences suggests that the most successful programs offered a broader package of employment-related services than some proposed reform legislation encourages. The programs that successfully increased employment and earnings and reduced welfare receipt offered a broad mix of education, training, and employment-related services and supports like those in the current JOBS program. However, under H.R. 4, welfare recipients enrolled in some education and training activities would not count toward meeting the work program participation levels that are required in order to avoid financial sanctions. Some provisions of the proposed reforms—like the time limit on benefit receipt—have not yet been tested, and thus we cannot confidently project the future effects of either those individual provisions or the entire package of reforms. For example, imposing a strict limit on the length of time a family can receive benefits might influence participants’ work behavior. This could influence the effectiveness of both types of work programs, those offering either a broad or narrow package of services; we simply do not have similar past experiences to draw upon.
The modest results of even the most successful programs imply that (1) within the current program structure, even increasing investments in employment and support services will not quickly reduce caseloads or welfare dependency, and (2) additional research is needed to understand the barriers to better program performance and to develop and test more successful approaches. However, it should be recognized that some of these barriers may reside outside the welfare program’s control, including poor school preparation and the limited availability and low wages of low-skill jobs. Although federal funds for AFDC benefits have not been capped before, the states have limited the funds available for their work programs. Our review suggests that adequacy of funds can be a critical barrier to the success of efforts to help clients move from welfare to work. Three states in our review were unable to sustain or fully implement their planned level of service because state budget constraints kept them from increasing program capacity to match their growing caseloads. However, by reducing federal prescriptions on the use of these funds, the reform proposals aim to increase the states’ flexibility to manage such resource constraints. Many of the program evaluations that we reviewed were conducted under the requirement that waivers of federal regulations be rigorously evaluated. The pending welfare reform legislation would reduce federal regulation in order to foster further state experimentation, but it would thereby effectively remove that evaluation requirement and thus possibly reduce the incentive for future evaluations of state experiments.

Recommendations

We are not making recommendations in this report.

Agency Comments and Our Response

The U.S.
Department of Health and Human Services (HHS) commented on a draft of this report and generally agreed with our conclusions but argued that (1) the differences between the programs studied and those that would be offered under H.R. 4 are so substantial that one must conclude that the proposed reforms have not been tested and (2) the report makes too strong a case for individual factors explaining program success or failure and should instead describe the “package” of services that may have led to certain effects. On the first point, we agree that some features of the proposed reforms have not been tested, but we believe that the states’ experiences with the program features that would be included under some of the current proposals, as well as with other features that might be discouraged, are relevant to consideration of these reforms. The text has been altered, as necessary, to clarify this distinction. On the second point, our general approach was to focus on packages of services. However, where appropriate we have made changes to clarify this. In addition, HHS provided suggestions for clarifications that we have incorporated, as appropriate, throughout the text. HHS’s comments are reprinted in appendix II. We will send copies of this report to the Chairman of the House Subcommittee on Human Resources of the Committee on Ways and Means, the Chairman of the Senate Finance Committee, the Secretary of Health and Human Services, and others who are interested. Copies will also be made available to others on request. If you have any questions concerning this report or need additional information, please call me on (202) 512-2900 or Robert L. York, Director of Program Evaluation in Human Services Areas, on (202) 512-5885. Other major contributors to this report are listed in appendix III.

Our Evaluation Synthesis Methodology

We conducted an evaluation synthesis to identify approaches that have successfully helped welfare clients achieve economic independence.
That is, we conducted a systematic review and analysis of the results of previous evaluation studies of programs sharing this goal. Whereas some evaluation syntheses examine studies of similar programs to learn whether a treatment consistently has had the intended effect, we examined studies of programs that used a range of different approaches toward the same goal to learn which ones had been successful. Our evaluation synthesis consisted of several steps. The first step entailed locating state welfare-to-work experiments and screening them to identify rigorous evaluation studies with reliable results in terms of the intended outcomes. In the second step, we identified the commonalities and differences among the programs and assessed whether these were related to the programs’ demonstration of effects. We then drew conclusions from the cumulative picture of existing research about what approaches have helped AFDC clients move from welfare to work.

Search for and Selection of Studies

We identified relevant, potentially high-quality studies by searching for as many existing evaluation studies as possible of welfare-to-work programs for adult AFDC clients. Our criteria were as follows: a program could have started before 1988, but its evaluation had to have been reported after the passage of the Family Support Act of 1988; the study had to test, at least in part, the effect of welfare-to-work initiatives on adult AFDC single parents; the study had to measure the effects of the program on employment or AFDC receipt; and the program’s effects had to be measured through a comparison group of nonparticipants (not necessarily a control group). We searched for references to terms such as Family Support Act, JOBS, and welfare reform in on-line bibliographic databases, including CCRSP, ERIC, Sociological Abstracts, the PAIS International index of the Public Affairs Information Service, and the NIS index of the U.S. Department of Commerce.
From the resulting abstracts, we were able to screen the hundreds of citations down to six promising evaluations. In addition, we reviewed the bibliographies of research studies and interviewed experts on welfare evaluation to identify other studies we should consider. The experts identified an additional three studies that had only just been published and therefore had not yet appeared in databases or bibliographies. This gave us a total of nine potentially high-quality evaluations of 10 different programs from eight states. (The Riverside County GAIN evaluation included treatments and effects sufficiently different from the rest of California’s GAIN evaluation that we considered them as separate programs.) Finally, we confirmed this list of nine evaluations with program and evaluation officials at HHS. They suggested several studies that we might consider as background but no additional impact evaluations. We explicitly excluded programs focused exclusively on AFDC teenagers, who may have very different needs. We also excluded unpublished studies, implementation studies, evaluations of single program features rather than complete programs, and many studies and reviews that did not examine program effects. So, for example, we excluded the Utah Unemployed Parents evaluation and the National Job Training Partnership Act study, because they did not focus on single parents. Quality Review of Evaluation Studies After identifying the 10 programs, we rated the quality of each study to ensure that the research was rigorous and would produce reliable results. 
We used six specific criteria, adapted from dimensions in The Evaluation Synthesis, that together would reflect the rigor, consistency, and reliability of an evaluation study: similarity of the comparison group to the program’s clients, adequacy of the sample size for the analyses performed, standardization of data collection procedures, appropriateness of the measures used to represent the outcome variables, adequacy of the statistical or other methods used to control for threats to validity, and presence and appropriateness of the methods used to analyze the statistical significance of observed differences. We rated each study on a three-point scale from “unacceptable,” because the report provided no information on the dimension or the method was so flawed that the data were probably wrong, to “acceptable,” indicating that an appropriate method had been used or attempts had been made to minimize problems. Results of Quality Review Most of the 10 programs had well-designed and rigorously structured quasi-experimental or experimentally based evaluations. Six of the nine evaluations had comparison groups formed by random assignment. In Alabama and Washington, the comparison groups were drawn from AFDC clients in demographically similar jurisdictions; in Massachusetts, from a random sample of clients who did not start a program activity within a specified time period. That all 10 programs met our quality standards reflected the rigor of our initial screening. However, there were problems with the implementation and execution of several of these programs, rather than with their evaluation designs, that have to be kept in mind when interpreting their results. A weakness, or confounding factor, in 3 programs was the similarity in services received by the program participants and the comparison group. (This was a serious problem in Washington but only a minor problem in California’s SWIM and GAIN programs.)
This type of confounding factor means that the standard measure of a program’s effect—the difference between outcomes for the two groups—most likely underestimates the program’s potential effect. Overview of Programs All 10 programs targeted single adult AFDC recipients, but 2 also included a small number of unemployed couples in their results. The recipients were overwhelmingly women. Some programs were statewide while others were conducted in several counties or just one county. A few were voluntary; most were mandatory. Some included mothers with children younger than 6 but older than 3; others simply excluded mothers with preschool children. Some delivered services directly; others provided referrals or did nothing at all. Some programs included new AFDC applicants, others included people already enrolled, and some used both. Synthesis of Program Evaluation Results We focused on program effects on aspects of economic self-sufficiency: employment, earnings, and public assistance receipt (any effects reported on additional outcomes are not included here). For each outcome in each study, we compared outcomes for participants receiving program services (the treatment group) with those for the control (or comparison) group; statistically significant differences were deemed to be program effects. The evaluation reports estimated the likelihood that these differences stemmed from random chance by using standard tests of statistical significance. For our interpretation, we used a common significance level of 5 percent (.05) or less, which was stricter than that used by some of the evaluations. We used a structured approach to look for program features or characteristics that might explain why some programs had positive effects and others did not, for each of the desired outcomes. First, we hypothesized how each of a program’s features might affect each of its outcomes. Then we compared the results of the programs that had each of those features and those that did not.
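The comparison described above amounts to a standard test of the difference between two proportions. The sketch below, with invented group sizes and employment counts, shows how a treatment-versus-comparison difference would be judged against the 5-percent significance level.

```python
# Two-proportion z-test for a hypothetical treatment/comparison contrast.
# The group sizes and employment counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_test(employed_t, n_t, employed_c, n_c):
    """Return (difference in employment rates, two-sided p-value)."""
    p_t, p_c = employed_t / n_t, employed_c / n_c
    pooled = (employed_t + employed_c) / (n_t + n_c)
    se = sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_t - p_c, p_value

# 42 percent employed in the treatment group vs. 35 percent in the comparison group
diff, p = two_proportion_test(420, 1000, 350, 1000)
print(f"difference = {diff:.2%}, p = {p:.4f}")
print("program effect" if p <= 0.05 else "no detectable effect")  # prints "program effect"
```

The actual evaluations used comparable standard tests; this sketch only illustrates the decision rule applied in the synthesis.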
We found mixed results and observed that programs tended to cluster by common features; we examined these clusters for patterns of success. We also examined features of the studies themselves that might have influenced the reporting of statistically significant results, such as whether the treatment and comparison groups received similar services. We reviewed the comments of the evaluators about any problems they had encountered in program or study implementation. We considered not only what services were delivered and how but also how services might have influenced the participants’ behavior. Strengths and Limitations of Our Synthesis Clearly, looking across the studies provided us with information not readily seen by looking only at individual studies. Including several program approaches in our review allowed us to see that while a particular approach can be successful, this does not mean that it is the only successful approach. Examining patterns across a group of studies may allow inferences about which of the variety of a program’s components were probably responsible for its effects; examining single studies ordinarily does not. However, our sample of nine studies cannot provide conclusive answers, since there are many potential differences between studies that might be related to why one has significant results and another does not. Comments From the Department of Health and Human Services The following are GAO’s comments on the June 19, 1995, HHS letter. GAO Comments 1. The text has been changed to more clearly highlight the differences in employment and training programs between the proposals and the successful programs we reviewed and to indicate that the states may not have implemented features exactly as they appear in current bills. We have also clarified issues relating to program design and environment differences. 2. Our general approach was to focus on the package of features unique to the successful programs, while also noting differences among them.
Characteristics such as the age of a mother’s youngest child, noted in appendix I, did not distinguish the four successful programs from the others. However, we have made changes to the text to remove the impression that a single factor was claimed as responsible for program failures. 3. The text has been changed to indicate study results that are not yet final. 4. The text has been changed to indicate that in Massachusetts, after registering for work, clients could choose whether to engage in other employment-related activities. 5. The text has been clarified to indicate our belief in the importance of the package of services provided by the successful programs. Although some of these programs resemble the current JOBS program, we do not believe they offer sufficient evidence from which to draw conclusions about the JOBS program per se. 6. The names of the programs not using random assignment are now noted in appendix I. 7. Table 2 has been changed to denote the availability of child care in the SWIM program. 8. The text has been changed to clarify that the evaluation of the statewide GAIN program was limited to six counties. 9. The Florida groups have been explained in the text. Major Contributors to This Report Program Evaluation and Methodology Division Bibliography Evaluations in This Report Fein, David J., Erik Beecroft, and John Bloomquist. The Ohio Transitions to Independence Demonstration: Final Impacts for JOBS and Work Choice. Cambridge, Mass.: Abt Associates, 1994. Friedlander, Daniel, and Gayle Hamilton. SWIM: The Saturation Work Initiative Model in San Diego: A Five-Year Follow-up Study. New York: Manpower Demonstration Research Corporation, 1993. Hamilton, William L., et al. The New York State Child Assistance Program: Program Impacts, Costs, and Benefits. Cambridge, Mass.: Abt Associates, 1993. Hargreaves, Margaret, and Alan Werner. 
The Evaluation of the Alabama Avenues to Self-Sufficiency Through Employment and Training Services (ASSETS) Demonstration: Interim Implementation and Process Report. Cambridge, Mass.: Abt Associates, 1993. Kemple, James J., Daniel Friedlander, and Veronica Fellerath. Florida’s Project Independence: Benefits, Costs and Two-Year Impacts of Florida’s JOBS Program. New York: Manpower Demonstration Research Corporation, 1995. Kemple, James J., and Joshua Haimson. Florida’s Project Independence: Program Implementation, Participation Patterns, and First-Year Impacts. New York: Manpower Demonstration Research Corporation, 1994. Long, Sharon K., and Douglas A. Wissoker. The Evaluation of the Washington State Family Independence Program: Final Impact Analysis Report. Washington, D.C.: Urban Institute Press, 1993. Long, Sharon K., Demetra Smith Nightingale, and Douglas A. Wissoker. The Evaluation of the Washington State Family Independence Program. Washington, D.C.: Urban Institute Press, 1994. Nightingale, Demetra Smith, et al. Evaluation of the Massachusetts Employment and Training (ET) Program. Washington, D.C.: Urban Institute Press, 1991. Riccio, James, Daniel Friedlander, and Stephen Freedman. GAIN: Benefits, Costs, and Three-Year Impacts of a Welfare-to-Work Program. New York: Manpower Demonstration Research Corporation, 1994. Werner, Alan, and Robert Kornfeld. The Evaluation of “To Strengthen Michigan Families.” Second annual report. First-Year Impacts. Cambridge, Mass.: Abt Associates, 1994. Werner, Alan, and Robert Kornfeld. The Evaluation of “To Strengthen Michigan Families.” Third annual report. Second-Year Impacts. Cambridge, Mass.: Abt Associates, 1995. Werner, Alan, and David Rodda. Evaluation of the Alabama Avenues to Self-Sufficiency Through Employment and Training Services (ASSETS) Demonstration. Interim impact report. Cambridge, Mass.: Abt Associates, 1993. Other Studies Bloom, Howard S., et al.
The National JTPA Study Overview: Impacts, Benefits, and Costs of Title II-A. Cambridge, Mass.: Abt Associates, 1994. Brock, Thomas, David Butler, and David Long. Unpaid Work Experience for Welfare Recipients: Findings and Lessons from MDRC Research. New York: Manpower Demonstration Research Corporation, 1993. Burghardt, John, and Anne Gordon. The Minority Female Single Parent Demonstration: More Jobs and Higher Pay—How an Integrated Program Compares with Traditional Programs. New York: Rockefeller Foundation, 1990. Burghardt, John, et al. The Minority Female Single Parent Demonstration. Vol. 1. Summary Report. Princeton, N.J.: Mathematica Policy Research, 1992. Friedlander, Daniel. The Impacts of California’s GAIN Program on Different Ethnic Groups: Two-Year Findings on Earnings and AFDC Payments. New York: Manpower Demonstration Research Corporation, 1994. Greenberg, David, Robert Meyer, and Michael Wiseman. “When One Demonstration Site Is Not Enough.” Focus, 16:1 (Spring 1994), 15-20. Gueron, Judith M., and Edward Pauly. From Welfare to Work. New York: Manpower Demonstration Research Corporation, 1991. Hamilton, Gayle. The JOBS Evaluation: Early Lessons from Seven Sites. New York: Manpower Demonstration Research Corporation, 1994. Hargreaves, Margaret, et al. Illinois Department of Public Aid: Community Group Participation and Housing Supplementation Demonstration. Fourth interim report. Cambridge, Mass.: Abt Associates, 1994. Levin-Epstein, Jodie, and Mark Greenberg. The Rush to Reform: 1992 State AFDC Legislative and Waiver Actions. Washington, D.C.: Center for Law and Social Policy, 1992. Manski, Charles F., and Irwin Garfinkel (eds.). Evaluating Welfare and Training Programs. Cambridge, Mass.: Harvard University Press, 1992. Nightingale, Demetra Smith, and Robert H. Haveman (eds.). The Work Alternative: Welfare Reform and the Realities of the Job Market. Washington, D.C.: Urban Institute Press, 1995. O’Neill, June E.
Congressional Budget Office Cost Estimate of H.R. 1214, The Personal Responsibility Act of 1995. Washington, D.C.: Congressional Budget Office, 1995. Porter, Kathryn H. Making JOBS Work: What The Research Says About Effective Employment Programs for AFDC Recipients. Washington, D.C.: Center on Budget and Policy Priorities, 1990. U.S. Department of Labor. What’s Working (and What’s Not): A Summary of Research on the Economic Impacts of Employment and Training Programs. Washington, D.C.: 1995. U.S. Department of Labor, Employment and Training Administration. “American Poverty: The Role of Education, Training, and Employment Strategies in the New Anti-Poverty Struggle.” Evaluation Forum, 10 (Summer 1994). Zambrowski, Amy, and Anne Gordon. Evaluation of the Minority Female Single Parent Demonstration: Fifth Year Impacts at CET. Princeton, N.J.: Mathematica Policy Research, 1993. Related GAO Products Welfare to Work: Most AFDC Training Programs Not Emphasizing Job Placement (GAO/HEHS-95-113, May 19, 1995). Welfare to Work: Participants’ Characteristics and Services Provided in JOBS (GAO/HEHS-95-93, May 2, 1995). Welfare to Work: Current AFDC Program Not Sufficiently Focused on Employment (GAO/HEHS-95-28, Dec. 19, 1994). Child Care: Current System Could Undermine Goals of Welfare Reform (GAO/T-HEHS-94-238, Sept. 20, 1994). Families on Welfare: Sharp Rise in Never-Married Women Reflects Societal Trend (GAO/HEHS-94-92, May 31, 1994). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S.
General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the evaluations of numerous state welfare-to-work experiments completed since 1988, focusing on: (1) how these experiments resemble current welfare reforms; and (2) the approaches that have been effective in increasing employment and earnings or reducing benefits among welfare clients. GAO found that: (1) state welfare-to-work experiments and current federal welfare reform proposals both include work programs for welfare recipients, stricter requirements for participation in work programs and child support enforcement, and increased work incentives; (2) states are testing proposals, such as limiting the length of time a family can receive benefits, but their evaluations are not yet complete; (3) some states have evaluated features of welfare-to-work programs, such as providing a broad mix of employment services, that go beyond some of the current proposals; (4) although the states' experiences provide information regarding some current reform features, it is not possible to project the likely effects of the entire package of reform proposals; (5) the programs that consistently showed the best employment and welfare-related outcomes for participants combined many employment-related activities and support services with some form of participation mandate and had adequate funding to serve their clients; (6) it has been difficult to move welfare recipients to self-supporting employment; (7) after 3 years in welfare-to-work programs, only one-fourth of participants were self-sufficient, that is, both employed and off welfare; and (8) the approach of increasing both work incentives and access to employment has had mixed results among states that have attempted such actions.
Background Telecommuting in the public sector began about 10 years ago as a federal pilot project. Its goals were to save energy, improve air quality, reduce congestion and stress on our highways, and help employees better balance the competing demands of work and family obligations. Typically, formal telecommuting arrangements establish specific times, generally ranging from 1 to 5 days per week, in which employees work at their homes or other remote locations. However, employers may also allow telecommuting on an informal basis, where arrangements are more episodic, shorter term, and designed to meet special employer or employee needs. Although estimates vary depending on the definition of telecommuting that is used, recent data indicate that the number of employers and employees involved in telecommuting arrangements has grown over the past 10 years. In 1992, the U.S. Department of Transportation estimated that there were 2 million telecommuters (1.6 percent of the labor force) working from their homes 1 or 2 days per week. Last year, a private association that promotes the concept of telecommuting estimated that 9.3 million employees telecommuted at least 1 day per week and 16.5 million telecommuted at least 1 day per month. These estimates show that out of 138 million wage and salary workers in the United States, about 7 to 12 percent telecommute periodically. For the federal workforce, a recent OPM survey of 97 federal agencies showed that 45,298 workers, or 2.6 percent of their total workforce, telecommuted at least 52 days per year.
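The 7-to-12-percent range cited above follows directly from the association's estimates and the size of the workforce; a quick arithmetic check:

```python
# Checking the "7 to 12 percent" range against the figures cited in the text.
workforce = 138_000_000   # wage and salary workers in the United States
weekly = 9_300_000        # telecommute at least 1 day per week
monthly = 16_500_000      # telecommute at least 1 day per month

low = weekly / workforce * 100    # about 6.7, i.e., roughly 7 percent
high = monthly / workforce * 100  # about 12.0 percent
print(f"{low:.1f} to {high:.1f} percent")  # 6.7 to 12.0 percent
```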
Management Concerns Include Suitability, Security, and Costs of Telecommuting In our examination of barriers to telecommuting in the private sector, we found that decisions on whether an organization ultimately adopted telecommuting programs or expanded them over time were heavily dependent on the resolution of three concerns: identifying the positions and employees suitable for telecommuting; protecting data; and controlling the costs associated with telecommuting. The concerns held by private sector management were similar to those of managers in federal agencies. Of those management concerns that pose a potential barrier to telecommuting, the first involved identifying those positions and employees best suited for telecommuting. Our analysis and interviews with employers, proponents of telecommuting, and other experts showed that telecommuting is not a viable option for every position or employee. For example, site-specific positions involving manufacturing, warehousing, or face-to-face interaction with customers are usually not suitable for telecommuting. Conversely, positions involving information handling and professional knowledge-related tasks, such as administrative activities and report writing, can often be performed from a remote location. Beyond having jobs suitable for telecommuting, an organization must also have employees who are able to perform in a telecommuting environment. The current literature showed that telecommuting is best suited for high-performing and self-motivated employees with a proven record of working independently and with limited supervision. If an organization determines that it lacks the positions or employees that are suitable for telecommuting, it may choose not to establish or expand such arrangements. A second management concern pertained to an employer’s ability to protect proprietary and sensitive data and monitor employee access to such data without invading individual privacy rights.
Our analysis of current literature and studies on this subject, as well as interviews with employers, showed that security concerns generally centered on potential vulnerabilities associated with providing employees with remote access to internal record systems. Access involving the Internet and employers’ ability to prevent unauthorized copying, manipulation, and modification of company information were of particular concern. We also identified uncertainties among employers regarding the extent to which electronic monitoring of employee activities is permissible or considered an infringement on individual privacy. Left unresolved, these data security issues could potentially cause employers to choose not to adopt telecommuting arrangements. The third management concern involved assessing the costs associated with starting a telecommuting program and its potential impact on productivity and profits. Telecommuting programs often involve some employer investment related to upgrading systems and software to permit remote access, providing employees with hardware and software to work from their homes, or incurring additional costs to rent space and equipment available at telecenters. These costs may adversely affect profits if productivity does not increase or at least remain the same. The potential barriers to private sector telecommuting discussed today are similar in many ways to those confronting telecommuting in the federal government, as noted in prior GAO work and OPM’s June 2001 report. In 1997, we reported on the implementation of telecommuting (then referred to as flexiplace) in federal agencies. Among the topics discussed in our report were barriers affecting the growth of telecommuting programs. The most frequently cited obstacle to increased use of telecommuting related to management concerns.
Interviews with agency and union officials disclosed that managers and supervisors were hesitant to pursue telecommuting arrangements because of fears that employee productivity would diminish if they worked at home. Other related concerns cited in our report included management views that agencies did not have sufficient numbers of suitable employees and positions for telecommuting arrangements; concerns regarding the treatment of sensitive data, especially the additional cost of ensuring the security of data accessed from remote locations; and lack of resources necessary to provide additional computers, modems, and phone lines for the homes of telecommuters. OPM’s June 2001 report on federal agency efforts to establish telecommuting policies identified similar potential barriers. OPM reported that its survey of 97 federal agencies showed that management reluctance was the most frequently cited barrier to increased telecommuting among federal employees. Basic concerns centered on the ability to manage workers at remote sites and the associated loss of control over telecommuters. OPM also noted that security concerns about allowing remote access to sensitive and classified data remained high, as did questions about funding the purchase of additional computer hardware and software for equipment that would be deployed at telecommuters’ homes. Current Laws and Regulations Have Implications for Federal Telecommuting Programs While management concerns are often cited as a potential barrier to private and federal telecommuting programs, our work identified a number of laws and regulations that could also impact these arrangements. These laws and regulations include those covering taxes, workplace safety, recordkeeping, and liability for injuries. 
Because several of these laws and regulations predate the shift toward a more technological and information-based economy in which telecommuting has developed, their application to telecommuting is still evolving and unclear at this time. Of those laws and regulations that could impact an employer’s provision of telecommuting arrangements, increased state tax liability for employers and employees involved in interstate telecommuting arrangements may have the greatest potential to undermine further growth. At issue for employers is whether having telecommuters work from their residence in a state where a company has no other physical presence can expose the company to additional tax liabilities and burdens. For the employees, the tax issue has taken on increasing importance, most notably in the Northeastern United States, where a number of states have tax rules that allow them to deem all wages of nonresident telecommuters working for companies located in their states as taxable whenever working at home is for the employee’s convenience rather than an employer necessity. At the same time, the state where the telecommuter resides and works via telecommuting may be taxing some of the same income because it was earned while the telecommuter worked at home, which in effect “double taxes” that income. Our discussions and other information we received during our review brought to our attention at least 13 tax cases related to telecommuting. One such case showing the long reach of a tax authority involves New York State’s taxing the wages of a telecommuting Tennessee resident who was employed by a company located in New York, but worked 75 percent of the time from home.
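The "double tax" effect described above can be illustrated with invented numbers: under a convenience-of-the-employer rule, the employer's state taxes all wages, while the residence state taxes the wages earned at home, absent a credit. All wage figures and tax rates below are hypothetical assumptions, not drawn from any actual state.

```python
# Hypothetical illustration of the "double tax" effect under a
# convenience-of-the-employer rule. The wages, tax rates, and the
# absence of a resident-state credit are all assumptions.
wages = 80_000.0
home_share = 0.75            # fraction of time worked from home
employer_state_rate = 0.06   # hypothetical rate in the employer's state
resident_state_rate = 0.05   # hypothetical rate in the telecommuter's state

employer_state_tax = wages * employer_state_rate               # taxes ALL wages
resident_state_tax = wages * home_share * resident_state_rate  # taxes home-earned wages

double_taxed = wages * home_share  # this portion is taxed by both states
print(f"income taxed by both states: ${double_taxed:,.0f}")  # $60,000
```

The size of the overlap, and whether a resident-state credit offsets it, varies by state and is precisely the uncertainty the experts cited.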
A number of telecommuting experts and employers we interviewed believed that the uncertainties surrounding the application of individual state tax laws to telecommuting situations were a significant emerging issue that, if left unresolved, could ultimately impact the willingness of employers and individuals (including federal employees) to participate in telecommuting programs. Beyond the issue of state taxation, our work identified a number of other barriers to private-sector telecommuting programs that are also applicable to federal agencies. First, in regard to workplace safety, one concern was that employers would have to conduct potentially costly inspections of workers’ home offices. The federal Occupational Safety and Health Act requires private employers to provide a place of employment that is free from recognized, serious hazards. A February 2000 OSHA policy directive stated that it would not inspect home offices, hold employers liable for their safety, or require employers to inspect these workplaces. Some employers and telecommuting proponents, however, remained concerned that this internal policy could be reversed in the future, exposing employers to workplace safety violations and ultimately requiring them to complete costly home office inspections. A number of employers told us they were attempting to eliminate potential workplace safety issues by offering employees guidance on home office safety and design or providing them with ergonomic furniture. Other experts have suggested that a training program on safety be part of an employer’s program. Under the Occupational Safety and Health Act, federal agencies must also establish and maintain safety and health programs consistent with OSHA standards. To the extent that they attempt to meet OSHA safety standards for their telecommuters’ home offices, the potential financial and administrative costs of initiatives similar to those taken in the private sector may serve as a barrier to implementation.
Second, federal wage and hour law and regulations may also pose a barrier to telecommuting programs in both the private and public sectors. The Fair Labor Standards Act (FLSA) requires, among other things, that employers maintain sufficient records to document all hours worked, including overtime. Concerns voiced by telecommuting experts in this area centered on the increased documentation burden this may pose, as well as the uncertainties regarding an employer’s ability to sufficiently monitor hours worked and control labor costs. However, our review and interviews with employers showed that most telecommuters fall under employee classifications (i.e., executive, administrative, or professional) that are exempt from FLSA requirements. In addition, to comply with the law and control labor costs for the few employees to whom the FLSA did apply, some employers developed ad hoc procedures to preauthorize and record hours and overtime worked. As a result, monitoring the hours of telecommuting workers was not viewed as a substantial barrier. However, to the extent that federal agencies have a workforce covered by the FLSA, concerns about the ability to sufficiently control and track telecommuter hours worked may serve as a barrier to implementation. A final issue I will discuss relates to the potential for increased employer liability for home workplace injuries and the rising worker compensation costs this could bring. Generally, work-related injuries are covered under state workers’ compensation programs. Numerous telecommuting experts are concerned that, because injuries at home are not usually witnessed, determining whether they are truly work-related is problematic. Our analysis and interviews showed that this is an area that could be vulnerable to increased fraud and abuse. The employers we interviewed and other experts have said that they were not yet experiencing significant problems with home workplace injuries or workers’ compensation claims. 
However, some experts noted that this could become a larger issue as more individuals telecommute. Concluding Observations Telecommuting offers a new set of opportunities that could benefit employers, employees, and society as a whole. Whether these opportunities are realized, however, will depend on resolving fundamental questions about how telecommuting affects an employer’s ability to manage employees and other resources, specifically about its suitability as a work arrangement as well as questions about data security and overall costs. Knowing the extent to which these questions apply to federal agencies would provide important information for making decisions about telecommuting by federal workers. Realizing the full potential of telecommuting also requires looking beyond internal management concerns to the laws that govern an organization’s operating environment. Some of these laws were put in place before we could imagine a world in which employees lived in one state, but through technology, worked in another distant state, and as a result, they may unintentionally discourage telecommuting. Further examining how current laws and regulations could potentially impact telecommuters and their employers would provide the opportunity to mitigate their effects. In conclusion, pursuing the question of how to promote telecommuting is really a question of how to adapt current management practices, laws, and regulations to changing work arrangements that are, and will be, part of the information age in which we now live. This concludes my prepared statement. I will be happy to respond to any questions you or other Members of the Subcommittee may have.
Telecommuting refers to work that is done at an employee's home or at a job site other than a traditional business office. Perhaps the biggest challenge to establishing and expanding telecommuting programs in both the public and private sectors is management's concerns about the types of positions and employees suitable for telecommuting, protecting proprietary and sensitive data, and establishing cost-effective telecommuting programs. Some federal and state laws and regulations, including those governing taxes, workplace safety, workforce recordkeeping, and liability for home workplace injuries, are also potential obstacles to telecommuting. Overall, the application of state tax laws to telecommuting arrangements, as well as other laws and regulations enacted before the transition to a more technological and information-based economy, is evolving and their ultimate impact remains unclear.
Background The Secretary of the Interior is hereby authorized, in his discretion, to acquire through purchase, relinquishment, gift, exchange, or assignment, any interest in lands, water rights or surface rights to lands, within or without existing reservations … for the purpose of providing land for the Indians. … Title to any lands or rights acquired pursuant to this Act shall be taken in the name of the United States in trust for the Indian tribe or individual Indian for which the land is acquired, and such lands or rights shall be exempt from State and local taxation. Since 1934, the total acreage held in trust by the federal government for the benefit of tribes and their members has increased from about 49 million to about 54 million acres. Within Interior, BIA is responsible for the administration and management of all land held in trust by the United States and for serving the 561 federally recognized tribes and about 1.9 million individual Indians and Alaska Natives. The Assistant Secretary for Indian Affairs has primary responsibility for BIA, while the BIA Director oversees its day-to-day operations. BIA has over 9,600 staff and an annual budget of about $2.39 billion. BIA’s responsibilities include the administration of education systems, social services, and natural resource management, among other things. BIA is organized into 12 regions with 58 underlying agencies located throughout the country. One region covers the state of Alaska, and the remaining 11 cover the continental United States. (See fig. 1.) A regional director is in charge of each regional office and a superintendent is in charge of each agency office. The Office of Trust Services, which includes BIA’s Central Office realty staff, provides overall guidance for the land in trust program as one of its many responsibilities.
Real estate services staff, about 390 in total with an annual budget of about $41 million, are located at BIA offices across the country and are responsible for processing land in trust applications, as well as other functions, including property management, land leasing and title activity, and lease compliance. Real estate services staff are under the line authority of regional directors and agency superintendents. In 1980, Interior established a regulatory process intended to provide a uniform approach for taking land in trust. For on-reservation applications under the Secretary’s discretionary authority, the deciding official must consider the statutory authority to take land into trust; the need for the land; the purpose of acquiring the land; for individual Indians, the amount of land already held in trust and the individual’s need for assistance in handling business matters; the impact on state and local governments of removing the land from the tax rolls; potential jurisdictional problems and land use conflicts; BIA’s ability to discharge its duties on the newly acquired land; and environmental compliance, particularly with the National Environmental Policy Act (NEPA). For off-reservation applications under the Secretary’s discretionary authority, BIA must also place greater weight on the concerns of state and local governments as the distance of the land from the tribe’s reservation increases and review a business plan if the land is to be acquired for business purposes. Once these steps have been completed, BIA provides a decision to the applicant and affected parties. Several additional steps follow, including publication of the decision in the Federal Register or a local newspaper, and possible administrative appeals and litigation. In 1988, about 8 years after the regulations for taking land in trust were issued, the Indian Gaming Regulatory Act was enacted.
The act provided the statutory basis for the operation and regulation of certain gaming activities on Indian lands. It generally prohibits gaming activities on Indian trust lands acquired by the Secretary after October 17, 1988, the date the act was signed into law. However, the act does provide several exceptions that allow gaming on lands acquired in trust after its enactment. For fiscal year 2005, gaming revenues from Indian gaming facilities totaled $22.6 billion. On applications for land in trust, applicants must declare the anticipated use of the property, particularly whether the property will be used for gaming or nongaming purposes. Applications to take land in trust for gaming purposes are handled by the Office of Indian Gaming Management within the Office of the Assistant Secretary for Indian Affairs. In September 2005, Interior’s Office of the Inspector General reported on the processing of applications for land in trust for gaming purposes. The Inspector General reported that while the review and approval process for gaming applications was “sufficient,” the process took an average of 17 months—or about 1.4 years—from the time BIA received the application until its final action. Furthermore, the Inspector General reported 10 instances where tribes had converted lands acquired for nongaming purposes to gaming without first getting the necessary approvals pursuant to the Indian Gaming Regulatory Act. Interior subsequently determined that five of these conversions were eligible for gaming under the Indian Gaming Regulatory Act, one was not, and four were still under review at the time of the Inspector General’s report. The gaming facility on the one ineligible conversion was later closed. Our report focuses on discretionary nongaming land in trust applications, which fall into three categories—on-reservation, off-reservation, and “gaming related” applications.
The gaming related category was added in 2001, and it refers to applications involving support facilities for gaming establishments, such as parking lots and maintenance buildings, but not the actual gaming activity itself. By directive of the Assistant Secretary for Indian Affairs, each category of applications is processed slightly differently or by a different office. In most cases, the decision maker for on-reservation applications is the superintendent of the local BIA agency office. For the remaining on-reservation applications and for the off-reservation applications, the decision maker is the applicable BIA regional director. Off-reservation applications are processed using the criteria in 25 C.F.R. §151.11, and the Assistant Secretary for Indian Affairs is to review the draft decision and supporting materials and provide input before the regional director issues a decision. On- and off-reservation applications are generally processed by a combination of BIA realty staff at BIA’s Central Office in Washington, D.C.; a BIA regional office; or a local BIA agency office. Finally, gaming-related applications are processed by the Office of Indian Gaming Management in Washington, D.C., and the decision maker is the Assistant Secretary for Indian Affairs. During the land in trust process, administrative appeals must be filed within 30 days of receipt by the applicant of the notice of the decision, and parties have at least 30 days to file judicial challenges after the decision is published in the Federal Register or a local newspaper. Administrative appeals can be filed with the applicable BIA regional director or the IBIA, depending on who the BIA deciding official was. First, if a superintendent was the deciding official, parties can appeal the decision to a regional director. The regional director then reviews the application’s administrative record and any other available information and renders a ruling.
The regulations governing appeals state that a regional director must make a ruling within 60 days after all time for pleadings, including extensions, has expired. The regional director’s ruling can then be further appealed to the IBIA, the administrative review body at Interior. The IBIA’s ruling is the final position for Interior. Second, if a regional director was the decision maker, parties may appeal the decision to the IBIA. Once a decision is final for Interior, it is published in the Federal Register or a local newspaper, and parties have at least 30 days to file judicial challenges to the decision. Appendix II provides an overview of the land in trust process. Interior is considering revisions to the land in trust regulations, among a number of other possible regulation changes. Preliminary revisions under consideration were distributed to tribes on December 27, 2005. Changes are under consideration throughout the regulations, including the institution of a trust acquisition request form, new criteria for considering on- and off-reservation acquisitions, extended state and local government comment periods, and time frames for issuing a decision. Although Interior held tribal consultations in February and March to discuss draft regulations, the land in trust regulations were not part of the meetings’ agendas. Interior set a deadline of March 31, 2006, for tribes to submit comments on the proposed changes. According to the Associate Deputy Secretary, Interior is planning to hold consultation meetings in the last quarter of calendar year 2006, followed by publishing a proposed rule in the Federal Register for public comment.
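The comment, appeal, and judicial challenge windows described above reduce to simple date arithmetic. The sketch below is our own illustration, not BIA's actual practice or software; the function names and the sample dates are assumptions. It computes the 30-day administrative appeal deadline from receipt of a decision notice and the earliest close of the 30-day judicial challenge window from publication.

```python
from datetime import date, timedelta

APPEAL_WINDOW_DAYS = 30    # administrative appeal: 30 days from receipt of the notice
JUDICIAL_WINDOW_DAYS = 30  # judicial challenge: at least 30 days from publication

def appeal_deadline(notice_received: date) -> date:
    """Last day to file an administrative appeal with the regional director or IBIA."""
    return notice_received + timedelta(days=APPEAL_WINDOW_DAYS)

def judicial_deadline(published: date) -> date:
    """Earliest close of the judicial challenge window after publication in the
    Federal Register or a local newspaper."""
    return published + timedelta(days=JUDICIAL_WINDOW_DAYS)

# Hypothetical example: notice received March 1, 2005; decision published March 15, 2005.
print(appeal_deadline(date(2005, 3, 1)))     # 2005-03-31
print(judicial_deadline(date(2005, 3, 15)))  # 2005-04-14
```

Note that the regulations give "at least" 30 days for judicial challenges, so the second function marks the earliest possible close of the window rather than a hard deadline.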
BIA Generally Followed the Regulations for Taking Land in Trust, and These Regulations Provide BIA with Wide Discretion BIA generally followed its regulations for processing the 87 land in trust applications with decisions in fiscal year 2005, such as properly notifying affected state and local governments and providing time for comments and appeals. The criteria in the regulations for taking land in trust are not specific and do not include guidelines for how BIA should apply them. Apart from the regulations, we found that one BIA agency office did not properly document its decision-making process, including the consideration of the criteria in the regulations. Furthermore, we found that two separate agreements between groups of tribes and two BIA regional offices, designed to expedite the processing of certain applications, have raised concerns and were under investigation by Interior’s Office of Inspector General at the time of our review. BIA Generally Followed the Regulations for Taking Land in Trust BIA generally followed its regulations for the 87 land in trust applications with decisions in fiscal year 2005. Specifically, BIA notified affected state and local governments and provided a 30-day comment period for them to submit information on potential tax and jurisdictional impacts; obtained a preliminary title opinion from Interior’s Office of the Solicitor; usually issued a decision letter to the applicant and interested parties based on an evaluation of the criteria in the regulations, including determining compliance with NEPA requirements; provided 30 days for the applicant or interested parties to appeal the decision, along with an explanation of the appeals process, in its decision letter; and published a notice of its decision in the Federal Register or a local newspaper providing at least 30 days for interested parties to seek judicial review. Of these 87 decisions, 80 were approvals, and 7 were denials.
The Superintendent of the Wewoka Agency, Eastern Oklahoma Region, denied one application because the applicant failed to meet the criteria. The Superintendent of the Horton Agency, Southern Plains Region, officially withdrew six applications, in effect denying them, because the tribe did not submit additional necessary information for several years. Applicants and state and local governments can file appeals and judicial challenges if they believe that BIA failed to properly follow the regulations. Eight of the 87 decisions in fiscal year 2005 had been appealed as of September 30, 2005. Three of the appeals were not filed within the required 30-day appeal period and were therefore dismissed as untimely. The remaining five appeals were pending as of September 30, 2005. The appellants generally asserted that BIA did not adequately consider tax and jurisdictional impacts. While these most recent appeals were pending at the end of fiscal year 2005, some other appeals of decisions from fiscal year 2004 illustrate how the appeal process works. For example, the local government of Union Township, in the state of Michigan, appealed three land in trust applications to the Midwest Regional Director, asserting that BIA had not addressed, among other things, the township’s jurisdictional and land use concerns in its decision. The township argued that the proposed acquisition would create “an island (of trust land) in the middle of the township in a prime commercial corridor” that might be subject to different zoning and building regulations and that this might create “serious difficulties for rational land use planning.” BIA’s decision stated only that primary law enforcement and fire protection would be provided by the tribe and that the tribal council has good relations with local planning officials; it made no mention of the township’s concerns.
The Midwest Regional Director agreed that the decision had not adequately addressed the issues raised by Union Township and returned the applications to the Superintendent of the Michigan Agency to better address those concerns. In addition, the Midwest Regional Director determined that the Michigan Agency had not provided sufficient information on environmental compliance. Criteria in the Regulations Provide BIA Wide Discretion Because They Are Not Specific and Do Not Include Guidelines for How BIA Should Apply Them In general, we found that the criteria in the regulations provide BIA with wide discretion in deciding whether to take land in trust, primarily because they are not specific, and BIA has not provided clear guidelines for applying them. For example, one criterion requires BIA to consider the impact of lost tax revenues on state and local governments. However, the criterion does not indicate a threshold for what might constitute an unacceptable level of lost tax revenue and, therefore, warrant denial of an application. Furthermore, BIA does not provide guidance on how to evaluate lost tax revenue, such as comparing lost revenue with a county’s total budget or evaluating the lost revenue’s impact on particular tax-based services, such as police and fire services. In addition, the criterion does not require deciding officials to consider the cumulative impact of tax losses resulting from multiple parcels taken in trust over time—a practice some state and local governments would like to see instituted. Table 1 shows our analysis of the criteria. In addition, the criteria are not pass/fail questions, and, therefore, the responses to the criteria do not necessarily result in an approval or denial of an application. For example, should BIA decide that an application has “failed” to meet one or more of the criteria, the BIA deciding official still has discretionary authority under the regulations to approve the application.
However, we found no instances in which an official decided that an applicant did not meet one or more criteria but still approved the application. Revisions to the regulations under consideration make it clearer that, because it is difficult to develop specific thresholds for most criteria, BIA intends to assume that most on-reservation applications will eventually receive approval unless a major failing is evident, such as an environmental hazard on a property that would leave the federal government liable for environmental clean-up costs. Conversely, the draft changes make it more difficult to approve off-reservation applications. One BIA Office Did Not Properly Document Its Decisions and Two Other Offices Have Entered Into Agreements with Tribes That Have Raised Concerns While we found that BIA procedurally followed the regulations for the 87 applications with decisions in fiscal year 2005, there were two areas not specifically addressed in the regulations that raised concerns. First, BIA’s Fort Peck Agency, in the Rocky Mountain Region, did not document its decision-making process for two applications decided in fiscal year 2005, including the consideration of the criteria in the regulations. Although not required by the regulations, BIA policy calls for offices to include an analysis of each of the criteria in their decision letters approving or denying applications. This policy stems from a 1999 IBIA statement that failure to provide an analysis of the criteria to interested parties could lead the IBIA to vacate future decisions. BIA realty staff at the Fort Peck Agency were unable to provide us with documentation showing they considered the criteria for two applications approved in fiscal year 2005. The Fort Peck Agency reported it also had some pending applications as of the end of fiscal year 2005.
By not documenting its consideration of the applicable criteria, the Fort Peck Agency is not fully disclosing the rationale for its decisions and is, therefore, making the process less transparent. Two separate agreements between groups of tribes and two BIA regional offices designed to expedite the processing of certain applications were under investigation by Interior’s Office of Inspector General at the time of our review. Specifically, agreements signed by tribes and BIA regional offices in the Pacific and Midwest regions created land in trust consortiums. In both cases, consortium tribes agreed to use a portion of their budget to pay for additional staff positions at BIA dedicated to processing consortium members’ land in trust applications. According to staff with the Inspector General’s office, the Pacific Region’s land in trust consortium agreement was not reviewed or approved by Interior’s Office of the Solicitor before BIA entered into it. The staff further stated that the Midwest Region’s agreement, created several years after the Pacific Region’s agreement, did undergo review and approval by the Solicitor’s Office. Interior’s Office of Inspector General was conducting an investigation of these consortium arrangements to determine whether the tribes’ allocation of money to fund the consortiums was legally authorized and whether BIA was favoring land in trust applications from those tribes. Many Land in Trust Applications Have Not Been Processed in a Timely Manner While BIA’s current regulations do not set a specific time frame for making an initial decision on an application, BIA is considering revisions to the regulations that would impose a time frame of 120 business days, or about 6 months, for making a decision on both on- and off-reservation applications once an application is complete.
According to our analysis of three categories of land in trust applications, BIA did not decide most applications within the proposed time frames the agency is now considering, or within existing time frames for appeals. First, for the 87 applications with decisions in fiscal year 2005, the median length of time from submission of an application to a BIA decision was a little over 1 year. Second, the 28 complete off-reservation applications awaiting review had been at the BIA Central Office for an average of 1.4 years as of the end of fiscal year 2005. Finally, for applications on appeal, current federal regulations call for regional directors to rule on an appeal within 60 days after all time for pleadings has expired. For the 34 appealed applications awaiting a BIA decision that we reviewed, the average time pending from the BIA decision to the end of fiscal year 2005 was almost 3 years. Some Applications with Decisions in Fiscal Year 2005 Were Decided in a Timely Manner, While Others Took an Exceedingly Long Time While the current land in trust regulations do not provide a time frame for BIA’s review of land in trust applications, BIA is considering revisions to the regulations that would establish a time frame of 120 business days, or about 6 months, for BIA to issue a decision once a complete application has been assembled. For the 87 applications with decisions in fiscal year 2005, the median length of time from submission of an application to a BIA decision was 1.2 years, about twice as long as the proposed time frame. Using the time frame under consideration as a guide, and allowing 30 days for state and local governments to provide comments, we determined that at least 10 of the 87 applications we reviewed were processed in a timely manner. Additional applications may have been decided in a timely manner, but the files we reviewed did not clearly document the date when an application was complete.
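A rough version of this timeliness test can be expressed in code. The sketch below is our own illustration, not the actual analysis methodology: it converts the 120-business-day time frame under consideration to calendar days, adds the 30-day state and local comment period, and flags a decision as timely if it fell within that window of the date the application was complete. The function names and sample dates are assumptions.

```python
from datetime import date, timedelta

def business_days_to_calendar(business_days: int) -> int:
    """Approximate business days as calendar days (5 business days per 7-day week)."""
    return round(business_days * 7 / 5)

def decided_timely(complete: date, decided: date,
                   frame_business_days: int = 120,
                   comment_days: int = 30) -> bool:
    """Timely if the decision came within the proposed time frame plus the
    30-day state/local comment period, measured from the completed application."""
    window = timedelta(days=business_days_to_calendar(frame_business_days) + comment_days)
    return decided - complete <= window

# 120 business days is roughly 168 calendar days, i.e., about 6 months.
print(business_days_to_calendar(120))                      # 168
print(decided_timely(date(2004, 1, 5), date(2004, 6, 1)))  # True
print(decided_timely(date(2003, 1, 5), date(2004, 6, 1)))  # False
```

The 7/5 conversion ignores federal holidays, which is why "120 business days" is described only as "about 6 months" in the draft revisions.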
Figure 2 shows the amount of time BIA took to process applications with decisions in fiscal year 2005. Table 2 shows the processing times for the 87 applications we reviewed by region. As the table shows, the shortest processing time—58 days—occurred in the Midwest Region, while the longest processing time—almost 19 years—occurred in the Pacific Region. (App. III provides additional details on the 87 land in trust applications with decisions in fiscal year 2005.) According to our analysis of BIA files, processing times for applications with decisions in fiscal year 2005 were lengthened by inaction on the part of either the applicant or BIA. For example, according to BIA files, the Pacific Region application that took almost 19 years to process was submitted in 1986 by an individual tribal member to place 5.42 acres of land in trust. BIA found that the application lacked required documents and, therefore, could not process the application until it received these documents. The applicant did not provide the necessary documents until 1991. While the application was deemed complete in 1991, according to our file review, the regional office did not issue a notice to interested parties of the proposed trust acquisition until 2002. However, in the same year, the BIA Pacific Regional Director ordered processing stopped on the application because the applicant’s tribal affiliation was uncertain. BIA and the applicant worked to resolve this issue, and BIA approved the application on February 25, 2005, almost 19 years after its submission. While the BIA file stated clearly that processing was halted initially due to inaction on the part of the applicant, it did not explain why the application was not acted on by BIA from 1991 to 2002.
In other cases processed at the Horton Agency Office in Kansas, our file review showed several applications were closed by the agency in 2005 because of inaction on the part of the tribe; one of these applications had been submitted in 1991. BIA officials also noted that access to the Internet would increase their ability to process land in trust applications in a timely manner. Off-Reservation Applications Have Not Been Processed in a Timely Manner Off-reservation applications awaiting review by BIA’s Central Office have not been processed in a timely manner. Again, BIA is considering imposing a 120-business-day time frame, or about 6 months, for issuing a decision on off-reservation applications once an application is complete. According to BIA Central Office staff, there was nearly a 2-year period between December 2003 and November 2005 when no off-reservation land in trust applications were cleared by the Assistant Secretary. On average, the 28 off-reservation applications we reviewed had been pending in the Central Office for 1.4 years by the end of fiscal year 2005—almost three times longer than the 6-month time frame under consideration. Using the time frame under consideration as a guide, and allowing 30 days for state and local comments, we found that at least 22 of the 28 off-reservation applications pending at the Central Office were not processed in a timely manner. The most recent application forwarded to the Central Office had been pending for about 1 month, while the oldest application had been pending for over 3 years. This analysis is based solely on the time the applications were pending at the BIA Central Office and does not include the time the applications spent at a BIA agency or regional office before they were forwarded to the Central Office. In total, from the time of their initial submission at a BIA agency or regional office until the end of fiscal year 2005, these applications had been pending an average of 4.6 years.
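The pending-time comparisons above reduce to simple ratios against the roughly half-year time frame under consideration. The figures below come from the report; the helper function itself is only our illustration.

```python
# Proposed decision time frame: 120 business days, about half a calendar year.
PROPOSED_FRAME_YEARS = 0.5

def times_over_frame(pending_years: float,
                     frame_years: float = PROPOSED_FRAME_YEARS) -> float:
    """How many times longer an application has been pending than the proposed frame."""
    return pending_years / frame_years

# The 28 off-reservation applications averaged 1.4 years at the Central Office,
# almost three times the proposed time frame.
print(times_over_frame(1.4))  # 2.8

# Measured instead from initial submission at an agency or regional office,
# the same applications averaged 4.6 years pending.
print(times_over_frame(4.6))  # 9.2
```

The second ratio is our own extension of the comparison; the report itself states only the 4.6-year average.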
These applications originated from 17 tribes covering 1,832 acres of land in 11 states, primarily in BIA’s Northwest and Southern Plains Regions. (See app. IV for more detailed information on these 28 applications.) Turnover in the position of the Assistant Secretary for Indian Affairs may have contributed to the length of time involved in processing off-reservation applications. The current Central Office review process was instituted in February 2002. According to the February 2002 memorandum instituting this process, “every effort will be made to complete the overview within one week.” The Assistant Secretary who instituted this process held the position for about 1-1/2 years before retiring in December 2002. Since then, the position of Assistant Secretary for Indian Affairs has been held by three different people: an acting Assistant Secretary; a permanent Assistant Secretary; and, since February 2005, an Associate Deputy Secretary at Interior serving as the Acting Assistant Secretary. It Appears that Appeals Have Not Been Resolved in a Timely Manner by BIA Regional Directors Federal regulations require regional directors to “render written decisions in all cases appealed to them within 60 days after all time for pleadings (including all extensions granted) have expired.” According to our review of 34 appealed decisions awaiting resolution by a BIA regional director, the average time pending from the time of the decision to the end of fiscal year 2005 was 2.8 years. While our file review did not allow us to determine at what point “all time for pleadings” had expired in each case, it appears, based on the lengthy time period, that none of the 34 appealed decisions awaiting a regional director’s ruling were resolved in a timely manner. However, in cases in which a ruling has not been rendered by a regional director within the required time frame, the regulations provide a process to appeal the inaction of the regional director to the IBIA.
Under these circumstances, the IBIA has stated that it could use its authority to order a Regional Director to issue a final decision on a tribe’s trust acquisition request. Typically, however, the IBIA has instead ordered the regional director to provide a status report on the requested action. If satisfied that the matter is being addressed or has already been resolved by the regional director, the IBIA has dismissed the appeal. Most of the appealed decisions we reviewed originated from BIA’s Southern Plains Region. (App. V provides additional details on these applications.) When applications are not processed in a timely manner because of delays by BIA or the applicant, information in the applications can become outdated, particularly environmental assessments, comments from state and local governments, and tax data. When this happens, BIA must devote additional resources to obtain updated information and reprocess the applications—an inefficient and time-consuming process for BIA, Indian applicants, and state and local governments. The applicants also bear a direct financial cost because they continue to pay property taxes on the land while BIA is processing their applications. The applicant may face additional financial burdens due to processing delays, such as the opportunity costs associated with delayed economic development activities. Citing Taxes and Jurisdictional Issues, State and Local Governments Opposed Applications in Fiscal Year 2005 When opposing land in trust applications or appealing decisions, state and local governments principally cited concerns about lost tax revenues and jurisdictional issues. In commenting on applications prior to decisions made in fiscal year 2005, state and local governments opposed 12 of 87 applications, or about 14 percent, mainly citing concerns about lost tax revenues and jurisdictional issues. 
State and local governments have also opposed some applications through administrative appeals, again primarily citing lost tax revenues and jurisdictional issues. As of the end of fiscal year 2005, a total of 45 decisions were pending review on appeal, including 5 decisions from fiscal year 2005. Although we found little opposition to the applications with decisions in fiscal year 2005, some state and local governments we contacted said (1) they did not have access to sufficient information about the land in trust applications and (2) the 30-day comment period was not sufficient time in which to comment. Citing Primarily Taxes and Jurisdictional Issues, State and Local Governments Opposed Only a Small Percentage of the Applications with Decisions in Fiscal Year 2005 For the 87 land in trust applications with decisions in fiscal year 2005, state and local governments opposed or raised concerns—primarily involving taxes and jurisdictional issues—on 12 applications prior to BIA’s decision. For example, the state of Kansas opposed the Kickapoo tribe’s application for placing about 75 acres in trust because trust status would cause a loss of tax revenue, which amounted to $172 for the county in 2000. Despite the tax loss, Kansas said its local government would still bear the cost of continuing to provide services, such as road maintenance and fire protection. The county of jurisdiction—Brown County, Kansas—opposed trust status, saying “…further erosion of the real estate base is always a concern.” The tribe responded in a letter to BIA in 2001, saying it disagreed with the state’s arguments. In April 2005, the Superintendent of BIA’s Horton Agency in the Southern Plains Region closed the application because the tribe did not respond to BIA’s requests for additional information for several years. BIA generally reviewed the comments it received on pending applications and considered them in its decision-making process. 
Table 3 describes the Indian tribe, the acreage, the proposed use of the land to be taken in trust, and the tax losses state and local governments expressed concern about prior to BIA’s decision on 12 applications. As table 3 shows, while most lost annual tax revenue was less than $1,000, Santa Barbara County, California, opposed the Santa Ynez Band of Chumash Mission Indians’ application for 6.9 acres to be placed in trust because of a tax loss of about $43,000 per year. Before the decision, the county held a public hearing in June 2004 on the environmental assessment for the proposed trust acquisition. More than 50 speakers commented, mostly in opposition to the application. BIA and county officials held a joint meeting to discuss the issues the county raised. BIA ultimately approved the trust application in January 2005, and the county did not oppose the decision at that time. However, several citizen groups appealed the decision, and in August 2005 the county filed a motion to intervene or, alternatively, to file an amicus brief. The IBIA dismissed the motion for intervention as untimely and dismissed the citizens’ appeals for lack of jurisdiction in February 2006. State or Local Governments Have Also Cited Primarily Tax and Jurisdictional Issues When Opposing BIA Land in Trust Decisions through Administrative Appeals As of September 30, 2005, 45 appeals were pending either before BIA regional directors or the IBIA. All but two appeals involved decisions approving land in trust applications, and all but three appeals were filed by state or local governments. These appeals echo the tax, jurisdictional, and other types of issues that were raised before BIA’s decision. Most of the pending appeals were made by states or local governments that frequently or routinely appeal BIA’s decisions on land in trust applications. BIA’s Southern Plains Region had the highest number of appeals pending as of September 30, 2005. (See table 4.)
The appeals in the Southern Plains Region generally involve the state of Kansas and Jackson County, Kansas. See appendix V for detailed information on the 34 appeals awaiting resolution by a BIA regional director and table 5 for detailed information on the 11 appeals awaiting resolution by the IBIA. The following example illustrates the types of concerns raised on appeal. In 2002, the state of Kansas appealed a decision by the Horton Agency Superintendent to allow 7.85 acres in trust on the Sac & Fox reservation. The state argued that BIA’s decision (1) reduces the tax rolls by $492; (2) violates the Tenth Amendment to the Constitution, since states surrendered many powers to the federal government but retained residual sovereignty; and (3) violates the Act for Admission of Kansas into the United States because it would compel the state to relinquish its sovereign jurisdiction over the land. The tribe stated that (1) Brown County, the recipient of the $492 per year in taxes, did not file an appeal and the amount is insubstantial; (2) the Regional Director, like the IBIA, lacked jurisdiction to declare federal statutes unconstitutional, and this issue has been addressed in several other appeals to the IBIA; and (3) Kansas had accepted admission into the United States on the condition that the federal government retained its power to regulate Indian affairs; therefore, BIA did not infringe on the state’s sovereignty. The Southern Plains Regional Director was still considering the appeal as of June 8, 2006. Similar arguments about loss of tax revenues and jurisdictional issues have been made in appeals before the IBIA. For example, Cass County and the City of Cass Lake, Minnesota, appealed three decisions of the Minnesota Agency Superintendent to place 1.28 acres of land of the Minnesota Chippewa Tribe (Leech Lake Band) in trust in 2001. The land was to be used for residential housing, women’s services programs, and a tribal health office. 
The county and the city said the loss of the land would have a negative impact on the tax rolls and that the land might not be within the reservation boundaries; consequently, the applications would be subject to additional criteria. When the matter was appealed to the Regional Director, he concluded that the tax loss of about $5,000 annually was not significant and that the tribe’s services to the entire community, including non-Indians, reduced the financial burden on local governments. State and Local Governments Want More Information about Applications as Early as Possible and More Time to Comment Some state and local government officials want more information about applications early in the process, and they want more time to comment. In a July 2005 paper, the National Governors Association stated that any new regulations should include, among other things, a requirement that states and local governments be able to review tribal submissions and evidence, just as tribes are able to review state submissions. The governors also said that language in the regulations should ensure that states have the right to provide data challenging assertions made in the proposals to take land in trust. According to some state and county officials, the current process does not work well in providing them with information and an opportunity to comment. During a meeting with staff of various state governors, arranged by the National Governors Association, an attorney with the South Dakota Office of the Attorney General told us that while the governor’s office receives notification of land in trust applications, the state does not have access to a tribe’s application except through a Freedom of Information Act request, which often takes too long. He said BIA does not consistently allow for extensions in these cases. 
In a meeting with county officials arranged by the National Association of Counties, a representative from a New York county said that BIA’s process was unfamiliar, so the state, the two counties involved, and other local governments paid for extra legal, economic, and environmental consultants. However, he said it was not possible for these government entities to respond adequately to the initial BIA notice within 30 days. BIA provided an extension of time for the county to respond. Similarly, some state and local governments raised the following access and timing issues in comments on the applications that we reviewed: In 1999, Cass County told the Minnesota Agency Superintendent that further documentation on the application from the Minnesota Chippewa Tribe (Leech Lake Band) was needed for the county to provide specific comments other than the amount of taxes. The county asked for more documents under the Freedom of Information Act and for an additional 60 days to comment following receipt of the documents. BIA provided the documents and more time. In June 2001, Santa Barbara County, California, responded to a notice of an application, stating that, without information regarding how the Santa Ynez Band of Chumash Mission Indians of the Santa Ynez Reservation intended to regulate activity on trust land, the county could only speculate that jurisdictional and land use conflicts would arise. In December 2002, an assistant legal counsel to the governor of Kansas wrote to the BIA representative in the Horton, Kansas, field office that to effectively represent the state, it was necessary to have each tribe’s resolution plan that accompanies the initial application for land to be taken into trust. Also, in a January 2005 letter, the General Counsel to the Governor of Minnesota told BIA that it could not fully comment on an application by the Minnesota Chippewa Tribe (Grand Portage Band) without an opportunity to review the proposed purpose for conversion and potential uses.
In the revisions to the regulations, Interior is considering providing some additional information to state and local governments and lengthening the period for comments. One provision under consideration would require that a tribe complete a form called a “request for trust acquisition.” BIA would provide the form, along with a description of the land and the proposed use of the land, to the state and local governments having jurisdiction. Another provision would lengthen the time period for state and local governments to comment after BIA provided notice of an application. The time periods would change from 30 days to 60 days for on-reservation applications and to 90 days for off-reservation applications. BIA’s Land in Trust Database Is Incomplete and Inaccurate, and BIA Is Planning to Redesign It During the course of our review, we found the data in BIA’s land in trust database, which was implemented agencywide in August 2004, were frequently incomplete and inaccurate. As a result, the data are of questionable value to Interior and BIA management, and we did not rely on them. BIA has already recognized some shortcomings and initiated an effort to re-evaluate and redesign the database, as necessary. The database was hastily developed and deployed without defining and documenting user requirements throughout the agency and clearly defining data fields. Staff with Interior’s Office of Information Development said a contractor developed the database in about a month to address the information needs of the Deputy Assistant Secretary for Indian Affairs in the summer of 2004. In a June 2005 memo, almost a year after the system was put in place, BIA’s Deputy Director for Trust Services noted that only 4 of the possible 11 regions had entered any data into the database, and the memo directed each BIA regional and agency office to enter all of its land in trust applications into the database within 5 days.
By the end of fiscal year 2005, the database contained more than 1,000 applications. We found that not all of the applications had been entered into the database, and the status of an application, as either approved, denied, or pending, was frequently incorrect in the database. Specifically, we found the following issues: Not all of the applications had been entered into the database. Twenty-nine of the 87 applications with decisions in fiscal year 2005, or 33 percent, were not in the database. About half of these, 13, were from the Eastern Oklahoma Region’s Chickasaw Agency. No one at the office had access to the database when we initially inquired and, therefore, staff could not enter information. Also, 9 applications at the Pacific Region were not in the database, and no one there had access when we inquired. We also found instances at the Midwest and Southwest Regions where some pending applications had not been entered into the database. The high rate of applications that had not been entered into the database is one of the factors that led us to conclude that database information was unreliable. Status of applications was frequently incorrect. During the course of our review, we found that 30 of the 41 applications identified as denied in the database were miscoded, an error rate of almost 75 percent. Most of the remaining “denied” applications were applications that were closed by realty staff with the Southern Plains Region’s Horton Agency because the tribal applicants had not responded for more than a year to BIA’s requests for the additional information needed to process the applications. The applications were not processed and denied based on the criteria in the regulations; rather, they were closed due to inactivity. However, as currently designed, there is no category in the database to show this type of resolution other than denied. In addition, we found that some offices interpreted “approved” differently.
For example, two agency offices in BIA’s Rocky Mountain Region used an application form that required the agency superintendent to approve the application for filing and processing. As defined by these offices, some applications being processed had been “approved,” but they were actually pending applications. Other BIA offices considered an application approved when the superintendent actually approved taking the land in trust. While some of the problems we encountered with the status of the applications in the database were simply data entry errors, others were the result of systematic problems, such as the lack of common definitions for key terms. Furthermore, at the time of our review, regional and agency realty staff did not use the new database as the primary tool for managing their applications. According to BIA regional and agency realty staff, they do not use or do not like to use the database because it is cumbersome, slow, and does not meet their needs. They continue to use their office-specific spreadsheets to manage and track their applications. These office spreadsheets were one of the tools we used to try to verify the information in the database. However, trying to reconcile the office spreadsheets with the database was difficult because the office spreadsheets usually identified applications only by parcel name, whereas the database identified applications by unique identification numbers that differed by region, agency, and tribe. We believe that data need to be accurate, valid, complete, consistent, and timely enough to document performance, support decision making, and respond to the needs of internal and external stakeholders. According to Interior officials, the database has been used to respond to questions about the program from various levels of management and from Congress. Further, data quality depends on how readily users can access data, aided by clear data definitions and user-friendly software.
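The status-coding problem described above is, at bottom, a data-definition problem: the database offered no category for applications closed due to inactivity, so they were coded as denied, and offices defined "approved" inconsistently. A minimal sketch of the distinction in Python, using hypothetical field and category names (BIA's actual database schema is not public):

```python
from enum import Enum

# Hypothetical status categories for illustration only. The key design point
# from the review: a closure for inactivity is not a denial on the merits
# and needs its own category so it is not miscoded.
class ApplicationStatus(Enum):
    PENDING = "pending"                  # under review
    APPROVED = "approved"                # land actually accepted into trust
    DENIED = "denied"                    # denied based on the criteria in the regulations
    CLOSED_INACTIVE = "closed_inactive"  # applicant unresponsive; no decision on the merits

def validate_record(record: dict) -> list[str]:
    """Flag data-quality problems of the kind the review encountered."""
    problems = []
    if record.get("status") not in {s.value for s in ApplicationStatus}:
        problems.append(f"unknown status: {record.get('status')!r}")
    if record.get("status") == "denied" and record.get("decision_date") is None:
        # A true denial should carry a decision date; a missing one suggests
        # the application was closed for inactivity, not denied on the merits.
        problems.append("status 'denied' but no decision date recorded")
    return problems
```

A shared set of definitions like this, applied at data entry, would prevent both the "denied" miscoding and the differing interpretations of "approved" described above.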
When significant data limitations exist, it is important to make stakeholders and Congress aware of the limitations so they can judge the credibility of the data for their use. During the course of our review, BIA recognized that the database has limitations, and it asked Interior’s Office of Information Development to improve the database. In April 2006, the Deputy Director for Information Development conducted a 3-day workshop for program managers on BIA’s land in trust database. The session served as a basis for making improvements and, in May 2006, the office was preparing a plan to (1) involve regional and headquarters officials in changing the database, (2) better define terms and fields, and (3) increase the number of fields in the database. A properly designed and implemented database with accurate data would provide BIA with important information to help better manage the land in trust process. Conclusions The land in trust regulations were intended to provide a clear, uniform, and objective approach for Interior to evaluate land in trust applications. However, the regulations provide wide discretion to the decision maker because the criteria are not specific, and BIA has not provided clear guidelines for applying them. Given the wide discretion that exists and the increased scrutiny that the land in trust process has come under with the growth of Indian gaming, it is important that the process be as open and transparent as possible. Clearly documenting each decision and providing that information to state and local governments is a critical component of having an open and transparent process. However, contrary to BIA policy and admonishments from the IBIA, we found one BIA office that did not document its consideration of the criteria in the regulations. While this office only accounted for 2 of the 87 decisions in fiscal year 2005, it omitted documentation of the most important part of the process. 
State and local governments need information on how BIA reaches its decisions to effectively execute their role in the process, including holding the federal government accountable for its decisions and having adequate information to decide whether or not to appeal a decision if they believe that the federal government did not adequately follow the process. A lack of specific time frames for BIA to make decisions on land in trust applications results in a lack of predictability about the process and contributes to the perception, on the part of Indian applicants and state and local governments, that the process is not open and transparent. Lengthy application processing times can place a burden on BIA, Indian applicants, and state and local governments. If applications are not processed in a timely manner because of delays by BIA or the applicant, information in the applications can become outdated, particularly environmental assessments, comments from state and local governments, and tax data. When this happens, BIA must devote additional resources to obtaining updated information and reprocessing the applications—an inefficient and time-consuming process for BIA, Indian applicants, and state and local governments. To the extent that BIA is the cause of some of these delays, imposing specific time frames on the decision-making process should improve the processing of the land in trust applications. In addition, some state and local governments have been unable to adequately participate in the process because they did not have enough information on the pending applications or the necessary length of time to provide substantive comments. Interior is considering changes to the regulations that would address these issues. Finally, federal agencies need data that are accurate, valid, complete, consistent, and timely enough to document performance, support decision making, and respond to the needs of internal and external stakeholders.
During the course of our review, BIA recognized the shortcomings with the data in its land in trust database and initiated a process to improve the database. A properly designed and implemented database with accurate data would provide important information to (1) BIA to help it better manage the land in trust process and (2) other stakeholders, particularly Congress, to help carry out oversight of the land in trust process. Recommendations for Executive Action To improve timeliness and transparency and ensure better management of BIA’s land in trust process, we recommend that the Secretary of the Interior direct the Assistant Secretary for Indian Affairs to take the following three actions: reinforce the requirement that all decisions be fully documented; move forward with adopting revisions to the land in trust regulations that include (1) specific time frames for BIA to make a decision once an application is complete and (2) guidelines for providing state and local governments more information on the applications and a longer period of time to provide meaningful comments on the applications; and institute internal controls to help ensure the accuracy and reliability of the data in the land in trust database, as part of the redesign of the existing system. Agency Comments Interior’s Associate Deputy Secretary commented on a draft of this report in a letter dated July 12, 2006 (see app. VI). In general, Interior agreed with our findings, conclusions, and recommendations. The Associate Deputy Secretary commented that BIA is working to address the recommendations and that a corrective action plan will be developed and implemented in response to the report. Specifically, BIA is taking steps to finalize the regulations under consideration. After the regulations are completed, BIA will develop a handbook to ensure consistent application of the regulations. 
The handbook will also include specific internal control procedures to ensure all decisions are properly and completely documented, as well as entered into the land in trust database accurately and in a timely manner. We are sending copies of this report to interested congressional committees, the Secretary of the Interior, the Assistant Secretary for Indian Affairs, BIA regional and agency offices we visited, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Objectives, Scope, and Methodology The fiscal year 2006 House Appropriations Committee Report for the Department of the Interior’s (Interior) appropriation bill directed GAO to study the Bureau of Indian Affairs (BIA) procedures and practices in implementing the land in trust regulations. In response to this direction and subsequent discussions with congressional staff, we (1) assessed the extent to which BIA’s processing of land in trust applications followed its regulations, (2) determined the extent to which applications were processed in a timely manner, and (3) identified any state and local government concerns about land in trust applications and how they were addressed in BIA’s decision-making process. For all of the objectives, we reviewed applicable laws, regulations, and land in trust applications. 
We reviewed applications at six BIA regional offices— Eastern, Midwest, Northwest, Pacific, Southern Plains, and Southwest— and eight BIA agency offices—Blackfeet (Browning, Montana), Chickasaw (Ada, Oklahoma), Great Lakes (Ashland, Wisconsin), Horton (Horton, Kansas), Minnesota (Bemidji, Minnesota), Siletz (Siletz, Oregon), Warm Springs (Warm Springs, Oregon), and Wind River (Fort Washakie, Wyoming). We selected those offices because our general intent was to visit all BIA offices with 10 or more land in trust applications described as approved or denied in BIA’s land in trust database. However, interviews with realty officials at these offices and at the Western, Great Plains, Navajo, and Rocky Mountain Regions and our examination of documents they provided led us to conclude that the database was frequently incomplete and inaccurate. During the course of our work, we found many examples of inaccuracies in the database that showed data were missing, incorrectly described, or inconsistently reported. Therefore, our scope was limited to the groups of applications in which we had greater confidence that we had obtained all of the applications. We examined (1) 87 discretionary nongaming land in trust applications with decisions in fiscal year 2005, (2) 28 off-reservation applications awaiting comments from the Office of the Assistant Secretary for Indian Affairs, (3) 34 appealed decisions pending before BIA regional directors at the end of fiscal year 2005, and (4) 11 appealed decisions pending before the Interior Board of Indian Appeals (IBIA) at the end of fiscal year 2005. In an effort to collect all of the applications in these categories, we relied on interviews with BIA realty officials in the relevant offices, examination of their localized spreadsheets for tracking applications, and some comparisons with other BIA databases. 
To identify the pending appeals at the IBIA, we relied on these methods and the selections provided by the Chief Judge and an examination of the IBIA’s informal log. We collected 67 of the 87 discretionary nongaming land in trust applications with decisions in fiscal year 2005, or 77 percent, during our site visits. From telephone discussions with realty staff, we identified the remaining relevant applications at five agencies—Colville (Colville, Washington), Fort Peck (Fort Peck, Montana), Michigan (Sault Ste. Marie, Michigan), Southern California (Riverside, California), and Puget Sound (Everett, Washington). Staff at locations we did not visit made copies of an additional 18 applications and mailed them to us. We contacted realty officials at the Navajo Region and the Great Plains Region, including its agencies—Rosebud (Rosebud, South Dakota), Lower Brule (Lower Brule, South Dakota) and Pine Ridge (Pine Ridge, South Dakota) and verified that they had no applications with decisions in fiscal year 2005. In addition, we obtained applications that were appealed to BIA regional directors and pending in fiscal year 2005 based on discussions with realty officials in the various field offices and regions and from examining their files. We used a similar method to identify and collect applications appealed to the IBIA that were pending at the end of fiscal year 2005. Also, we interviewed the Chief Judge of the IBIA to identify pending applications; he provided copies of relevant applications. In doing so, we obtained information on the two remaining applications with BIA decisions in fiscal year 2005. Besides interviews with BIA and Interior officials, we obtained views from various interested parties including representatives of the National Governors Association, the National Association of Counties, National Congress of American Indians, and several individual tribes. 
The National Governors Association invited its members to meet with us and hosted a teleconference, which included representatives from 12 states—Arkansas, California, Colorado, Connecticut, Kentucky, New Mexico, New York, Ohio, Oklahoma, South Carolina, South Dakota, and Washington. The National Association of Counties included a panel session with GAO at its annual meeting in March 2006. The six participants were from the California State Association of Counties; Kitsap County, Washington; Madison County, New York; Navajo County, Arizona; Seneca County, New York; and Ziebach County, South Dakota. For discussions with tribal leaders, we used a nonprobability sample to select tribes that submitted applications in recent years to BIA locations we visited. We met with representatives of the 13 tribes listed in table 6. In addition, we obtained Interior’s and Indians’ views on the land in trust process by participating in a panel session on the subject at the Self-Governance Tribes’ Fall Conference in 2005. For each of the objectives, we took the following specific actions: To determine how BIA processed land in trust applications, we reviewed the 87 applications with decisions in fiscal year 2005 and compared how the applications were processed with the requirements in the regulations and departmental guidance. In addition, we interviewed Interior’s field solicitors in Minnesota and Oregon to obtain their perspectives on how BIA followed procedures during their reviews of applications. To determine whether applications were processed in a timely manner, we compared the processing times for (1) 87 applications with decisions in fiscal year 2005 and (2) 28 complete off-reservation applications awaiting comments from the Office of the Assistant Secretary for Indian Affairs to the 120-business-day (about 6 months) time frame BIA is considering imposing for making decisions on on- and off-reservation land in trust applications.
The reported minimum, median, and maximum processing times are for fiscal year 2005 only and might not be indicative of other years. For each of the applications with decisions in fiscal year 2005, we tried to use the date of the application as the initial point to calculate the processing time. For the few applications where we could not determine the date of the application, we used either the date of the tribal resolution requesting that the land be placed in trust or the date BIA notified state and local governments about an application. We used the decision date as the end date for calculating the processing time of these applications. For off-reservation applications, we calculated the time from the date of the draft decision to the end of fiscal year 2005. In addition, we compared the length of time that 34 appealed decisions had been awaiting resolution by BIA regional directors with the current 60-day time frame set forth in the regulations on appeals. For the appealed decisions, we calculated the time from the date of the decision to the end of fiscal year 2005. We also interviewed BIA officials and tribal representatives involved in the process to obtain their views on the time taken for processing applications. To determine whether state and local governments had concerns, we analyzed the content of comments made by these governments for the 87 applications with decisions in fiscal year 2005 and 45 appeals pending at the end of fiscal year 2005. Moreover, we reviewed the National Governors Association 2005 position paper on revisions to the regulations for processing land in trust, and we obtained draft revisions to the regulations from a Counselor to the Assistant Secretary for Indian Affairs at Interior. As described above, we decided that the BIA database was not reliable for our purposes. 
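The processing-time calculation described above amounts to simple date arithmetic from the application date (or a substitute start date) to the decision date. A sketch in Python with invented dates (the 365.25 days-per-year conversion is our assumption, not a GAO convention):

```python
from datetime import date
from statistics import median

# Invented (application date, decision date) pairs for illustration only.
applications = [
    (date(2003, 6, 1), date(2005, 3, 15)),
    (date(2004, 11, 2), date(2004, 12, 30)),
    (date(1999, 1, 20), date(2005, 9, 1)),
]

# Processing time in days, then converted to years (365.25 days/year).
days = [(decided - filed).days for filed, decided in applications]
years = [d / 365.25 for d in days]

print(f"minimum: {min(days)} days")
print(f"median: {median(years):.1f} years")
print(f"maximum: {max(years):.1f} years")
```

The same subtraction, with the end of fiscal year 2005 as the end date, yields the waiting times reported for the pending off-reservation applications and appeals.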
To determine the accuracy and reliability of the database, we compared the information in the database with other data sources, including spreadsheets used by a number of the BIA offices we visited to track land in trust applications, BIA realty reports under the Government Performance and Results Act, and BIA annual acreage reports. We also discussed the development of the current database and the proposed redesign of the database with staff in the Office of the Chief Information Officer within the Office of the Assistant Secretary for Indian Affairs. We performed our work between August 2005 and June 2006 in accordance with generally accepted government auditing standards.
BIA’s Process for Placing Land in Trust
In the case of a denial by the superintendent and no appeal, the process would end here.
Processing Times for 87 Land in Trust Applications with Decisions in Fiscal Year 2005
Processing Times for 28 Off-Reservation Land in Trust Applications Awaiting Consideration by BIA Central Office
Processing Times for 34 Appealed Land in Trust Decisions Awaiting Resolution by a BIA Regional Director
Comments from the Department of the Interior
GAO Contact and Staff Acknowledgments
In addition to the individual named above, Jeffery D. Malcolm, Assistant Director; Jean Cook; Mark Keenan; Daniel J. Semick; Carol Herrnstadt Shulman; and Susan Swearingen made key contributions to this report. Also contributing to the report were Jennifer DuBord, Susanna Kuebler, Greg Marchand, Justin Monroe, George Quinn, Anne Rhodes-Kline, Jena Y. Sinkfield, Ashanta Williams, and Greg Wilmoth.
In 1980, the Department of the Interior (Interior) established regulations to provide a uniform approach for taking land in trust. Trust status means the government holds title to the land in trust for tribes and individual Indians. Trust land is exempt from state and local taxes. The Secretary of the Interior has delegated primary responsibility for processing, reviewing, and deciding on applications to take land in trust to the Bureau of Indian Affairs (BIA). As part of this process, BIA must seek comments from affected state and local governments. Congress directed GAO to study BIA's processing of land in trust applications to determine the extent to which (1) BIA followed its regulations and (2) applications were processed in a timely manner, and (3) to identify any concerns raised by state and local governments about land in trust applications. GAO is also providing information on problems with BIA's data on the processing of land in trust applications. BIA generally followed its regulations for processing land in trust applications, although most of the criteria in the regulations are not specific and thus do not offer clear guidance for how BIA should apply them. For example, there are no guidelines on how to weigh the impact of lost tax revenues on local governments. As a result, the BIA decision maker has wide discretion. Of the 87 applications with decisions in fiscal year 2005, 80 were approved; 1 was denied, and 6 were closed because the applications were incomplete. BIA is considering revisions to the regulations that would clarify that applications will generally be approved unless there is clear evidence of significant negative impacts. These revisions would make BIA's decision-making process more transparent. Currently, BIA has no deadlines for making decisions on land in trust applications, but BIA is considering imposing about a 6-month time frame.
In addition, there is a 60-day time frame for BIA regional directors to rule on appeals. Based on these time frames, it appears that many land in trust applications have not been processed in a timely manner. First, the median processing time for the 87 applications with decisions in fiscal year 2005 was 1.2 years, ranging from 58 days to almost 19 years. Second, 28 complete off-reservation applications had been waiting an average of 1.4 years for a decision as of September 30, 2005. Third, 34 appeals had been waiting an average of about 3 years for resolution by a BIA regional director as of September 30, 2005. When opposing land in trust applications or appealing decisions, state and local governments principally cited concerns about lost tax revenues and jurisdictional issues. In commenting on applications prior to decisions made in fiscal year 2005, state and local governments opposed 12 of 87 applications, or about 14 percent. Also, as of September 30, 2005, 45 decisions were on administrative appeal to either a BIA regional director or Interior's Board of Indian Appeals, including 5 appealed decisions from fiscal year 2005. Although GAO found little opposition to applications with decisions in fiscal year 2005, some state and local governments GAO contacted said (1) they did not have access to sufficient information about the land in trust applications and (2) the 30-day comment period was not sufficient time in which to comment. GAO found the data in BIA's land in trust database, which was implemented in August 2004, were frequently incomplete and inaccurate. The database was hastily developed without defining user requirements and data fields. Specifically, (1) not all of the applications had been entered into the database and (2) the status of an application, as either approved or denied, was frequently incorrect.
A properly designed and implemented database with accurate data would provide BIA with important information to help better manage the land in trust process. BIA has already recognized the shortcomings and initiated an effort to redesign the database as necessary.
Background Congress passed the pediatric exclusivity provision as part of the Food and Drug Administration Modernization Act of 1997 to address a long-standing concern about the low percentage of prescription medications on the market that had been tested and approved for use in children. BPCA, which reauthorized the pediatric exclusivity provision, also included a requirement that FDA take into account adequate representation of race and ethnicity in the development of patient groups in pediatric drug studies. FDA is responsible for administering the law and has procedures for ensuring the study of drugs in pediatric patients as well as guidance that encourages (1) the inclusion of children from minority groups and (2) the collection and analysis of race-related study data. In this role, FDA must balance its policy of minimizing the number of children exposed to a drug during clinical trials with the need to maintain adequate sample sizes, including adequate representation of minority children, for effectively assessing a drug. Improvements Resulting from the Pediatric Exclusivity Provision In May 2001, we testified before the Senate Committee on Health, Education, Labor and Pensions that, since enactment of the pediatric exclusivity provision, both the number of new drugs studied in children and the number of therapeutic classes these drugs represent have substantially increased. We reported that hundreds of studies were being done on drugs that are important to pediatric patients because the drugs treat a variety of diseases or conditions that afflict children. Some were tests on relatively small numbers of pediatric patients to determine the correct dose for a specified age group, while other tests were on larger numbers of pediatric patients and were more complex and costly evaluations of a drug’s safety and effectiveness in children of various ages. BPCA reauthorized and expanded the provision for 5 more years through October 1, 2007.
FDA Procedures for Ensuring the Study of Drugs in Pediatric Patients The process for obtaining exclusive marketing rights can be initiated either by a drug sponsor or by FDA. A sponsor may submit a proposal to FDA to conduct drug studies. If FDA officials believe that studying a drug may produce health benefits for children, FDA issues a formal written request to the drug sponsor that includes, among other things, the type of studies to be conducted, the study design and goals, and the formulations and age groups to be studied. As of March 31, 2003, FDA had issued 272 written requests for pediatric studies. Of these, 220 were issued in response to sponsors’ proposals. FDA may issue a written request without the sponsor’s proposal if FDA identifies a need for pediatric data. FDA has issued 52 written requests without sponsors’ proposals. A written request may require more than 1 study of a drug; the 272 requests covered 631 studies and could involve more than 37,150 pediatric patient participants if all of the studies were completed. Regardless of the final study results, if FDA determines that the data submitted fairly respond to the written request and the studies were conducted properly, it will grant the sponsor 6 months of additional exclusive marketing rights. From enactment of the pediatric exclusivity provision in 1997 through April 30, 2003, FDA granted 6 months of additional exclusive marketing rights for 74 drugs. Sponsors are not required to include minority children in studies for pediatric exclusivity. Findings from these studies have led to labeling changes for pediatric use for 50 drugs. For example, a study of fluoxetine (an antidepressant) confirmed its effectiveness to treat major depressive disorders in children 8 to 17 years of age and obsessive-compulsive disorder in children 7 to 17 years of age.
In addition, studies for a new asthma drug—montelukast—led to new information on dosing and a new oral formulation permitting its use in children from the ages of 12 months to 5 years. FDA also has a process in place to encourage pediatric studies of drugs that manufacturers choose not to conduct. For drugs on which the patent or exclusive marketing rights have expired, commonly referred to as off-patent drugs, the National Institutes of Health (NIH), in collaboration with FDA, annually develops a list of drugs for which pediatric studies are needed and publishes it in the Federal Register. FDA may select a drug from this list, issue a written request to the manufacturer that holds the approved application for the drug, and, if the manufacturer does not respond within 30 days, forward the written request to NIH to issue a contract to conduct the study. In fiscal year 2003, HHS announced that NIH would set aside $25 million from its budget to conduct pediatric studies of off-patent drugs from this list. Similarly, if FDA issues a written request for a drug that is on-patent but the drug sponsor declines to test the drug in children, FDA can ask the Foundation for the National Institutes of Health, which supports the mission of NIH, to test the drug with funds raised from the private sector. Evidence Shows That Drug Effectiveness and Toxicity Can Vary among Racial and Ethnic Groups An important reason to include minorities in pediatric drug studies is to examine the effect of race or ethnicity on the disposition and effects of drugs in children. In adults, the activity of some drug-metabolizing enzymes varies with race or ethnicity. For example, one commonly prescribed drug used to treat gastric conditions, esomeprazole (Nexium), is partly metabolized by the CYP2C19 enzyme. Studies have shown that from 15 to 20 percent of Asians lack the enzyme CYP2C19.
As a result, some Asians metabolize the drug poorly and require lower doses because their bodies do not clear the drug as rapidly as individuals with this enzyme. Also, compared with Caucasians, certain Asian groups are more likely to require lower dosages of a variety of different antipsychotic drugs used to treat mental illness. Research in adults over the past several decades has further characterized significant differences among racial and ethnic groups in the metabolism, clinical effectiveness, and side-effect profiles of many clinically important drugs. These differences in response to drug therapy can be traced to variations in the distribution of genetic traits among racial and ethnic groups. These naturally occurring variations in the structures of genes, drug metabolism enzymes, receptor proteins, and other proteins that are involved in drug response affect how the body metabolizes certain drugs, including cardiovascular agents (beta-blockers, diuretics, calcium channel blockers, and angiotensin-converting enzyme (ACE) inhibitors) and central nervous system agents (antidepressants and antipsychotics). FDA’s Efforts to Account for Minority Children in Clinical Drug Studies BPCA requires that FDA take into account adequate representation of children from ethnic and racial minority groups when issuing written requests to drug sponsors. FDA regulations have required that in new drug applications, “effectiveness data (safety data) shall be presented by gender, age, and racial subgroups and shall identify any modifications of dose or dose interval needed for specific subgroups.” Other FDA guidance encourages the participation of racial and ethnic groups in all phases of drug development, recommends collection of race-related data during research and development, and recommends the analysis of the data for race-related effects.
FDA officials told us that if there is scientific evidence documenting possible mechanisms causing variation in drug response in minorities, such as a higher or lower prevalence of a specific drug metabolizing enzyme or drug receptor, then FDA’s written request will require the study sponsor to increase minority representation in the study. The officials told us that it is particularly important to consider racial differences in pediatric patients under two circumstances: (1) when there is a possible difference in drug metabolism or response demonstrated in adult clinical studies or documented in the scientific literature or (2) if a drug is used to treat a disease that disproportionately affects minorities. Absent these conditions, the officials told us that FDA does not require that sponsors include particular numbers or proportions of minority children in their studies. Limitations of Pediatric Drug Studies According to FDA officials, FDA’s policy is to minimize the number of children exposed to a drug during clinical studies, while maintaining an adequate sample size to draw clinically meaningful conclusions. Most pediatric studies for extension of exclusive marketing rights are designed to give health care providers information on the appropriate dosage or formulation of a drug in a pediatric population. As a result, most pediatric clinical drug studies are on a smaller scale than the clinical studies drug sponsors conduct to gain FDA approval to market a new drug. FDA officials told us that both the small number of patients in most pediatric studies and the fact that most studies seek to determine the appropriate dosage and safety for pediatric patients have precluded any definitive conclusions about racial or ethnic differences in drug response among children. No completed studies under the pediatric exclusivity provision to date have led to findings or labeling changes specific to any racial or ethnic group.
Smaller Proportions of Minority Children Were in Studies for Additional Marketing Exclusivity Requested before BPCA Compared to their proportions in the U.S. population, smaller proportions of children of racial and ethnic minority groups were included in the clinical drug studies we reviewed for additional exclusive marketing rights that FDA requested before BPCA took effect. However, for hypertension drugs where differences in racial response have been documented in adult drug studies, FDA required, and drug sponsors included, larger numbers of children from specific racial and ethnic groups. Most of FDA’s written requests for studies that have been issued since BPCA took effect required drug sponsors to report the number of racial and ethnic minorities in their final study results. In addition, some requests required drug sponsors to analyze the effects of race and ethnicity or increase minority representation for certain drugs where differences in racial response have been documented in adult drug studies. The Proportions of Children in Racial and Ethnic Minority Groups in Clinical Studies for Exclusive Marketing Rights Were Lower Than Their Proportions in the U.S. Population Compared with their proportions in the U.S. population, smaller proportions of African American, Hispanic, and Asian children were included in clinical studies for the drugs that were granted 6 months of additional exclusive marketing rights by FDA from January 4, 2002, through March 6, 2003. Across all clinical studies for the 23 drugs we examined, 7 percent of pediatric patients were African American, 5 percent were Hispanic, and 1 percent were Asian. Most pediatric patients were Caucasian—69 percent—and the race and ethnicity were unknown for 14 percent. Compared with the frequency distribution of African American and Hispanic children under 18 years of age for the U.S. 
population as a whole in 2000, the proportions of these two groups included in clinical drug studies were 8 and 12 percentage points lower, respectively, than their proportions in the U.S. population. The proportion of Asian children in clinical drug studies was 2 percentage points lower than their proportion in the U.S. population (see table 1). (See app. II for the number of children in racial and ethnic groups included in clinical studies for drugs granted additional exclusive marketing rights from January 4, 2002, through March 6, 2003.) Pediatric Studies for Hypertension Drugs Included More Children of Racial and Ethnic Minority Groups FDA required that sponsors increase representation of children of ethnic and racial minority groups in clinical studies for drugs used to treat diseases that disproportionately affect children in such groups or where evidence from studies on adults suggests that for certain classes of drugs differences in metabolism or response for racial or ethnic groups exist. For example, because hypertension is more prevalent and more severe in African Americans than in Caucasians, and adult responses to some hypertension therapies appear to be different in African American and non-African American populations, FDA’s written requests for these drugs require that the patient recruitment protocol be designed to ensure a mixture of African American and non-African American patients. Therefore, in pediatric clinical studies for three cardiovascular drugs used to treat hypertension, African American children represented 22 percent of study participants (see table 2). FDA Written Requests Issued since BPCA Require Sponsors to Increase Minority Representation for Certain Drugs For some written requests issued since BPCA took effect, FDA required sponsors to increase the participation of minority children. 
Specifically, 4 of the 22 written requests for such studies directed sponsors to increase the proportion of minority children participants or to analyze the effects of race and ethnicity. In 11 of the 22 requests, FDA directed drug sponsors to report the representation of pediatric patients of ethnic and racial minority groups when submitting final study results, but did not request that sponsors include a particular proportion of minority children or analyze the effects of race and ethnicity. The remaining 7 written requests made no mention of race or ethnicity. FDA’s four study requests that directed sponsors to increase the proportion of minority children participants or to analyze the effects of race or ethnicity took varied approaches. One written request by FDA required that the sponsor include a mixture of African American and non-African American patients for a study of a drug used to treat hypertension. Two other requests, for diabetes drugs, required the study sponsors to ensure that 50 percent of the study populations were composed of African American, Native American, and Hispanic patients because of a greater prevalence of diabetes in these groups. In the fourth written request, for a drug used to prevent bone loss, FDA required that the study sponsor examine potential demographic covariates, such as race. Drugs of Importance to Minority Children Are Being Studied in Response to Pediatric Exclusivity Provision Requests Some drugs that may be used to treat diseases or conditions that disproportionately affect children of racial and ethnic minority groups are being studied under the pediatric exclusivity provision. In response to FDA written requests, drug sponsors are conducting or have completed pediatric studies on drugs that might be used to treat hypertension, type II diabetes, sickle cell anemia, and other conditions that disproportionately affect minorities.
From January 4, 2002, through March 6, 2003, FDA granted exclusive marketing rights or issued written requests for studies of 10 drugs that might be used to treat diseases or conditions that disproportionately affect minority children. Specifically, 4 of the 23 drugs for which FDA granted additional exclusive marketing rights might be used to treat diseases or conditions that are more prevalent in minorities, such as asthma and hypertension (see table 3). In addition, 6 of the 22 written requests for new studies that FDA issued to drug manufacturers during this period also included treatments for diseases or conditions disproportionately affecting minorities, such as type II diabetes, hypertension, sickle cell anemia, HIV, and hepatitis B. FDA Monitoring of Data on Minority Representation Needs Improvement FDA does not have a system in place to serve as a single source of data to allow the agency to efficiently determine the extent of minority enrollment in drug studies under the pediatric exclusivity provision. Further, we found that some study reports submitted to FDA from drug sponsors did not specify the race and ethnicity of study participants. For example, in the completed studies for the 23 drugs granted additional exclusive marketing rights that we examined, the race or ethnicity of 86 percent of study participants was identified, but study sponsors did not specify the race or ethnicity of 960 children, or 14 percent of the studies’ populations. Recently, FDA issued draft guidance to improve drug sponsors’ reporting of racial and ethnic minority representation data, and FDA is planning to develop a database to monitor demographic variables in drug trials across the agency. There Is No Single Source of Data about Minority Representation There is no single data source at FDA to allow the agency to tabulate the overall numbers of racial and ethnic minorities in clinical studies. 
For example, to quantify the participation of racial and ethnic groups in studies for the 23 drugs granted additional exclusive marketing rights since January 2002, FDA had to extract and tally race data from about 50 separate final study reports that included nearly 7,000 children. Reporting of Minority Representation Data Is Not Standardized Final study results submitted to FDA from sponsors do not always fully describe the race and ethnicity of children who participated in clinical drug studies. In addition, FDA has not established uniform definitions for reporting racial and ethnic minorities in drug studies. In reviewing the study results for the 23 drugs granted additional exclusive marketing rights from January 4, 2002, through March 6, 2003, we found wide variation in how study sponsors presented and defined data regarding minority participation. Study sponsors reported minority representation according to non-standard definitions, which were often ambiguous. For example, one study classified its 200 participants as “mostly Caucasian” and included no further data on the remaining population. Similarly, in studies included in three applications involving more than 1,500 children, sponsors only identified the number of Caucasian patients and did not identify the racial or ethnic groups of non-Caucasian children. Across all studies for drugs granted exclusive marketing rights from January 4, 2002, through March 6, 2003, the race or ethnicity of 960 children, or about 14 percent of all study participants, was unknown. Eighty-six percent of study participants were identified by race or ethnicity. Further, we could identify the specific race or ethnicity for only 30 of the 268 subjects classified as “other” in study reports. FDA officials told us that they do not know which populations are included in the “other” category and that it likely includes children whose race was not determined. 
FDA Is Taking Steps to Improve Data Management Recently, FDA has begun to take steps to address data management issues. In January 2003, FDA issued draft guidance for industry recommending that study sponsors collect and report racial and ethnic representation using definitions developed by the Office of Management and Budget, which HHS adopted for use in HHS-funded and -sponsored data collection and reporting systems. FDA stated in its draft guidance that using uniform categories would enhance the consistency and comparability of data across studies and other HHS agencies, as well as promote the early identification of differences in physiological response among racial and ethnic groups. FDA’s draft guidance recommended that sponsors collect race and ethnicity data for clinical study participants using five racial groups (African American/Black, American Indian/Alaska Native, Asian, Native Hawaiian/Other Pacific Islander, and White) and two ethnic groups (Hispanic/Latino and not Hispanic/Latino). However, FDA guidance is not legally binding for either FDA or the sponsor. In addition, FDA has started to develop an agencywide system called the Demographic Information and Data Repository (DIDR) to electronically manage information regarding demographic characteristics of clinical trial participants, including age, sex, and race. DIDR is part of FDA’s response to a congressional report requesting that FDA monitor the representation of women in clinical studies. The conference report accompanying FDA’s 2002 appropriations identified a $500,000 increase in funding for FDA’s Office of Women’s Health to begin work on this system. FDA officials told us that it would be several years before the system is operational. Conclusions To have optimal effectiveness for all children, a drug should be tested in clinical studies that include pediatric patients representing the full range of population groups likely to receive the drug once it is marketed.
In addition to age, genetic factors related to race and ethnicity may play important roles in the variability of patients’ responses to a drug. Pediatric clinical drug studies with sufficient representation of minority groups are necessary to detect the presence or absence of differences in responses to certain drugs. The changes under BPCA to the pediatric exclusivity provision require that FDA take into account the adequate representation of children of racial and ethnic minorities in written requests for drug studies. However, it is too early to tell whether FDA’s written requests issued since enactment of BPCA will result in better reporting or a broader mix of participants. Currently, FDA is unable to accurately determine whether and to what extent minority groups are accounted for in final study results because it does not require sponsors to use uniform definitions. Though FDA’s draft guidance on standard definitions for reporting race and ethnicity is helpful, sponsors will not be obligated to use these categories to identify study participants unless FDA requests that they do so. The standardized collection of demographic data, such as race and ethnicity, would help ensure that FDA’s forthcoming DIDR will have the data needed to evaluate the risks and benefits of a drug in specific demographic groups. Recommendation for Executive Action To help the agency more efficiently monitor the participation of children of racial and ethnic groups in studies for additional exclusive marketing rights, we recommend that the Commissioner of FDA specify in written requests that study sponsors must use the racial and ethnic categories described in FDA’s January 2003 draft guidance to identify study participants in their reports to the agency. FDA can refuse to grant 6 months of additional exclusive marketing rights under the pediatric exclusivity provision for sponsors that do not fairly respond to FDA’s written requests.
Agency Comments and Our Evaluation FDA comments on a draft of this report reaffirmed the importance of clinical studies of drugs used to treat children. FDA agreed that the agency needed to improve the efficiency of its system for tracking demographic information about study participants. FDA also agreed with our recommendation and reported that it has already begun to implement it. FDA raised concerns about three aspects of our draft report. First, FDA was critical of our comparison of the proportions of minority children study participants to the proportions of minority children in the population. FDA commented that it would have been more appropriate for us to compare the proportions of minority children in clinical drug studies with the proportions of minority children with the specific condition each drug is intended to treat. We agree that such a comparison would have been useful, but both we and FDA found that the information needed for such comparisons—the racial and ethnic group distributions of children with many of the specific conditions treated by the drugs studied for additional exclusive marketing rights—was not available. Further, FDA has previously used the methodology we employed in its analyses of adult study participants. Second, FDA was concerned about what it regards as the implications of our finding that the proportions of minority children in pediatric studies requested by FDA before the passage of BPCA were less than their proportions in the general population. FDA incorrectly suggested that we advocate that “the percentage of children in each clinical drug trial would or should track the percentage of children in the general population.” Our report does not make any recommendations about the preferred study populations for any clinical drug trial. 
Further, we did not disagree with FDA’s current policy requiring larger proportions of children from racial and ethnic minority groups when a studied drug treats a condition that disproportionately affects minorities or when it is known from adult studies that the effects of a drug may be different in persons from different racial or ethnic groups. Third, FDA noted that the race or ethnicity of a high percentage of study participants was identified even before BPCA was enacted. Our findings agree with that assessment—we reported that the race or ethnicity of study participants was identified for 86 percent of study participants—but we believe that FDA should have been able to identify the race or ethnicity of every study participant. FDA’s written comments are reprinted in appendix III of this report. FDA also provided technical comments, which we considered and incorporated where appropriate. We are sending this report to the Commissioner of FDA and to other interested persons. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7119. Another contact and major contributors to this report are listed in appendix IV. Appendix I: Scope and Methodology To assess the extent to which children of racial and ethnic groups are represented in clinical studies for drugs granted exclusive marketing rights, we reviewed data for the 23 drugs that were granted exclusive marketing rights from January 4, 2002, through March 6, 2003. For these 23 drugs, we determined the total number of children in four racial and ethnic groups enrolled in each study from Food and Drug Administration summary documents, and new drug applications (NDA) or supplemental new drug applications (sNDA) submitted to FDA for this time period. 
We collected clinical study participation data for three racial groups (African American, Asian, and Caucasian) and one ethnic group (Hispanic) because drug sponsors commonly used these categories. However, the clinical studies included in the NDAs or sNDAs submitted during this period were conducted before the effective date of the Best Pharmaceuticals for Children Act of 2002 because the time lag between when FDA issues a written request for a pediatric study and when sponsors submit final study results ranged from 1 to 4 years. To assess the extent to which FDA required drug sponsors to take into account the adequate representation of children of racial and ethnic groups in clinical studies for drugs for which written requests have been issued since BPCA took effect, we reviewed the 22 written requests issued for pediatric drug studies by FDA from January 4, 2002, through March 6, 2003. To determine whether drugs used to treat conditions or diseases disproportionately affecting minorities are being studied under the pediatric exclusivity provision, we obtained data on the prevalence of selected diseases or conditions that disproportionately affect minorities and examined the list of drugs for which FDA has either granted exclusive marketing rights or issued study requests from January 4, 2002, through March 6, 2003, to determine if any of these drugs may be used to treat these diseases or conditions. We compiled data on the estimated prevalence of the diseases and conditions by race and ethnicity from the National Center for Health Statistics; the National Center for HIV, STD, and Tuberculosis Prevention; and research in scientific journals reporting the prevalence of these diseases and conditions in minority children.
We interviewed National Institutes of Health officials, pharmacology experts, and pediatric clinicians, including members of the American Academy of Pediatrics and the Pharmaceutical Research and Manufacturers of America, to gain their perspectives on the representation of minorities in drug studies and the study of drugs of importance to these populations. To evaluate FDA’s management of pediatric clinical study data on minority representation and its guidance to sponsors on reporting such data, we reviewed FDA’s policies, guidance, and rules for inclusion and reporting of minority representation in drug studies. We interviewed FDA officials within the Office of Counter-Terrorism and Pediatric Drug Development to determine how they interpret and implement these policies for the pediatric exclusivity program. We spoke with officials in the Office of Women’s Health who were responsible for establishing a database to monitor demographic variables to determine how an agencywide demographic database might affect the monitoring of minority participation in drug studies. We also reviewed FDA’s response to a congressional request to develop an agencywide demographic database. We conducted our work from October 2002 through September 2003 in accordance with generally accepted government auditing standards. Appendix II: The Number of Children by Racial and Ethnic Group in Studies for Drugs Granted Exclusive Marketing Rights We obtained the number of children by race or ethnic group who participated in the clinical drug studies for the 23 NDAs or sNDAs for exclusive marketing rights in our sample by reviewing the portions of final study reports that provide information on the demographic representation in the study. Table 4 presents the number of children of racial and ethnic groups, by drug class, in clinical studies for drugs granted exclusive marketing rights from January 4, 2002, through March 6, 2003.
It is important to recognize that the FDA written requests outlining the study design for the 23 NDAs or sNDAs that we examined preceded the passage of BPCA on January 4, 2002. The time between when FDA issued written requests for pediatric studies and sponsors conducted and submitted final study results for FDA review and approval ranged from 1 to 4 years. Appendix III: Comments from the Food and Drug Administration Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments Gloria E. Taylor, Sharif Idris, George Bogart, and Elizabeth T. Morrison also made major contributions to this report.
Drug effectiveness and adverse events can vary between children and adults and among racial and ethnic groups. The Food and Drug Administration (FDA) is authorized under the pediatric exclusivity provision to grant drug sponsors 6 months of additional exclusive marketing rights for conducting clinical drug studies in children. The Best Pharmaceuticals for Children Act of 2002 (BPCA) expanded this provision to require FDA to take into account the adequacy of minority representation in pediatric exclusivity studies. BPCA also directed GAO to evaluate the representation of minorities in such studies. GAO examined the extent to which minority children are represented, whether drugs that treat diseases disproportionately affecting minority groups are studied under the provision, and FDA's monitoring of the representation of minority children in the studies. GAO reviewed related FDA documents, FDA requests for pediatric studies and final study results, and interviewed FDA officials and other experts. Compared with the proportions of children from racial and ethnic minority groups in the U.S. population, smaller proportions of children from minority groups were included in the pediatric clinical drug studies requested by FDA before the enactment of BPCA that GAO reviewed. However, FDA required, and drug sponsors included, larger proportions of African American children in clinical studies for hypertension drugs because there is evidence that hypertension is more prevalent and more severe among African Americans. Furthermore, FDA has requested that forthcoming studies for certain drugs include larger proportions of minority children. Studies of some drugs that may be used to treat diseases or conditions that disproportionately affect minorities have been completed and additional such studies have been requested by FDA. 
From January 4, 2002, through March 6, 2003, FDA granted additional exclusive marketing rights to four drugs that may be used to treat conditions such as hypertension, type II diabetes, and sickle cell anemia--conditions or diseases that disproportionately affect minority children. During that time, FDA also issued written requests for studies of six drugs for these conditions. FDA does not have a system in place to serve as a single source of data that would allow the agency to efficiently determine the extent of participation of children by racial and ethnic group under the pediatric exclusivity provision. GAO found that some study reports submitted to FDA by drug sponsors did not specify the race and ethnicity of all study participants. Across all the studies for drugs granted additional exclusive marketing rights that GAO reviewed, 86 percent of study participants were identifiable by race or ethnicity; the race or ethnicity of the remaining 14 percent was unknown. In January 2003, FDA issued draft guidance recommending that drug sponsors use standard definitions for race and ethnicity in drug studies. However, drug sponsors are not required to use these definitions. FDA has also begun to develop an agency-wide system to monitor demographic characteristics of study participants, such as age, sex, and race. FDA agreed with the GAO recommendation to specify the categories that sponsors should use to report minority representation, as well as with GAO's findings regarding the efficiency of its data collection systems. FDA expressed concerns about the GAO comparison of the proportion of minorities in drug studies to their proportion in the U.S. population. However, FDA had previously used the methodology GAO employed in its analyses of adult study participants.
Background

The World Bank Group’s member countries collectively determine policy and make investment decisions. Its board of directors is made up of 24 executive directors who represent all 185 member countries. The U.S. Executive Director is the main liaison between the United States and the World Bank Group. Treasury has the lead role in working with the U.S. Executive Director to determine the U.S. position on proposed World Bank Group projects. As a member country of the World Bank Group, the United States may support, abstain from voting on, or vote against a proposed project. However, no single member country can veto a proposed project. In 1989, the World Bank established social and environmental guidelines, or Safeguard Policies, to identify and address potentially significant negative environmental and social impacts. In 2006, IFC developed its own distinct performance standards for assessing the environmental and social impact of its projects, and MIGA introduced its own standards, which were largely based on IFC’s, in 2007. The World Bank, IFC, and MIGA have also established policies guiding the disclosure of project information to the public. IFC and MIGA guidelines state that proposed project documents, including environmental assessments, should be released 60 days prior to a board vote on projects with potentially significant adverse impacts. The World Bank disclosure guidelines do not specify a number of days. World Bank Group entities screen project proposals for potential environmental impacts and assign one of four categories to determine the type of environmental assessment needed. See table 1. Title XIII outlines the U.S. government’s basic requirements for reviewing the potential environmental and social impacts of proposed multilateral development bank projects, including those of the World Bank Group. The overall purpose of the legislation is to ensure that U.S.
assistance to multilateral development banks promotes sustainable use of natural resources and the protection of the environment, public health, and the status of indigenous people in developing countries. In 1989, Congress amended Title XIII to include Section 1307, commonly referred to as the Pelosi Amendment. The Pelosi Amendment directly affects whether the U.S. government will support a proposed project. It directs the U.S. government to ensure that a proposed project with potentially significant negative impacts meets certain requirements, such as making publicly available an assessment of the project’s environmental impact 120 days before the World Bank Group’s Board of Directors votes on the project. If the World Bank Group’s project sponsor does not make the assessment or a summary of the assessment publicly available within this time frame, the law instructs the U.S. government not to vote in favor of the proposal. The law also requires that the assessment include an analysis of the project’s cumulative and associated impacts, as well as alternatives to the proposed project. As a result of the Pelosi Amendment, the World Bank Group and other major multilateral development banks began requiring project sponsors to prepare environmental impact assessments and make them available to affected groups, according to representatives from U.S. government agencies and nongovernmental organizations, as well as a 1998 U.S. Congressional Research Service report assessing the impact of the Pelosi Amendment. Both the Pelosi Amendment and other sections of Title XIII specify the responsibilities of several U.S. agencies in monitoring proposed multilateral development bank projects with the potential for significant environmental and social impacts. As the lead U.S.
agency interacting with the multilateral development banks, Treasury is to take the following actions:

ensure that an environmental impact assessment or a comprehensive summary accompanies project proposals;

consult with and consider recommendations from other federal agencies and interested members of the public regarding this assessment;

determine whether an environmental assessment has been made publicly available at least 120 days prior to the board vote on the proposal;

instruct the U.S. Executive Director on the U.S. position for each proposed project;

consult with other U.S. agencies to develop environmental impact review procedures for proposed multilateral development bank projects and assist in implementing these procedures; and

provide an annual report to Congress on the environmental sustainability of multilateral development banks’ operations and the efficacy of U.S. efforts in this process.

Title XIII also requires USAID to work with Treasury and the Department of State (State) to analyze, where feasible, the environmental, social, and other impacts of proposed multilateral development bank projects “well in advance” of the projects’ board vote date, and to ensure that investigations are undertaken for proposals that are likely to have substantial adverse impacts. USAID is also required to provide its own report to Congress that identifies proposals likely to have adverse impacts on the environment, natural resources, public health, or indigenous peoples. State and Treasury are to work with USAID to vigorously promote mechanisms to strengthen the environmental performance of multilateral development banks.

U.S. Agencies Take Various Approaches to Meet Legal Requirements for Reviewing World Bank Group Proposals Likely to Impact the Environment

Treasury addresses Pelosi Amendment requirements for assessing World Bank Group projects by conducting reviews that focus on procedural requirements, such as whether the project’s environmental assessment is made publicly available by the project sponsor 120 days before the World Bank Group’s board vote date. Treasury also engages in required interagency consultations by leading a weekly interagency working group. However, Treasury does not always identify projects with potentially significant environmental and social impacts in advance of the interagency meetings, making it difficult for participants to provide effective input. Because they have different responsibilities and some flexibility within their statutory requirements, USAID and Treasury take different approaches to analyzing in greater depth the environmental and social impacts of a few controversial projects. The agencies learn about many such projects through regular interaction with nongovernmental organizations.

Treasury Addresses Pelosi Amendment Requirements by Conducting Procedural Reviews and Engaging in Interagency Consultations

Treasury Reviews of World Bank Group Projects Generally Focus on Procedural Requirements

As required by the Pelosi Amendment, Treasury conducts reviews of environmental documentation for World Bank Group proposals that could have significant environmental or social impacts. Treasury’s efforts generally focus on fulfilling the requirements of the legislation, which are largely procedural; specifically, Treasury staff review documentation on World Bank Group projects to ensure that the procedural requirements specified in the legislation are met.
These requirements include ensuring that an environmental impact assessment or a comprehensive summary of the assessment is made publicly available 120 days prior to the World Bank Group’s Board of Executive Directors vote date and that the summary contains items such as discussions of alternatives to the proposed project and the project’s direct and indirect environmental impacts. A Treasury Department official who reviews project documentation stated that the review process involves attempting to ascertain the actual disclosure date, which is not necessarily the date or dates listed on the documents. The Treasury reviews generally take place once the World Bank Group’s Board of Executive Directors schedules a vote for the proposed project. In practice, this can be anywhere from 1 to 3 weeks prior to the scheduled vote date. Treasury officials estimated that in calendar year 2007 they reviewed more than 95 projects that they determined could have significant environmental or social impacts. The Pelosi Amendment requires that Treasury consider, among other things, associated and cumulative environmental impacts in its review of World Bank Group project documentation but does not specify the criteria to be used in making these determinations. As a result, one Treasury economist said she uses professional judgment to determine whether the evidence “seems reasonable” when reviewing environmental assessments for compliance with the Pelosi Amendment’s requirement regarding associated and cumulative environmental impacts. Because the amendment does not require Treasury to review proposals to determine whether they meet the World Bank Group’s environmental and social safeguard policies, Treasury generally does not evaluate proposals for compliance with these policies. Treasury officials stated that they do not do so because the multilateral development banks have their own procedures, staff, and accountability mechanisms for ensuring compliance with bank policies.
Treasury officials noted that they occasionally may closely review the analysis contained in the environmental assessment or other project documentation if they have concerns about the environmental and social impact of the project. However, these officials told us that they rarely instruct the U.S. Executive Director not to support a proposal because of deficiencies in the assessment’s technical analysis. Because Treasury is required by law to review only proposals on which the World Bank Group board votes, a subset of proposals, specifically umbrella proposals, is not always reviewed by Treasury for compliance with the Pelosi Amendment. These proposals, which are presented to the World Bank Group’s board for approval, contain an environmental assessment that represents a framework for multiple subprojects. Although the board must approve the proposal as a whole, future subprojects—some of which could have significant adverse impacts—are not subject to board approval. Since the board does not vote on subprojects, the Pelosi Amendment does not require Treasury to review them. Instead, Treasury reviews these types of proposals on a case-by-case basis. According to Treasury officials, they use professional judgment to determine whether the intended subprojects are likely to have significant adverse environmental and social impacts and, therefore, whether to review environmental documents associated with the subprojects. Treasury reports its findings to Congress, as required by law, but does not provide these reports in a timely manner. Federal law requires Treasury to provide an annual report to Congress summarizing the environmental performance of the multilateral development banks, including the World Bank Group. Treasury’s most recent report is for fiscal year 2005. Treasury officials told us in October 2007 and again in August 2008 that they were still preparing the report for fiscal year 2006.
Treasury Engages in Interagency Consultations to Address Pelosi Amendment Requirements

Treasury addresses the Pelosi Amendment’s requirement that it consult with other agencies by leading an interagency working group on multilateral assistance that meets once a week for about an hour to discuss U.S. agencies’ concerns regarding proposed World Bank Group projects. This group discusses political, economic, environmental, social, and other concerns related to proposed multilateral development bank projects, including those of the World Bank Group. The purpose of these discussions is to solicit agency input as to whether Treasury should instruct the U.S. Executive Director to support the projects. In addition to Treasury, State, USAID, and the Commerce Department are regular participants at the meetings. Other agencies, such as EPA, have attended in the past. According to participants, the volume of proposals and the brief discussion time at the working group meetings have limited the quality of discussion on proposals with potentially significant environmental and social impacts. Approximately 1 week prior to each working group meeting, Treasury distributes an agenda containing a list of all multilateral development bank proposals that are scheduled for a vote over the next several weeks. The number of proposals to be discussed in the hour-long meeting each week varies, averaging about 60, but in some weeks, such as near the end of the World Bank Group’s fiscal year, it has been about 150. Treasury officials told us they assume the agencies will review the proposals in advance and inform Treasury of any concerns they may have. They said that if an agency does have an issue with a proposal, Treasury staff will informally discuss the concern with the agency and attempt to resolve it prior to the meeting.
Officials from participating agencies we met with stated that because of the volume of proposals to review and the short time span in which to discuss them, they rely on Treasury to identify in the meeting agenda the proposals it believes to be of concern, to facilitate the discussion. However, Treasury has not routinely done so. For example, of the more than 95 World Bank Group proposals in 2007 that Treasury considered likely to have significant adverse environmental impacts, the agency identified only 14 in the agendas it sent out in advance of the working group meetings. Treasury officials stated that, given all their other responsibilities and limited resources, they had not been focused on identifying proposals likely to have significant adverse environmental impacts for the weekly working group meeting agendas.

USAID and Treasury Analyze a Selected Few Controversial Projects in More Depth, and Learn about Many Such Projects through Regular Interaction with Nongovernmental Organizations

USAID and Treasury Are Governed by Different Statutory Responsibilities for Analyzing the Impacts of Controversial Projects

Title XIII does not specify a particular process that USAID and Treasury should use when considering environmental assessments, and the agencies use different standards when assessing the sufficiency of environmental impact assessments. Though USAID and Treasury are charged with different statutory responsibilities, each agency may evaluate environmental impact assessments of proposed projects during its review process. Neither federal law nor agency regulations specify one standard to be used across the federal government when considering proposals or environmental impact assessments. Section 1303 of Title XIII requires USAID to ensure that other U.S.
agencies and overseas USAID missions analyze, where feasible, the environmental impacts of multilateral development loans well in advance of the loans’ approval to determine whether the proposals will contribute to the sustainable development of the borrowing country. USAID is also required to ensure that investigations of proposed projects with “substantial adverse impacts” are conducted. Because the law contains no prescriptive requirements for how to ensure that investigations of projects with likely substantial adverse impacts are conducted, USAID has taken various approaches to fulfilling this requirement. In previous years, the agency conducted a brief annual review of a large number of proposed projects; its current approach is to analyze a much smaller number of proposed projects thoroughly. For example, USAID’s 1999 report to Congress briefly highlighted environmental concerns in 29 projects. In contrast, the latest report, from April 2008, provides an in-depth analysis of nine projects. According to the official responsible for conducting the investigations, USAID reviews the project’s environmental assessment, as well as related studies, such as the environmental management plan. To perform its analyses, USAID employs a technical expert, who evaluates proposals’ environmental and social impacts against USAID standards as well as other guidance, such as that developed by the Council on Environmental Quality (CEQ). This technical expert told us she uses certain criteria when determining whether a proposed project should be investigated and applies them at her discretion. These criteria include, among others, the significance and potential of adverse cumulative impacts, the ability of the proposal to serve as a model for similar proposals within a particular sector, and the potential for the proposal to undermine USAID’s sustainable development activities. USAID may also perform site visits to the proposed project location.
During these visits, USAID and other U.S. government officials, including those from Treasury and State, may meet with stakeholders such as the project sponsor, World Bank Group staff, host-country government officials, and local communities affected by the proposal. In addition, if the World Bank Group board approves the project, USAID may continue to monitor and report on it once financing and construction begin. Treasury, in determining the U.S. position on proposed actions to be taken by the World Bank Group, is required to develop and prescribe procedures that consider environmental impact assessments, the interagency and public review of these assessments, and other environmental reviews and consultations required by law. Treasury issued regulations in 1992 to fulfill this requirement. While these regulations address how Treasury will instruct the U.S. Executive Director to proceed at the World Bank Group when an environmental analysis is determined to be insufficient, they do not identify a set of criteria or a standard against which to measure sufficiency. While USAID uses guidance and regulations issued by CEQ when reviewing different aspects of environmental assessments, Treasury often uses internal requirements issued by the World Bank. Although not required to do so by law, Treasury occasionally conducts additional, more in-depth investigations of a few World Bank Group proposals that it determines to be controversial, such as mining or oil and gas projects, or that present opportunities for reducing adverse impacts.
Treasury officials said that, unlike in its more procedural reviews of proposals for compliance with the Pelosi Amendment, Treasury may evaluate these proposals’ documentation for compliance with the multilateral development banks’ internal requirements for assessing environmental and social impacts. However, they do not necessarily determine, for example, whether the World Bank’s “good practices” have been followed. The World Bank’s good practices, compiled in its Environmental Assessment Sourcebook, give examples of practices that the World Bank considers models for project managers to emulate, such as establishing project supervision and monitoring programs.

U.S. Government Agencies Focus Attention Predominantly on Projects Identified by Nongovernmental Organizations

According to officials from U.S. agencies, of the few proposed projects that Treasury and USAID select for in-depth analysis, many come to their attention through regular interaction with nongovernmental organizations. To foster dialogue with interested nongovernmental organizations and to fulfill legislative requirements, Treasury and USAID meet with nongovernmental organizations in a forum commonly referred to as the Tuesday Group, since it generally meets on the first Tuesday of each month. At this forum, the agencies often obtain leads on potentially controversial projects through discussions of planned and ongoing multilateral development bank projects that may have significant adverse environmental and social impacts. In mid-2008, Treasury informally proposed changing the structure of these meetings. Specifically, Treasury’s proposal would establish a steering committee consisting of representatives from Treasury, USAID, and two nongovernmental organizations for the purpose of reviewing and selecting submitted discussion topics for subsequent meetings, which would then focus on the issues that the steering committee identifies.
Treasury officials said that this proposal is meant to make the meetings more efficient, since the officials have many responsibilities and can devote only a small share of their time to assessing the environmental impacts of multilateral development bank projects. We discussed this proposal with the Bank Information Center in September 2008; the Center and Treasury are considering a compromise proposal that would have an agreed-upon agenda while setting aside some time for open-ended discussion.

U.S. Government Ability to Identify Environmental Concerns Is Limited, and World Bank Group Projects with Potentially Significant Adverse Impacts Proceed with or without U.S. Government Support

Time constraints limit the U.S. government’s ability to identify the environmental and social concerns associated with World Bank Group projects before the World Bank Group board votes on them, and projects with potentially significant adverse impacts proceed with or without U.S. government support. By the time a project is ready for a board vote, it is often in its final design stage or, in some cases, already under construction, which limits U.S. agencies’ ability to identify ways to mitigate environmental and social issues associated with the project. Furthermore, the World Bank Group consistently approves projects with potentially significant adverse impacts without U.S. government support; between January 2004 and April 2008, all 34 of the projects the U.S. Executive Director did not support because they did not meet the Pelosi Amendment requirements were still approved by the World Bank Group’s Board of Directors and moved forward. In addition, the U.S. government occasionally supports projects with significant environmental impacts, due to competing priorities and a belief that potential impacts can be mitigated.

The U.S. Government’s Ability to Identify Environmental and Social Issues Associated with Most World Bank Group Projects Is Limited by Review Time Frames

Officials from agencies that participate in the interagency working group told us that they usually do not have sufficient time to identify environmental and social issues associated with projects in the few weeks between the World Bank Group’s notification that a proposed project is scheduled for a vote and the date the board votes on the project, a period that includes the working group meeting at which the project could be discussed. Figure 1 shows the timeline of events related to the U.S. government’s review of proposed World Bank Group projects. Treasury officials said that they are notified about projects when they receive a project appraisal document, which describes the project and is what the board reviews when it votes on a project. They said they generally receive this document about 1 to 3 weeks before the board is scheduled to vote on the project. They then put these projects on the working group meeting agenda. The working group meetings generally take place approximately 1 to 2 weeks before the board votes on projects, and Treasury e-mails a list of projects and estimated board vote dates to relevant U.S. agencies about 1 week before each meeting. State and USAID officials noted that this compressed time frame makes it difficult to review project documentation and solicit input from relevant officials in other offices within their agencies. Furthermore, USAID staff in countries where projects are being proposed have limited time to review project documentation because they do not have access to the necessary project documents and depend on staff in Washington to make this information available to them, according to USAID officials. Even when U.S. agencies are able to identify project-related issues, the U.S. government has little time to discuss these issues with the World Bank Group.
USAID officials stated that project stakeholders are unlikely to alter a project without sufficient time for discussion before a vote. Even when agencies can identify issues before the board vote, the project is often in its final design stage or, in some cases, already under construction, so the extent to which the World Bank Group can mitigate the issues is limited. Treasury officials stated that there is little opportunity to influence project design once the World Bank Group has released the project appraisal document shortly before the board votes on the project. In addition, an April 2008 USAID report to Congress stated that there are inadequate opportunities to identify, avert, or mitigate adverse environmental and social impacts associated with the projects even when the multilateral development banks release the environmental documents 120 days before the board votes, as the Pelosi Amendment requires. In some instances, projects may already be under construction. For example, construction began on an IFC-financed gold mine in Guatemala a month before the project went to the board for a vote, despite problems associated with the project, including inadequate consultations with the affected community and potential water contamination, according to a USAID report. In its 2005 annual report to Congress, Treasury noted that, while the timeliness and quality of environmental impact assessments of World Bank Group projects had improved, the agency remained concerned about the need to determine appropriate interventions if projects had already begun construction or suffered from a legacy of unaddressed environmental damage.

The World Bank Group Consistently Approves Projects That Lack U.S. Government Support

Since 2004, the World Bank Group has always approved proposals that lack U.S. support, even if they have potentially significant adverse environmental and social impacts. A lack of U.S.
government support for proposals that do not comply with the Pelosi Amendment does not prevent the board from approving such proposals because one member country’s vote cannot prevent approval. Between January 2004 and May 2008, all 34 of the proposed projects the U.S. Executive Director did not support because they did not meet the Pelosi Amendment requirements were still approved by the World Bank Group Board of Directors and moved forward. (See fig. 2 for the number of proposals Treasury has not supported due to lack of compliance with the Pelosi Amendment.) Treasury officials told us that overall U.S. interests are sometimes served when the board approves projects that the Pelosi Amendment prevents the United States from supporting. For example, in one case, the board approved a proposal for a development project in Iraq that the U.S. government would otherwise have wanted to support but could not because it did not meet the amendment’s 120-day disclosure deadline. Furthermore, once the board approves proposed projects, they are unlikely to be modified to address U.S. concerns about adverse environmental and social impacts. U.S. government officials informed us that changes are seldom made to a project as a result of the U.S. Executive Director not supporting it. According to Treasury officials, once the board has approved the proposal and the World Bank Group has funded the project, Treasury has little leverage to influence changes that would mitigate adverse environmental or social impacts.

The United States Occasionally Supports Projects with Significant Environmental Impacts, Due to Competing Priorities

In some cases, Treasury recommends that the U.S. Executive Director support projects with significant adverse impacts. Some U.S. agencies may want to oppose these projects because of environmental concerns, but Treasury sometimes recommends that the U.S.
Executive Director vote in favor of a project if Treasury determines that it is in compliance with the Pelosi Amendment and that potential impacts can be mitigated. As the lead U.S. agency in formulating the U.S. government’s position on proposed multilateral development bank projects, Treasury takes into account competing priorities—such as economic development—as well as environmental concerns, and its determination therefore does not always reflect consensus among agencies or among units within the same agency. For example, when reviewing an environmental assessment and associated documentation for a hydroelectric dam project in Uganda, USAID and EPA raised environmental concerns and recommended that Treasury instruct the U.S. Executive Director not to support the project. USAID and EPA officials opposed the project because, among other things, they believed the environmental analysis was incomplete and the analysis of the dam’s impact on endangered species was inadequate. However, Treasury ultimately supported the project due to economic considerations, having determined that the measures the project sponsor would take to mitigate the adverse impacts were sufficient. In its memo instructing the U.S. Executive Director to support the project, Treasury stated that the project would help reduce Uganda’s electricity shortage and thereby lower an obstacle to economic growth and development. State also ultimately supported the project because it brought an acceptable balance across multiple issues, including clean energy, economic development, and political support for the Ugandan government, according to State officials. We could not determine the extent to which the U.S. government balances competing priorities for projects with potentially significant adverse environmental and social impacts that are compliant with the Pelosi Amendment.
Between January 2004 and July 2008, Treasury supported 17 World Bank Group proposals for which it determined that significant environmental impacts would occur but be mitigated. However, because Treasury does not generally write memos to the U.S. Executive Director for projects it supports, it does not maintain documentation to show to what extent other issues may have outweighed environmental and social concerns in these cases. Treasury officials responsible for conducting environmental reviews stated that for controversial projects, decisions are made by senior-level administration officials.

Conclusion

Given the potential consequences of World Bank Group projects with significant environmental and social impacts, the overriding constraints posed by the World Bank Group’s project development and approval process, and the restrictions the Pelosi Amendment imposes on the U.S. government’s decision-making, U.S. agencies must coordinate efficiently to maximize their resources within the very limited time they have available to review upcoming projects. However, Treasury is not maximizing the effectiveness of a major mechanism for gathering interagency views on projects—the weekly interagency working group it leads. The working group meetings, which Treasury uses to meet its legal requirement to consult with other agencies on the possible impacts of World Bank Group proposals, are meant to provide an opportunity for all participants to use their expertise and discuss their perspectives and concerns as part of the vetting process to determine a U.S. position on the proposals. Prior to the meetings, Treasury’s staff flag the projects that they believe have potentially significant environmental and social impacts. Treasury has not, however, routinely passed this information on to the other participants of the working group in advance of group meetings. For example, Treasury identified only about 15 percent of such projects in working group agendas in 2007.
Without this identification, working group agencies have been limited in their ability to effectively contribute to the interagency effort to evaluate proposed World Bank Group projects. Recommendation for Executive Action In order to improve U.S. agencies’ ability to effectively contribute to the interagency effort to evaluate World Bank Group proposals that are likely to have significant adverse environmental and social impacts, we recommend that the Secretary of the Treasury, in his capacity as the chair of the Working Group on Multilateral Assistance, routinely identify all proposals of concern in advance of working group meetings with other agencies in order to maximize the ability of all participants to contribute to the evaluation of World Bank Group proposals. Agency Comments and Our Evaluation We provided a draft of this report to Treasury and USAID for their review and comment. Treasury and USAID provided written comments, which are reprinted in appendixes II and III, respectively. Treasury also provided technical comments, which are incorporated as appropriate throughout the report. Treasury agreed with our recommendation and noted in its technical comments that it welcomes this recommendation and is already taking a number of measures to comply with it. Treasury also disagreed with two of our findings. First, Treasury disputed our finding that its efforts had little impact, because it believed we did not adequately note Treasury’s behind-the-scenes efforts to influence project design. In response, we have added language to the report to reflect some of these efforts. However, Treasury did not provide us with information to gauge the impact of these communications, and Treasury officials told us they could not determine what impact these communications have on project design. Second, Treasury believed that we characterized the interagency process as limited to weekly meetings. 
While we disagree with Treasury’s interpretation, we added language to clarify the extent of interagency communication. In its comments, USAID suggested that the recommendation may warrant further guidance to more clearly address very short lead times for notice to other agencies. We did not, however, revise this recommendation because we believe it is up to Treasury to determine how best to implement the recommendation. USAID’s comments also raised several issues, including that our report title was overly expansive. In response, we modified the title to clarify that the report addresses certain procedures required by U.S. law. USAID was also concerned that our report did not address all provisions in Title XIII, such as creating a system for information exchange with other interested member countries. These were not within the scope of this report because we focused our review on those sections of Title XIII that directly address U.S. government oversight of the potential environmental and social concerns associated with proposed World Bank Group projects. USAID was also concerned that we did not address whether Treasury has sufficient expertise to evaluate measures to mitigate environmental damage. However, it is beyond the scope of this report to determine whether mitigation measures have been effective; we anticipate addressing this issue in our next report, which will focus on project implementation. More detail on USAID’s comments and our evaluation can be found in appendix III. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, and the Administrator of USAID. The report is available at no charge on the GAO Web site at http://www.gao.gov. Appendix I: Objectives, Scope, and Methodology Our objectives were to assess (1) how U.S. 
agencies implement their legislative requirements to review the potential environmental and social concerns associated with proposed World Bank Group projects, and (2) agencies’ ability to identify and address these concerns. To assess how U.S. agencies implement their legislative requirements, we reviewed environmental legislation, including Title XIII of the International Financial Institutions Act of 1977, as amended; the procedures for the environmental review of proposed projects of multilateral development banks in 31 C.F.R. Part 26; and the U.S. Agency for International Development’s (USAID) Environmental Procedures contained in 22 C.F.R. Part 216. We also reviewed legislation governing U.S. environmental assessments, such as the National Environmental Policy Act of 1969 (NEPA) and Council on Environmental Quality Regulations for Implementing the Procedural Provisions of NEPA, 40 C.F.R. Parts 1500-1508; and Council on Environmental Quality guidance on implementing 40 C.F.R. Parts 1500-1508. To determine how agencies implement their legislative requirements, we interviewed U.S. government officials from the Environmental Protection Agency (EPA), USAID, and the Departments of State and the Treasury (Treasury), as well as expert staff from environmental nongovernmental organizations. Because not every agency keeps complete records of its environmental oversight activities or has internal policies governing such actions, certain procedural documentation could not be provided. In such cases, we relied on agency officials’ testimonial evidence. To examine agencies’ ability to identify and address environmental concerns of proposed World Bank Group projects, we reviewed agency documents such as periodic reports to Congress, agency decision memos, and the U.S. government’s voting record on World Bank Group proposals. We also interviewed U.S. government officials from EPA, USAID, and the Departments of Commerce, State, and Treasury. 
In addition, we interviewed relevant World Bank Group officials from the International Bank for Reconstruction and Development, the International Development Association, the International Finance Corporation (IFC), and the Multilateral Investment Guarantee Agency, as well as environmental experts from nongovernmental organizations and the private sector. To determine the number of World Bank Group projects that the U.S. Executive Director abstained from voting on due to the requirements of the Pelosi Amendment, we collected data from Treasury on the U.S. Executive Director’s voting record from January 2004 through May 2008. We also used these data to identify projects that Treasury determined may have significant environmental impacts but that the U.S. Executive Director supported based on Treasury’s determination that such impacts had been addressed and mitigated in the design of the project. To assess the reliability of Treasury’s data on the U.S. Executive Director’s voting record, we (1) interviewed the Treasury official responsible for managing the team of analysts who record data on the status of multilateral development bank projects; and (2) reviewed the voting record data. During the course of our review, we identified incomplete data fields and manual data entry errors, such as duplicate entries of the same project. However, based on our intended use of the data—to identify how the U.S. Executive Director voted on particular World Bank Group projects—and the results of our assessment, we determined that the data provided were sufficiently reliable for this purpose. Because Treasury does not maintain a database of the projects it reviews for compliance with the Pelosi Amendment, we requested that the agency create a list of projects receiving such a review for calendar years 2006 through 2008. 
The Treasury analyst conducting these reviews compiled for us an estimate of the projects she reviewed during this time frame. She stated that she compiled the list from a manual log she keeps along with archived emails. Although the analyst stated that the list was generally accurate, it is possible that a few projects may not have been captured. We determined, however, that the data provided were sufficiently reliable for the purposes of this report. To determine the total number of projects the World Bank Group identified as (1) having the potential for significant adverse impacts; (2) having the potential for limited adverse impacts; or (3) having funds channeled through a financial intermediary, we extracted data from the World Bank’s and IFC’s project Web sites from January 2004 through May 2008. Because the World Bank Group had not completed vetting its response to our request to conduct a review of World Bank Group environmental assessment policies and their implementation, it did not allow us to assess the reliability of World Bank and IFC data. Although we used these data to identify an approximate number of projects that the World Bank and IFC categorized as likely to have significant adverse environmental impacts, the reliability of those data is undetermined. Due to the nature of the Multilateral Investment Guarantee Agency’s (MIGA) business model (providing political risk insurance and project guarantees), it was not feasible for us to collect project-related data, since MIGA’s tracking and monitoring activities are different from those of the IFC or the World Bank. For example, MIGA does not categorize its support in terms of individual projects, but rather in terms of individual guarantees from distinct investors. Therefore, there may be more than one investor who has applied for and obtained MIGA insurance and, thus, more than one guarantee for a given project. 
We conducted this performance audit from October 2007 to November 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of the Treasury and GAO’s Evaluation The following are GAO’s comments on the U.S. Department of the Treasury’s letter dated November 7, 2008. GAO Comments 1. Treasury disputed our finding that its efforts have little impact. While Treasury officials did note that they communicate with World Bank Group officials informally about projects, Treasury did not provide us with information to gauge the impact of these communications. Furthermore, Treasury officials also told us they could not determine what impact these communications have on project design. 2. Treasury asserted that we characterized the interagency process as limited to weekly meetings. While we did not specifically state that the interagency process is limited to the weekly interagency working group meetings, we have added language to clarify the extent of interagency communication. Appendix III: Comments from the United States Agency for International Development and GAO’s Evaluation The following are GAO’s comments on the U.S. Agency for International Development’s letter dated November 13, 2008. GAO Comments 1. USAID commented that several sections of Title XIII are not discussed in the report, such as creating a system for information exchange with other interested member countries. This is outside the scope of our report, which focuses on U.S. government efforts to review the World Bank Group’s process for assessing the environmental impact of projects. 
We have added language to the report to clarify this point. 2. USAID commented that our draft report title was overly expansive. In response, we have changed the title to clarify that our report addresses certain procedures required by U.S. law. 3. USAID commented that the focus on a short timeline is misleading because it does not take into consideration the time period leading up to the release of the project appraisal document. However, Title XIII does not provide a timeline for when U.S. government agencies should begin reviewing World Bank Group or other multilateral development bank proposals. By not specifying a timeline, the legislation leaves it up to the agencies to determine when they should begin reviewing proposals. We do note that an April 2008 USAID report to Congress stated that there are inadequate opportunities to identify, avert, or mitigate adverse environmental and social impacts associated with projects even when the banks release the environmental documents 120 days before the board votes. USAID has acknowledged that it can only use the early, upstream approach to provide a more intensive look at a limited number of projects. 4. USAID commented that our report implies that Treasury has the expertise to review environmental impact assessments and determine if mitigation measures are appropriate. However, we do not comment on Treasury expertise. Rather, we acknowledge that USAID and Treasury review documentation for different purposes. We also state that Treasury is the lead agency in formulating the U.S. government’s position on proposed multilateral development projects, and that its decisions do not always reflect consensus among agencies or even among units of the same agencies. 
While the Pelosi Amendment (Section 1307) requires that environmental impact assessments contain associated and cumulative impacts and alternatives to the proposal in order for Treasury to support a project in a board vote, Treasury is not required to consider specific mitigating measures in determining how to instruct the U.S. Executive Director to vote. Section 1306, a separate provision of Title XIII, requires Treasury to instruct the U.S. Executive Director to vigorously urge the multilateral development banks to consider other environmental factors and to circulate to the bank board documents that include these factors, including mitigating measures. It is beyond the scope of this report to determine whether mitigation measures have been effective; we anticipate addressing this issue in our next report, which will focus on project implementation. 5. USAID strongly believes that the intent of Section 1307 of Title XIII is to be more than just a procedural process. However, the specific provisions of Section 1307 that require U.S. government oversight of the potential environmental and social concerns associated with proposed World Bank Group projects are primarily procedural in nature. We revised the report as appropriate in response to this comment. 6. USAID stated that Treasury has not consulted with other U.S. agencies to develop environmental impact review procedures for multilateral development bank projects. USAID believes that our report should include recommendations to harmonize disparate review standards. However, the law does not require agencies to harmonize review standards. Therefore, we did not address this issue in this report. 7. USAID commented on several factual errors in the report with respect to the Tuesday Group. We have revised the report to incorporate the first point regarding Tuesday Group co-chairs. 
Regarding the second point, we characterized the proposal, dating from mid-2008, as it was described to us by Treasury and the Bank Information Center. Moreover, in a technical comment regarding this proposal, Treasury did not dispute its timing. We have added a sentence to an existing footnote stating that this proposal is very similar to one described in section 5, annex A of USAID’s 2002-2004 report to Congress. 8. USAID commented that our recommendation may warrant further guidance to more clearly address the very short lead times for notice to other agencies. However, we did not change our recommendation, which was made to the Secretary of the Treasury. We believe it is up to Treasury to determine how best to implement the recommendation. Appendix IV: GAO Contact and Staff Acknowledgments Staff Acknowledgments Anthony Moran, Assistant Director; Kay Halpern; Chris Kunitz; RG Steinman; Christina Werth; and Linda Wong made key contributions to this report. In addition, Ashley Alley, Debbie Chung, Etana Finkler, and Joel Grossman provided technical or legal assistance.
The World Bank Group lends about $40 billion annually to developing countries. Critics have claimed that some projects have harmed the environment and the local population. Title XIII of the International Financial Institutions Act of 1977 outlines in part the U.S. government's requirements for reviewing potential environmental and social impacts of proposed multilateral development bank projects. GAO was asked to assess the U.S. government's international environmental oversight efforts by examining (1) how U.S. agencies implement legislative requirements to review the potential environmental concerns associated with proposed World Bank Group projects, and (2) agencies' ability to identify and address these concerns. GAO reviewed Title XIII, World Bank Group reports, and U.S. agency documents and met with representatives from U.S. government agencies, the World Bank Group, and nongovernmental organizations. U.S. agencies take various approaches to meet legal requirements for reviewing World Bank Group proposals likely to have significant adverse environmental impacts. The Treasury Department (Treasury), which leads these efforts, generally focuses on fulfilling the law's largely procedural requirements, such as ensuring that the project's environmental assessment is made publicly available by the project sponsor 120 days before it is voted on by the Group's board. The reviews usually occur from 1 to 3 weeks prior to such a vote. Treasury also engages in required consultations by leading a weekly interagency working group. Some participants stated that, because of limited time and the volume of proposals, they rely on Treasury to identify proposals of concern to facilitate the discussions. However, Treasury has not routinely done so. For a selected few projects, Treasury and the U.S. Agency for International Development analyze in more depth a proposal's potential environmental and social impacts. 
Both agencies learn about many such projects through regular interaction with nongovernmental organizations. Time constraints limit the U.S. government's ability to identify environmental and social concerns associated with World Bank Group projects before a vote on the proposal, and projects with potentially significant adverse impacts proceed with or without U.S. government support. The compressed review time frame makes it difficult for U.S. officials to examine proposal documentation and solicit information from knowledgeable parties. In addition, by the time of the vote, a project is often already in its final design stage or even under construction, which limits U.S. agencies' ability to identify ways to mitigate the concerns. Moreover, the board consistently approves proposals that lack U.S. support: between January 2004 and May 2008, the board approved all 34 of the proposals that the United States declined to support because they did not meet legislative requirements. Finally, the U.S. government occasionally supports proposals with significant environmental impacts because of competing priorities, including economic and other considerations.
Background Mission-critical skills gaps within specific federal agencies as well as across the federal workforce pose a high risk to the nation because they impede the government from cost-effectively serving the public and achieving results. We first designated strategic human capital management across the government as a high-risk issue in 2001 because of the federal government’s long-standing lack of a consistent approach to human capital management. In February 2011, we narrowed the focus of this high-risk issue to the need for agencies to close mission-critical skills gaps. At that time, we noted that agencies faced challenges effectively and efficiently meeting their missions across a number of areas, including acquisition management and foreign language capabilities. Trends in federal workforce retirement threaten to aggravate the problem of skills gaps. As Figure 1 shows, 30 percent of all career permanent employees who were on board as of September 30, 2013, will be eligible to retire by 2018. Moreover, some agencies, such as the Department of Housing and Urban Development and the Small Business Administration, will have particularly high retirement eligibility rates by 2018. Various factors affect when eligible individuals actually retire, which can be seen in the number of new retirement claims received by the Office of Personnel Management (OPM) each month. In October 2014, OPM received a surge of new retirement claims—10,155, the highest number since February 2014. However, in the intervening months, there was considerable fluctuation in new claims. Some amount of retirement and other forms of attrition can be beneficial because it creates opportunities to bring fresh skills on board and allows organizations to restructure themselves to better meet program goals and fiscal realities. 
Nevertheless, if turnover is not strategically managed and monitored and succession plans are not in place, gaps can develop in an agency’s institutional knowledge and leadership as experienced employees retire. In addition to a potential wave of employee retirements, other human capital-related risks are threatening the performance of federal agencies, including current budget and long-term fiscal pressures, declining levels of employee satisfaction, and the changing nature of federal work with an increasing number of positions requiring advanced degrees and other skills. Effectively addressing mission-critical skills gaps requires a multifaceted response from OPM and agencies. In February 2013, we noted that OPM and agencies would need to use a strategic approach that (1) involves top management, employees, and other stakeholders; (2) identifies the critical skills and competencies that will be needed to achieve current and future programmatic results; (3) develops strategies that are tailored to address skills gaps; (4) builds the internal capability needed to address administrative, training, and other requirements important to support workforce planning strategies; and (5) includes plans to monitor and evaluate progress toward closing skills gaps and meeting other human capital goals using a variety of appropriate metrics. We also noted that further progress in closing skills gaps will depend on, among other things, the extent to which OPM develops a predictive capacity to identify newly emerging skills gaps beyond those areas already identified. 
Lessons Learned From Initial Efforts to Close Mission-Critical Skills Gaps Could Strengthen Future Approaches Working Group’s Effort Yields Three Key Lessons for Identifying Skills Gaps Based on its deliberations from September 2011 through March 2012, the Chief Human Capital Officers (CHCO) Council Working Group (Working Group) identified skills gaps in six government-wide, mission-critical occupations (MCOs): cybersecurity, auditor, human resources specialist, contract specialist, economist, and the science, technology, engineering, and mathematics (STEM) family. It also identified mission-critical competencies, including data analysis, strategic thinking, influencing and negotiating, problem solving, and grants management competencies. As part of these deliberations, individual agencies identified agency-specific MCOs and were responsible for designing strategies to close skills gaps in those occupations. Although this effort was an important step forward, because of various methodological shortcomings, the Working Group did not address a more comprehensive list of mission-critical skills gaps. As we discuss later in this report, in 2015, OPM plans to identify a new set of MCOs. Going forward, it will be important for OPM and the CHCO Council to use lessons learned from its initial efforts to inform this next round of work. Specifically, the Working Group’s experience underscores the importance of (1) using a robust, data-driven approach to identify potential MCOs early in the process; (2) prioritizing occupations using criteria that consider programmatic impact; and (3) consulting with subject matter experts and other stakeholders prior to identifying MCOs. Since 2011, our work has identified skills gaps in nearly two dozen occupations across the government. In some cases, such as cybersecurity, the gaps we identified were affecting multiple agencies and were consistent with the Working Group’s findings. 
But our work also identified additional skills gaps, both government-wide and agency-specific, that were having a significant programmatic impact, such as: Oil and Gas Management. In January 2014, we found that hiring and retention challenges at the Department of the Interior (Interior) have resulted in fewer inspections of oil and gas facilities, which, according to officials, increases the risk of a spill or accident that could harm human health and safety. In 2012, Interior’s Bureau of Land Management had an attrition rate among petroleum engineers that, according to OPM data, was more than double the average federal attrition rate. Although Congress has provided Interior with the authority to establish higher minimum rates of basic pay for key inspection occupations, we noted in January 2014 that it was uncertain how Interior would address staffing shortfalls over time. Interior generally agreed with our recommendation that it should systematically collect data on hiring times for key oil and gas positions, ensure the accuracy of the data, analyze the data to identify the causes of delays, and expedite the hiring process. In response to our recommendation, Interior stated that its bureaus have begun a more systematic collection and analysis of hiring data to identify the causes of delays and to help expedite the hiring process. Telecommunications. In December 2013, we found that a decline in telecommunications expertise across multiple agencies compounded the General Services Administration’s (GSA) challenges in transitioning those agencies to a new network of telecommunications services, contributing to delays and cost overruns of 44 percent. Moreover, according to GSA, customer agencies are concerned that the shortage of telecommunications specialists will get worse because there are not enough to replace experienced workers nearing retirement. 
GSA has yet to fully study the issue of addressing mission-critical skills gaps and agreed that understanding expertise shortfalls would be useful for future transition planning purposes. Officials from GSA and OPM agreed with our recommendation on the need to better examine potential government-wide telecommunications expertise shortfalls and have agreed to coordinate on efforts to do so. While this recommendation was still open at the time of our review, GSA’s Office of Human Resources Management plans to take several actions, such as identifying and validating technical competencies, developing competency models, and performing a workforce assessment against the models. A Data-Driven Approach Could Help OPM and Agencies Identify a More Complete Set of Mission-Critical Occupations Working Group officials observed that a lesson learned from their MCO selection process was that they did not use workforce information and data analytics sufficiently early in the process. Instead, to identify an initial list of MCOs, the Working Group started with an environmental scan that consisted of our reports and academic studies. The Working Group used data analytics, such as the levels of attrition within an occupation, only after it had identified the initial set of MCOs. This approach merely supported selections of MCOs the Working Group had already made and did not subject the full range of federal occupations to the same analytical criteria. As part of developing our 2013 High-Risk report, we discussed this approach with the Working Group’s leadership, specifically questioning why the Working Group did not use workforce data as the starting point for its selection process. In a 2013 discussion of lessons learned, Working Group officials concluded that the MCO selection process would have benefitted from analyzing staffing gap data and associated trends prior to identifying an initial set of MCOs. 
These officials determined that using these workforce data would have given the Working Group a better sense of which occupations had the biggest skills gaps. Prioritization Criteria Should Consider Potential Programmatic Impact In February 2012, to help prioritize their efforts, Working Group officials limited the scope of their work by creating criteria requiring that at least half of the 24 CHCO Council agencies report a skills gap in an occupation and that those agencies collectively employ at least 95 percent of the occupation’s workforce. Working Group officials explained that limiting the number of MCOs using these criteria enabled them to focus on occupations found in most agencies. The officials also noted that it was important to establish a threshold for the number of agencies with a given skills gap so that the Working Group could focus resources on addressing skills gaps that had the greatest reach across the government. Officials added that skills gaps that existed at only a few agencies were the agencies’ responsibility to address. While we recognize the importance of prioritizing MCOs to a manageable number, the Working Group’s approach overlooked skills gaps that may not have met the criteria but still had the potential for significant programmatic impact. For example, in August 2014, we found that, in part because of staffing shortages, the Department of Justice’s Bureau of Prisons was not activating new prison facilities in a timely manner, thereby aggravating the problem of prison overcrowding. 
Additionally, we found in December 2013 that the Department of Transportation’s Federal Railroad Administration lacked a plan to have sufficient safety inspectors to carry out oversight of such initiatives as positive train control—a communications system designed to prevent events like train-to-train collisions. Indeed, Working Group officials later recognized that establishing such a strict standard for a government-wide MCO resulted in eliminating from consideration other important occupations, such as nurses and financial analysts. Because the Working Group’s selection criteria created such a strict and narrow standard for government-wide skills gaps, its efforts failed to capture the broader range of skills gaps that were affecting agencies’ abilities to meet their missions. As we note later in this report, OPM’s proposed methodology for selecting a new set of MCOs does not include such a strict definition for government-wide skills gaps. Early Stakeholder Input Can Help Improve the Identification of Skills Gaps Our principles for effective strategic workforce planning note that agencies should involve stakeholders in developing and implementing future workforce strategies. However, the Working Group did not thoroughly consult occupational experts from other interagency councils or organizations prior to identifying MCOs. Instead, Working Group officials only considered consulting relevant stakeholders to discuss strategies for addressing skills gaps at a March 2012 meeting—after the Working Group had already identified preliminary MCOs for selection. As a result, stakeholders could only provide advice on decisions already made rather than help with the initial screening of MCOs. 
As one example, members of the Council of Inspectors General on Integrity and Efficiency—an Executive Branch interagency council devoted, in part, to providing training for federal auditors—told us that they had not been consulted about the existence of skills gaps in the auditor occupation and were initially unsure why the auditor occupation was designated by the Working Group as an MCO for skills gap closure. A Working Group official indicated at a January 2012 meeting that the Working Group had sufficient subject matter experts internally, and that verification of MCO selection with outside experts could lead to competing lists of MCOs and could expend significant resources. As a result of this limited outreach, however, the Working Group missed an opportunity to leverage the expertise of key outside stakeholders to get a more complete assessment of the need to close skills gaps. As we describe later in this report, OPM has recognized this deficiency and is proposing a methodology for a new interagency working group that incorporates more input from subject matter experts when identifying emerging skills gaps. Efforts to Address Skills Gaps Are Underway with Mixed Results OPM Provided Resources and Visibility to Efforts to Close Skills Gaps OPM and the Working Group have made important progress in creating an infrastructure to address the six mission-critical skills gaps they identified. The GPRA Modernization Act of 2010 (GPRAMA) requires the Office of Management and Budget (OMB) to coordinate with agencies to develop cross-agency priority (CAP) goals to improve the performance and management of the federal government in several areas, including human capital management. The fiscal year 2013 budget—released in February 2012—designated OPM as leader of a two-year interim CAP goal to close skills gaps by 50 percent in three to five MCOs by September 30, 2013. 
As goal leader, the Director of OPM appointed key federal officials to serve as sub-goal leaders for each of the six MCOs identified by the Working Group. At the time of our review, the sub-goal co-leaders for the cybersecurity workforce, for example, were the Assistant Director for Cybersecurity at the White House Office of Science and Technology Policy and the Lead for the National Initiative for Cybersecurity Education within the Department of Commerce’s National Institute of Standards and Technology. Likewise, the sub-goal leader for the economist workforce is the Assistant Secretary for Economic Policy and Chief Economist at the U.S. Department of the Treasury. Working within their occupational communities, the sub-goal leaders selected specific strategies to decrease skills gaps in the occupations they represent. Multiple sub-goal leaders indicated that the quarterly meetings and the availability of OPM staff and their human resources expertise provided visibility and resources to sub-goal leaders’ efforts to close skills gaps in their MCOs. For example, Auditor sub-goal leaders told us that the OPM Director assisted in conducting a study of government-wide recruitment and hiring of auditors. In partnership with OPM and stakeholders from the federal audit community, Auditor sub-goal leaders found that the current qualifications for federal auditors do not align with the nature of federal audit work. To address this challenge, OPM is studying how to change the qualifications for federal auditor positions to improve agencies’ experiences with recruiting and hiring qualified candidates for that occupation. The meetings and performance reviews by OPM and sub-goal groups demonstrate high-level leadership commitment to address mission-critical skills gaps.
We have maintained that removing skills gaps as a high-risk issue across the government will depend in part on the extent to which OPM and agencies involve top management and include plans to monitor and evaluate progress toward closing skills gaps. The President’s fiscal year 2015 budget, released in March 2014, includes a four-year human capital management CAP goal to (1) create a culture of excellence and engagement to enable higher performance, (2) build a world-class federal management team starting with the Senior Executive Service, and (3) enable agencies to hire the best talent from all segments of society. While these CAP goal elements contain workforce planning strategies and metrics relevant to closing skills gaps, there are no overall performance targets for closing skills gaps, and closing skills gaps is no longer an explicit goal.

Throughout the time period when closing skills gaps was an explicit CAP goal, top officials at OPM held the sub-goal groups accountable for making progress by holding quarterly performance review meetings. OPM also used these meetings to discuss challenges the sub-goal leaders were facing and to field requests from sub-goal leaders for how OPM could better facilitate sub-goal groups’ efforts. Now that the fiscal year 2013 CAP goal has expired and been replaced with a fiscal year 2015 human capital management CAP goal that does not have explicit targets for closing skills gaps, OPM officials have told us that they intend to continue meeting with current sub-goal leaders through a community of practice. OPM has also worked with agencies to develop occupation-specific communities of practice within the science, technology, engineering, and mathematics (STEM) and Economist sub-goal groups.
In addition, OPM has indicated that it intends to hold quarterly performance review sessions with officials in charge of efforts to close skills gaps in emerging government-wide occupations in a similar format to what was done during the prior CAP goal. These are all important steps in the right direction and highlight OPM’s commitment to addressing skills gaps going forward. Still, the CAP goal to address skills gaps gave the entire effort government-wide focus and visibility, and provided OPM a mechanism to hold agencies accountable for results. It will be important for OPM to continue its leadership on this high-risk issue by holding occupational leaders accountable for implementing their strategies to close skills gaps and sustaining the visibility of the issue among agency officials across the government. Key focus areas include using (1) better defined, more measurable goals, (2) outcome-oriented performance metrics that align with activities to close skills gaps, and (3) key practices for project planning.

Use of Better Defined, Measurable Goals Enhances Monitoring on Closure of Skills Gaps

The CAP goal target to close skills gaps was vague and difficult to measure. The fiscal year 2013 human capital CAP goal’s target was to close skills gaps by 50 percent by the end of fiscal year 2013 in three-to-five of the six MCOs for which OPM appointed sub-goal leaders and groups of officials from across the government. Because the CAP goal target was difficult to measure, however, there is no clear basis for determining whether the CAP goal was met. Our prior work on performance goals has indicated that setting ambitious but realistic goals is one of the factors that can influence whether goal-setting and performance measurement efforts will be successful. However, sub-goal leaders and OPM officials leading the effort stated that the 50 percent skills gap closure target was difficult to measure.
Specifically:

An OPM official told us that the 50 percent target was not set appropriately and that, rather than have a single, overarching performance target, sub-goal groups needed individualized targets to allow for greater measurability.

Similarly, the STEM sub-goal co-leader told us that the CAP goal target was not designed in a way that could facilitate a sophisticated measurement of progress by the various sub-goal groups.

The Cybersecurity sub-goal leader indicated that the group’s efforts were at such an early stage of maturity that it could not gauge a 50 percent skills gap closure because it had not fully determined the nature of its skills gap. The Cybersecurity sub-goal group, therefore, did not have an effective baseline from which to measure 50 percent progress.

Align Performance Metrics with Targets and Activities for Closing Skills Gaps

Of the six sub-goal groups (Acquisition, Cybersecurity, Economist, Human Resources, STEM, and Auditor), three—the Cybersecurity, Economist, and Human Resources sub-goal groups—used performance metrics that either did not align with outcomes for closing skills gaps or did not represent their activities for closing skills gaps (the STEM and Auditor sub-goal groups did not establish metrics at all). Our prior work has outlined the importance of developing outcome-oriented performance metrics that clearly and sufficiently relate to the performance they are meant to assess. Our prior work has also discussed the importance of using performance metrics to link agencies’ goals and priorities with the actions that they are taking so that leaders can hold their organizations accountable for progress. For example, the Acquisition sub-goal group aligned its metrics with activities and overall targets. Sub-goal leaders indicated that they focused their efforts to close skills gaps on maintaining federal acquisition competencies.
To do so, the sub-goal group set a performance metric of increasing the certification rate among civilian contract specialists and produced guidance on revising agencies’ certification curricula. Sub-goal leaders indicated that increasing the certification rate among civilian contract specialists by 10 percentage points—from 75 percent to 85 percent—was adequate for the occupation’s long-term needs. To make 50 percent progress toward this target by the end of fiscal year 2013, they worked toward a metric of an 80 percent certification rate. The Acquisition sub-goal group’s efforts resulted in a 6 percentage point increase in the civilian contract specialist certification rate—to 81 percent—which surpassed its CAP goal metric and was aligned with such actions as producing guidance on reforming agencies’ certification curricula. In contrast, the Cybersecurity sub-goal group is an example of a sub-goal group that did not fully align its performance metric with the sub-goal group’s activities. For instance, officials began an initiative in 2013 to categorize the cybersecurity specialty areas for the various occupations involved in federal cybersecurity work. This initiative, which operated through the end of fiscal year 2014, will allow managers to better assess what cybersecurity work needs to be done at agencies, and where there are gaps preventing agencies from accomplishing their cybersecurity missions. The Cybersecurity sub-goal leader noted that using the updated classifications to identify skills gaps is necessary before those skills gaps can be addressed. Officials did not, however, include a publicly-reported metric in fiscal year 2013 to track how many of the cybersecurity-related classifications had been updated. Moreover, the Human Resources sub-goal group is an example of a sub-goal group that did not align its metric with the outcome of closing skills gaps.
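The Acquisition sub-goal group’s interim-target arithmetic described above can be sketched in a few lines. This is an illustrative calculation only; the function names are hypothetical and do not represent OPM or Acquisition sub-goal group tooling.

```python
# Illustrative sketch: interim targets for closing a performance gap.
# Figures (75% baseline, 85% long-term goal, 81% achieved) come from the
# Acquisition certification-rate example; the functions are hypothetical.

def interim_target(baseline, long_term_target, fraction):
    """Target reached after closing `fraction` of the baseline-to-goal gap."""
    return baseline + fraction * (long_term_target - baseline)

def gap_closed(baseline, long_term_target, actual):
    """Fraction of the baseline-to-goal gap actually closed."""
    return (actual - baseline) / (long_term_target - baseline)

# 50 percent progress toward the 85 percent goal implies an 80 percent
# interim certification-rate target.
print(interim_target(75, 85, 0.5))  # 80.0

# The 81 percent rate achieved by fiscal year 2013 closed 60 percent of
# the original gap, surpassing the 50 percent interim milestone.
print(gap_closed(75, 85, 81))
```

Framing the metric this way makes the milestone directly measurable: a group needs only a baseline, a long-term target, and a current value to state how much of its gap has closed.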
The Human Resources sub-goal group tracked the percentage of federal human resources personnel who registered for and completed a single course on HR University—a centralized online suite of courses and curricula managed by OPM that agencies can use for training purposes. OPM officials said that they tracked these indicators because they perceived the need to standardize human resources training curricula across agencies as a first step to closing the skills gap. OPM officials told us, however, that they are still striving to develop more outcome-oriented measures from the use of HR University. We agree that while ensuring that human resources professionals receive proper training is vital, relying on a metric of how many people register for and complete a single online course is not the most effective way to assess the outcome of closing skills gaps within the human resources occupation. OPM officials also noted that a 2014 data clean-up of HR University detected erroneous information that made the fiscal year 2013 registration and course completion rates unreliable as reported. OPM has continued to use the course registration performance metric as part of its fiscal year 2015 Agency Priority Goal to close skills gaps in the federal HR workforce. OPM officials told us that the data flaws on HR University have been corrected. However, because this metric does not measure an outcome, it does not provide quality information toward goals to close skills gaps.

Key Project Planning Practices Could Improve Efforts to Address Skills Gaps

Sub-goal groups’ planning documents for efforts to address skills gaps in fiscal year 2014 generally met three of six key practices for project planning. As we have noted, a well-developed and documented project plan can help ensure that agencies are able to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the project.
Project plans can also encourage agency managers and stakeholders to systematically consider what is to be done, when and how it will be done, and what skills will be needed. We identified the following six key practices from our prior work: (1) identifying root causes of issues, (2) establishing objectives, (3) developing specific actions, (4) assigning roles and responsibilities, (5) establishing the duration of actions, and (6) using outcome-oriented performance metrics. Table 1 lists each of the sub-goal groups and our evaluation of their planning documents against each of the key practices for project planning. For plans that included some information relevant to a key practice but did not provide sufficient detail or fully address the key practice, we gave partial credit.

Some key practices were used consistently by sub-goal groups. As shown above, sub-goal groups’ plans generally met some of our key practices. For instance, all of the sub-goal groups structured their plans by listing specific actions for addressing skills gaps. Our prior work states that it is important to identify the specific actions necessary to achieve a plan’s objectives. By doing so, managers can properly assess the risk of not achieving the plan’s objectives. For example, the plan for the Acquisition sub-goal group listed a clear division of actions—such as publishing a revised certification curriculum and increasing the government-wide use of an acquisition workforce management system—that addressed objectives that included strengthening civilian contracting certification standards.

Some key practices were generally absent from sub-goal group project planning. Sub-goal groups’ plans were less effective in meeting other key practices for project planning. For instance, only the STEM and Auditor sub-goal groups’ plans discussed the root causes of their skills gaps and the purpose for their actions.
Additionally, sub-goal groups did not consistently identify roles and responsibilities for actions to address skills gaps. While all sub-goal groups listed agencies that would be responsible for accomplishing actions, only the Auditor sub-goal group identified individual officials charged with specific actions. Three of six sub-goal groups did not always track outcome-oriented performance metrics in their plans. For instance, the STEM sub-goal group’s plan tracked such items as the number of STEM hiring reforms that had been approved by OPM. While gaining OPM approval of hiring-policy changes is important in attracting more qualified workforce candidates, the STEM sub-goal group’s plan did not track outcomes that might result from approving such policy changes, such as the number and quality of applicants and hires for STEM positions.

OPM’s Efforts to Predict Emerging Mission-Critical Skills Gaps Are in the Early Planning Stages

OPM Plans to Strengthen the Methodology Used to Identify Emerging Skills Gaps

The interagency working group that identified the list of skills gaps in six government-wide mission-critical occupations (MCO) was re-named the Federal Agency Skills Team (FAST). OPM has tasked the group—composed of agency officials with workforce planning and data analysis skills—with implementing a standard and repeatable methodology that identifies and addresses government-wide skills gaps as well as mission-critical competencies over a 4-year cycle. In the first year, OPM officials stated that FAST intends to meet regularly until it identifies a new set of government-wide skills gaps, which OPM officials expect will occur by June 2015. Our analysis of FAST’s proposed methodology identified three features that incorporate lessons learned from the Working Group’s initial efforts, which were described earlier. First, FAST is to use a data-driven approach as the initial step for identifying a broad list of skills gaps.
Specifically, officials from the 27 Chief Human Capital Officers (CHCO) Council agencies will be expected to compare their agency’s mission-critical or priority occupations against several factors. According to OPM officials, those factors include: 2-year retention rate, quit rate, retirement rate, and applicant quality. Second, in addition to being more data-driven, FAST will use a multi-faceted approach to identify a select number of government-wide MCOs. This is to be achieved by using teams of at least three to four individuals to monitor internal and external environments using various sources—including news articles, reports, and interviews with public and private sector subject matter experts—to detect trends that may influence an agency’s current and future workforce needs. Third, our analysis determined that the proposed methodology does not have a strict numerical standard for what constitutes a government-wide skills gap. This could allow for more discretion to address occupations with the potential for significant programmatic impacts even though an absolute majority of agencies may not have skills gaps in those areas. After this identification process, FAST is to develop strategies to address the new set of skills gaps during the remaining years of the 4-year cycle. In years two and three, OPM—in conjunction with FAST—is to designate leaders from within the selected government-wide occupations who will develop and implement plans to address those skills gaps. Finally, in year four, FAST plans to evaluate and monitor outcomes to determine the effectiveness of those strategies. In addition, during the fourth year FAST plans to incorporate lessons learned into a revised process for identifying skills gaps at the federal level.

OPM’s Plan to Capture Staffing Data Using Its Database Lacks a Schedule

OPM’s plan to capture staffing data for select occupations using a database is still under development.
Each year, OPM collects staffing data through a reporting process in which the CHCO Council agencies are required to provide OPM with information—such as the target number of employees and projected attrition—for five government-wide MCOs (cybersecurity, human resources specialist, economist, contract specialist, auditor) and three-to-five agency-specific or STEM MCOs. Among other things, OPM uses these data to determine if an agency’s workforce has a staffing gap in any of those occupations. In response to an administration initiative directed, in part, at reducing the human resources reporting burden on agencies, OPM is exploring how it can replace this annual reporting process by using its database containing federal workforce information—known as the Enterprise Human Resources Integration (EHRI) database—to capture the same agency staffing data. However, an OPM official stated that no timeframe currently exists for using EHRI to capture agencies’ MCO staffing data. In the interim, an OPM official stated that OPM will continue collecting MCO staffing data until it can make the necessary investments in EHRI. Our schedule assessment guide has noted that a well-planned schedule is a fundamental management tool that can (among other things) help government agencies specify when work will be performed in the future. Moreover, our schedule assessment guide states that a consistent methodology for developing, managing, and evaluating cost estimates for certain types of programs includes the concept of scheduling the necessary work to a timeline. By establishing a schedule specifying when EHRI will be modified to capture government-wide staffing data, OPM officials will have a road map for gauging progress, identifying and resolving potential problems, promoting accountability at all levels of the agency, and determining the amount and timing of the funding needed.
OPM Lacks a Process for Collecting Government-wide Competency Data

According to an OPM official, there is no process, using EHRI or another system, for collecting consistent data on the competencies of the federal workforce, which is needed to effectively predict future mission-critical skills gaps. The official also noted that federal agencies’ ability to assess workforce competencies varies, which makes collection of government-wide data on competency gaps difficult. As one example, we have found that the Census Bureau has started to assess the competencies needed to carry out its future work. In contrast, Department of Commerce (Commerce) human capital officials stated that they do not conduct a Department-wide competency assessment of Commerce’s workforce. Moreover, the Department of Energy (DOE) has conducted a competency assessment for a number of engineering occupations, such as nuclear engineers. DOE officials have also developed a model for conducting a Department-wide competency assessment. Key principles for effective workforce planning that we developed note that agencies must determine the competencies that are critical to successfully achieving their mission and goals. Doing so can help agencies to effectively meet demographic, technological, and other forces that are challenging government agencies to change the activities they perform, the goals that they must achieve, how they do their business, and even who does the government’s business. Therefore, as OPM develops its process for using EHRI to collect agencies’ skills gaps data, it will be important for OPM to also work with agency CHCOs to bolster the ability of agencies to assess workforce competencies by sharing competency surveys, lessons learned, and other tools and resources—and to ensure that such information can be stored in the EHRI database for government-wide workforce analysis.
OPM and Selected Agencies Could Improve Efforts to Address Skills Gaps by Strengthening Data-Driven Reviews

OPM Developed HRstat Data-Driven Reviews to Help Agencies Regularly Track Their Progress in Achieving Their Human Resources Goals

Data-driven reviews—commonly referred to as “stat” meetings—are regularly scheduled, structured meetings used by organizations to review performance metrics with department or program personnel to drive progress on agency priorities and goals. Conducting frequent stat meetings is a leadership strategy proven to help agency officials achieve results by focusing on an identified set of priorities, diagnosing problems, and deciding on next steps to increase performance. The GPRA Modernization Act of 2010 required agencies to conduct data-driven quarterly progress reviews with key personnel responsible for the accomplishment of agency priority goals. Building on this statutory model, OPM created HRstat, a CHCO-led review of the key metrics that contribute to agencies’ human resources goals, such as closing mission-critical skills gaps. OPM launched HRstat as a 3-year pilot program in May 2012, with an initial group of eight agencies that included Commerce, DOE, and the U.S. Agency for International Development (USAID)—the three agencies we selected for our illustrative case studies. In 2013 and 2014, OPM chose eight agencies per year for successive pilots. As a result, all 24 agencies subject to the Chief Financial Officers Act of 1990, as amended, had at least one year of HRstat implementation by the end of 2014. According to OPM guidance, agency Performance Improvement Officers and Chief Operating Officers should support HRstat reviews. The three agencies we selected all have closing critical skills gaps as one of their agency-wide human resources goals.
Selected Agencies Are Using Different Metrics to Track Skills Gaps under Their HRstat Reviews

Based on our assessment of the HRstat reviews of our selected agencies (using agency material from the third quarter of fiscal year 2014), we identified those metrics most relevant to tracking progress on closing skills gaps. As illustrated in table 2 below, we found considerable variation in the number and types of metrics agencies were using to track skills gaps. As shown above, only DOE was tracking retirement eligibility and projected retirements, which are key indicators of where agencies might be at risk for future skills gaps. Likewise, only Commerce was tracking candidate quality, which provides an indication of whether applicants had the skills needed to perform the work. While some amount of variation is both desirable and expected given the different missions of the agencies and the flexibility OPM gave agencies in selecting metrics, the variation shown above has at least two significant downsides. First, the variation in the number and types of metrics agencies are tracking suggests that some agencies’ HRstat reviews are more robust and well rounded than others in that they are measuring factors affecting skills gaps from a more complete perspective. Indeed, each metric may tell a different story about the extent and nature of current and emerging skills gaps. This could lead to different remedial actions on the part of agencies. A second downside of the variation in numbers and types of metrics is that it limits OPM and the CHCO Council’s ability to track agencies’ progress in closing skills gaps government-wide.
Our February 2013 report noted that a leading practice for successful data-driven reviews, such as HRstat, is to ensure alignment between goals, program activities, and resources. However, because our selected agencies are using different metrics, it is difficult for OPM to assess where agencies are making progress, where additional efforts are needed, and how OPM might be able to help them, if at all. It also limits the ability of agencies and OPM to discuss and share lessons learned in identifying and addressing skills gaps. OPM officials stated that they knew of the various metrics used by agencies during their HRstat reviews. They noted that the HRstat pilot program was intended to be “agency-centric” and not a data-collection tool for OPM. While we agree that agencies should continue to have flexibility in choosing and “owning” metrics for their HRstat reviews that best meet their particular needs, a core set of metrics—while still allowing agencies discretion to include metrics that meet their specific requirements—could help strengthen the quality and consistency of the HRstat reviews from a government-wide perspective. Moreover, a core set of metrics could foster collaboration between agencies’ use of HRstat reviews and the efforts envisioned under FAST. As we noted earlier, FAST is to use a data-driven approach as an initial step for identifying a range of skills gaps, such as a survey of hiring managers’ satisfaction with job applicants’ skills. As shown above, Commerce already tracks this same survey of hiring managers’ satisfaction during its HRstat reviews. A core set of metrics could integrate the work done by FAST with HRstat reviews. Going forward, OPM, in conjunction with the CHCO Council, plans to identify key strategic and operational HR metrics all agencies will collect through HRstat and share with OPM.
As part of that effort, it will be important for OPM and the CHCO Council to develop a core set of valid metrics that are directly aligned with the goal of identifying and addressing agency skills gaps.

Conclusions

Closing workforce skills gaps is critical for agencies to better achieve a wide range of missions, from purchasing mission-critical goods and services to carrying out the Decennial Census. While efforts to close mission-critical skills gaps are couched in discussions about staffing numbers, competencies, metrics, and similar technical terms, the ultimate goal is higher-performing, cost-effective government. However, the challenges that agencies face are diverse and were not fully captured by the CHCO Council Working Group’s first efforts to identify skills gaps in government-wide, mission-critical occupations. Although these initial efforts created an infrastructure for addressing skills gaps, to date, overall progress remains mixed. At times, goals have suffered from having targets that are difficult to measure. Likewise, agency officials have chosen to track metrics that often do not allow for an accurate assessment of progress made toward these goals for closing skills gaps. Building the predictive capacity to identify emerging mission-critical skills gaps is also critical to making further progress in addressing this issue. Realizing this, OPM has established an interagency working group known as FAST, which is responsible for identifying and addressing current and emerging skills gaps. OPM also intends to replace its annual reporting process for collecting agency staffing data by modifying its workforce database to capture the same data. These are important steps forward. However, we are concerned about these efforts for two reasons.
First, OPM has not established a time frame for modifying its workforce database to capture the same agency staffing data that it currently collects through an annual reporting process—a change that would reduce the human capital reporting burden on federal agencies. Second, OPM officials stated that there is no process for collecting data on the competencies of the federal workforce because agencies’ ability to assess workforce competencies varies. Helping agencies determine the competencies that are critical to successfully achieving their mission and goals will help them respond to external factors, such as changes in national security, technology, or budget constraints. At the agency level, the use of HRstat meetings is a proven leadership strategy that could help agency officials monitor their progress toward closing skills gaps. However, OPM should take a greater leadership role in helping agencies include a core set of metrics in their HRstat reviews so that OPM and agency leaders can have a clear view of progress made closing skills gaps. While it is important for agencies to have ownership over their HRstat reviews, OPM should also maximize its opportunity to use HRstat to gain greater visibility over the federal workforce.

Recommendations for Executive Action

To assist the interagency working group, known as FAST, to better identify government-wide skills gaps having programmatic impacts and measure its progress towards closing them, we recommend that the Director of OPM—in conjunction with the CHCO Council—strengthen its approach and methodology through the following actions:

Assist FAST in developing goals for closing skills gaps with targets that are both clear and measurable.

Work with FAST to design outcome-oriented performance metrics that align with overall targets for closing skills gaps and link to the activities for addressing skills gaps.

Incorporate greater input from subject matter experts, as planned.
Ensure FAST consistently follows key practices for project planning.

To ensure that OPM builds the predictive capacity to identify emerging skills gaps across the government—including the ability to collect and use reliable information on the competencies of the federal workforce for government-wide workforce analysis—we recommend that the Director of OPM take the following two actions:

Establish a schedule specifying when OPM will modify its EHRI database to capture staffing data that it currently collects from agencies through its annual workforce data reporting process.

Work with agency CHCOs to bolster the ability of agencies to assess workforce competencies by sharing competency surveys, lessons learned, and other tools and resources.

To help agencies and OPM better monitor progress toward closing skills gaps within agencies and government-wide, we recommend that the Director of OPM:

Work with the CHCO Council to develop a core set of metrics that all agencies should use as part of their HRstat data-driven reviews.

Coordinate with FAST personnel and explore the feasibility of collecting information needed by FAST as part of agencies’ HRstat reviews.

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to the Directors of OPM and OMB; the Secretaries of the Departments of Defense, Energy, and Treasury; the Administrators of the National Aeronautics and Space Administration (NASA) and USAID; the Chief Financial Officer and Assistant Secretary for Administration at the Department of Commerce; and the Special Assistant to the President and Associate Counsel to the President in the Executive Office of the President. The following agencies had no comments on the draft report: the Departments of Defense, Energy, and Treasury; NASA; OMB; USAID; and the Office of Science and Technology Policy in the Executive Office of the President; as well as Commerce’s Bureau of the Census and National Institute of Standards and Technology.
OPM and Commerce’s Office of Human Resources Management provided technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix II, OPM partially concurred with one recommendation, did not concur with one, and concurred with one. Specifically, OPM stated that it partially concurred with our recommendation to strengthen the approach and methodology used by the interagency working group, known as FAST, to better identify skills gaps. OPM noted it agreed with, and planned to implement, the principles of each recommended action. However, OPM said it needed to clarify how its terminology and planned process differ from the description in our recommendation. In particular, OPM stated its process will identify government-wide rather than agency-specific skills gaps, as it believes our draft recommendation suggests. We recognize that FAST was established to address government-wide skills gaps and have clarified the language in our recommendation accordingly. OPM stated that it does not concur with our recommendation to: (1) establish a schedule specifying when it will modify its EHRI database to automatically capture staffing data that it currently collects from agencies through its annual workforce data reporting process, and (2) work with agency CHCOs to bolster agencies’ ability to assess workforce competencies by sharing competency surveys, lessons learned, and other tools and resources. Regarding EHRI, OPM maintained that it is impossible for the EHRI database to automatically capture staffing data currently included in MCO Resource Charts because some of these data include specific agency projections and targets, which are provided via a manual data feed. OPM stated that it is assessing whether EHRI can be modified to allow agencies to supply these manual feed data into the database system.
We have modified our report to recognize that EHRI cannot automatically capture the same agency staffing data that are captured through the MCO Resource Charts. In addition, OPM noted that there are funding implications associated with its ability to anticipate whether and when a modification schedule to the EHRI online database could be established. While we appreciate OPM’s funding concerns, as we mention in the report, a well-planned schedule is a fundamental management tool that can (among other things) help government agencies specify when work will be performed in the future. Moreover, scheduling the necessary work to a timeline is important for developing, managing, and evaluating cost estimates for certain types of programs. As such, a schedule, as we recommend, would help OPM determine the amount and timing of the funding needed, and help OPM identify the competing priorities that need to be balanced due to resource constraints. We therefore continue to believe OPM would benefit by implementing our recommendation. Regarding workforce competencies, OPM noted that funding and resource constraints negatively affect its ability to support agencies’ efforts to address their workforce competencies. While funding limitations could affect OPM’s ability to take these actions, our recommendation would help OPM and agencies stretch resources by leveraging their knowledge and experience. We therefore continue to believe OPM would benefit by implementing our recommendation. OPM concurred with our recommendation to work with the CHCO Council to develop a core set of metrics that all agencies should use as part of their HRstat data-driven reviews, and explore the feasibility of collecting information needed by FAST as part of agencies’ HRstat reviews. We are sending copies of this report to the appropriate congressional committees, the Director of OPM, the Secretaries of Commerce and Energy, the Administrator of USAID, and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report assesses (1) lessons learned from initial efforts to close critical skills gaps and how they can inform future initiatives, (2) what progress the Office of Personnel Management (OPM) has made in building a predictive capacity to identify future mission-critical skills gaps, and (3) how OPM and agencies are using HRstat to identify and close skills gaps. To assess lessons learned from initial efforts to close critical skills gaps and how they can inform future initiatives, we took the following steps. We observed the Chief Human Capital Officer (CHCO) Council skills gaps working group’s (Working Group) meetings from July 2011 to August 2012 to understand the process it used to identify government-wide skills gaps in mission-critical occupations (MCO). We reviewed documentation, such as status reports, that described the Working Group’s activities. Furthermore, we reviewed our reports from October 2010 to May 2014 to identify skills gaps that we determined were causing programmatic impacts at agencies across the federal government. To identify our reports that identified skills gaps at agencies across the federal government, we searched through our Engagement Reporting System, our external database, and our Engagement Results Phase internal database. 
Reports were selected if they referenced skills gaps, or workforce conditions that could produce skills gaps (workforce shortages, competency gaps, training deficits), and those references were primary objective findings in reports issued since the start of fiscal year 2011, updates on skills gaps identified since the beginning of fiscal year 2011, or updates on skills gaps identified in our 2011 and 2013 High Risk Series reports. Reports were not selected if they found skills gaps at agencies prior to fiscal year 2011, or if they referenced skills gaps that were not part of the engagement’s findings. Private sector skills gaps were also excluded because the scope of this engagement was limited to OPM’s and agencies’ progress closing skills gaps across the federal government. We reviewed planning documents for addressing skills gaps in the six MCOs identified by the Working Group for such things as their consistency with key practices for project planning. We identified project planning key practices from our prior work assessing project planning, our guide for key practices in project schedules, and our criteria for removing issues from our high-risk list. In particular, we assessed whether each planning document included the following elements relevant to closing skills gaps: (1) identifying root causes, (2) establishing plan objectives, (3) developing specific actions needed to accomplish objectives, (4) assigning roles and responsibilities to all actions, (5) establishing the durations of actions, and (6) using outcome-oriented performance metrics. Two analysts independently assessed each planning document to determine the extent to which it met all six key practices, rated each plan using a three-level scale of yes, partially, or no, and reached 100 percent inter-rater agreement. 
For plans that included some information relevant to a key practice but did not provide sufficient detail or did not fully address the key practice, we gave partial credit. We reviewed documentation such as quarterly status updates to determine what progress had been made toward the interim cross-agency priority (CAP) goal for closing skills gaps. We also interviewed OPM and agency officials who are responsible for designing strategies to close skills gaps that OPM, in coordination with the Office of Management and Budget, designated as the focus of the CAP goal. To assess what progress OPM has made in building a predictive capacity to identify future government-wide mission-critical skills gaps, we reviewed documentation, such as OPM strategic plans and meeting minutes from the CHCO Council Executive Steering Committee, containing information about ongoing OPM initiatives that could build the predictive capacity within OPM to identify future mission-critical skills gaps. We also interviewed OPM officials who were implementing these initiatives to learn more about their status. To assess how OPM and agencies are using HRstat to identify and close skills gaps, we selected a nongeneralizable sample of three agencies—the Departments of Commerce and Energy, and the U.S. Agency for International Development—from among the 24 agencies that had at least begun implementing HRstat at the time of our review. We selected these agencies because they had one or more of the following: (1) multiple skills gaps that we and the Working Group identified; (2) skills gaps in any of the four occupations that we or the Working Group most frequently identified (STEM (science, technology, engineering, and mathematics), acquisition, human resources, and cybersecurity); and (3) large proportions of their workforces in each of those occupations. 
Also, the selected agencies were among the first to implement HRstat and therefore had the most experience with the HRstat process at the time of our review. To assess how our selected agencies were using HRstat to address skills gaps, we also reviewed memorandums, internal briefings, and other material that agencies used to prepare for the reviews, and interviewed officials involved in each agency’s HRstat review process. We conducted this performance audit from March 2014 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Office of Personnel Management Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Steven Lozano, Assistant Director; Don Kiggins, Analyst-in-Charge; Devin Braun; Deirdre Duffy; Karin Fangman; Donna Miller; and Rebecca Shea made major contributions to this report.
Mission-critical skills gaps both within federal agencies and across the federal workforce pose a high risk to the nation because they impede the government from cost-effectively serving the public and achieving results. GAO was asked to review the progress OPM has made in closing government-wide skills gaps and achieving its cross-agency priority goal, as well as additional steps needed to better identify and address skills gaps. This report assesses (1) lessons learned from initial efforts to close critical skills gaps and how they can inform future initiatives, (2) what progress OPM has made in building a predictive capacity to identify future mission-critical skills gaps, and (3) how OPM and agencies are using HRstat to identify and close skills gaps. To address these objectives, GAO reviewed documentation; interviewed OPM officials; and reviewed the implementation of HRstat meetings at Commerce, DOE, and USAID. Lessons learned from initial efforts to try to close skills gaps could strengthen future approaches. For example, the Chief Human Capital Officer (CHCO) Council Working Group (Working Group) identified skills gaps in six government-wide occupations, such as cybersecurity and auditors. Although this effort was an important step forward, GAO's work has identified skills gaps in nearly two dozen occupations with significant programmatic impact. In some cases, such as cybersecurity, the skills gaps GAO identified were consistent with the Working Group's findings. But GAO's work has also identified additional skills gaps. For example, a decline in telecommunication expertise at multiple agencies contributed to delays and cost overruns of 44 percent when those agencies were transitioning to a new network of telecommunications services. The Working Group did not address a more comprehensive list of skills gaps because of various methodological shortcomings, including insufficient analysis of workforce data early in the process. 
In 2015, the Office of Personnel Management (OPM) and the CHCO Council plan to identify and address a new set of government-wide skills gaps. It will be important that key lessons learned from the initial efforts to identify skills gaps inform this next round of work, including the need to (1) use a data-driven approach early in the process, (2) prioritize occupations using criteria that consider programmatic impact, and (3) consult with subject matter experts and other stakeholders prior to the identification of skills gaps in occupations. Key features of OPM's efforts to predict emerging skills gaps are in the early planning stages. GAO has previously reported that further progress in closing skills gaps will depend on, among other things, the extent to which OPM develops a capacity to predict emerging skills gaps beyond those areas already identified. A re-named interagency group, known as the Federal Agency Skills Team, plans to strengthen the methodology used to identify emerging skills gaps. Additionally, OPM officials are discussing plans to modify OPM's workforce database to capture government-wide staffing data. However, OPM will need to establish a schedule for modifying this database to ensure its implementation. OPM officials also stated that because agencies' capacity to assess workforce competencies varies, OPM does not have government-wide data on competency gaps, which is needed to identify emerging cross-agency skills gaps. In conjunction with agencies' CHCOs, OPM will need to strengthen agencies' ability to assess their competency needs that are critical to successfully achieving their mission and goals. OPM and selected agencies that GAO reviewed—the Departments of Commerce (Commerce) and Energy (DOE), and the U.S. Agency for International Development (USAID)—could improve efforts to address skills gaps by strengthening their use of quarterly data-driven reviews, known as HRstat meetings. 
Specifically, the metrics used by the selected agencies during their HRstat meetings vary from agency to agency, making it difficult for OPM to assess agencies' progress in closing skills gaps government-wide. Although it is important for agencies to have their own HRstat metrics, OPM should work with the CHCO Council to develop a core set of HRstat metrics that all agencies use so that OPM may have the ability to analyze skills gap data across the government.
Background Compact of Free Association: 1986-2003 In 1986, the United States, the FSM, and the RMI entered into the original Compact of Free Association. The compact provided a framework for the United States to work toward achieving its three main goals: (1) to secure self-government for the FSM and the RMI, (2) to ensure certain national security rights for all of the parties, and (3) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency. Under the original compact, the FSM and RMI also benefited from numerous U.S. federal programs, while citizens of both nations exercised their right under the compact to live and work in the United States as “nonimmigrants” and to stay for long periods of time. Although the first and second goals of the original compact were met, economic self-sufficiency was not achieved under the first compact. The FSM and the RMI became independent nations in 1978 and 1979, respectively, and the three countries established key defense rights, including securing U.S. access to military facilities on Kwajalein Atoll in the RMI through 2016. The compact’s third goal was to be accomplished primarily through U.S. direct financial assistance to the FSM and the RMI that totaled $2.1 billion from 1987 through 2003. However, estimated FSM and RMI per capita GDP levels at the close of the compact did not exceed, in real terms, those in the early 1990s, although U.S. assistance had maintained income levels that were higher than the two countries could have achieved without support. In addition, we found that the U.S., FSM, and RMI governments provided little accountability over compact expenditures and that many compact-funded projects experienced problems because of poor planning and management, inadequate construction and maintenance, or misuse of funds. 
Amended Compacts of Free Association: 2004-2023 In 2003, the United States approved separate amended compacts with the FSM and RMI that (1) continue the defense relationship, including a new agreement providing U.S. military access to Kwajalein Atoll in the RMI through 2086; (2) strengthen immigration provisions; and (3) provide an estimated $3.6 billion in financial assistance to both nations from 2004 through 2023, including about $1.5 billion to the RMI (see app. I). The amended compacts identify the additional 20 years of grant assistance as intended to assist the FSM and RMI governments in their efforts to promote the economic advancement and budgetary self-reliance of their people. Financial assistance is provided in the form of annual sector grants and contributions to each nation’s trust fund. The amended compacts and their subsidiary agreements, along with the countries’ development plans, target the grant assistance to six sectors—education, health, public infrastructure, the environment, public sector capacity building, and private sector development—prioritizing two sectors, education and health. To provide increasing U.S. contributions to the FSM’s and the RMI’s trust funds, grant funding decreases annually and will likely result in falling per capita grant assistance over the funding period and relative to the original compact (see app. II). For example, in 2004 U.S. dollar terms, FSM per capita grant assistance will fall from around $1,352 in 1987 to around $562 in 2023, and RMI per capita assistance will fall from around $1,170 in 1987 to around $317 in 2023. Under the amended compacts, annual grant assistance is to be made available in accordance with an implementation framework that has several components (see app. III). For example, prior to the annual awarding of compact funds, the countries must submit development plans that identify goals and performance objectives for each sector. 
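The falling per capita figures cited above follow mechanically from two opposing trends: grant amounts step down each year while population grows. The sketch below illustrates that arithmetic only; every figure in it (starting grant, decrement, population, and growth rate) is an assumption for illustration, not an actual compact amount.

```python
# Illustrative only: how an annual grant decrement plus population growth
# drives down real per capita assistance. All figures below are assumed
# placeholders, not compact or census values.

grant = 35.0e6        # assumed grant in 2004, constant 2004 USD
decrement = 0.5e6     # assumed annual step-down in the grant
population = 55_000   # assumed 2004 population
growth = 0.007        # assumed annual population growth rate

per_capita = {}
for year in range(2004, 2024):
    per_capita[year] = grant / population  # real assistance per person
    grant -= decrement                     # grant steps down each year
    population *= 1 + growth               # population grows each year
```

Under these assumptions per capita assistance declines every year, mirroring the direction, though not the magnitudes, of the figures cited in the report.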
The FSM and RMI governments are also required to monitor day-to-day operations of sector grants and activities, submit periodic financial and performance reports for the tracking of progress against goals and objectives, and ensure annual financial and compliance audits. In addition, the U.S. and FSM Joint Economic Management Committee (JEMCO) and the U.S. and RMI Joint Economic Management and Financial Accountability Committee (JEMFAC) are to approve annual sector grants and evaluate the countries’ management of the grants and their progress toward compact goals. The amended compacts also provide for the formation of FSM and RMI trust fund committees to, among other things, hire money managers, oversee the respective funds’ operation and investment, and provide annual reports on the effectiveness of the funds. Current Development Prospects Remain Limited for the RMI The RMI economy shows limited potential for developing sustainable income sources other than foreign assistance to offset the annual decline in U.S. compact grant assistance. In addition, the RMI has not enacted economic policy reforms needed to improve its growth prospects. The RMI’s economy shows continued dependence on government spending of foreign assistance and limited potential for expanded private sector and remittance income. Since 2000, the estimated public sector share of GDP has grown, with public sector expenditure in 2005—about two-thirds of which is funded by external grants—accounting for about 60 percent of GDP. The RMI’s government budget is characterized by limited tax revenue paired with growing government payrolls. For example, RMI taxes have consistently provided less than 30 percent of total government revenue; however, payroll expenditures have roughly doubled, from around $17 million in 2000 to around $30 million in 2005. The RMI development plan identifies fishing and tourism as key potential private sector growth industries. 
However, the two industries combined currently provide less than 5 percent of employment, and both industries face significant constraints to growth that stem from structural barriers and a costly business environment. According to economic experts, growth in these industries is limited by factors such as geographic isolation, lack of tourism infrastructure, inadequate interisland shipping, a limited pool of skilled labor, and a growing threat of overfishing. Although remittances from emigrants could provide increasing monetary support to the RMI, evidence suggests that RMI emigrants are currently limited in their income-earning opportunities abroad owing to inadequate education and vocational skills. For example, the 2003 U.S. census of RMI migrants in Hawaii, Guam, and the Commonwealth of the Northern Mariana Islands reveals that only 7 percent of those 25 years and older had a college degree and almost half of RMI emigrants lived below the poverty line. Although the RMI has undertaken efforts aimed at economic policy reform, it has made limited progress in implementing key tax, land, foreign investment, and public sector reforms that are needed to improve its growth prospects. For example: The RMI government and economic experts have recognized for several years that the RMI tax system is complex and regressive, taxing on a gross rather than net basis and having weak collection and administrative capacity. Although the RMI has focused on improving tax administration and has raised some penalties and tax levels, legislation for income tax reform has failed and needed changes in government import tax exemptions have not been addressed. In attempts to modernize a complex land tenure system, the RMI has established land registration offices. However, such offices have lacked a systematic method for registering parcels, instead waiting for landowners to voluntarily initiate the process. 
For example, only five parcels of land in the RMI had been, or were currently being, registered as of June 2006. Continued uncertainties over land ownership and land values create costly disputes, disincentives for investment, and problems regarding the use of land as an asset. Economic experts and private sector representatives describe the overall climate for foreign investment in the RMI as complex and nontransparent. Despite attempts to streamline the process, foreign investment regulations remain relatively burdensome, with reported administrative delays and difficulties in obtaining permits for foreign workers. The RMI government has endorsed public sector reform; however, efforts to reduce public sector employment have generally failed, and the government continues to conduct a wide array of commercial enterprises that require subsidies and compete with private enterprises. As of June 2006, the RMI had not prepared a comprehensive policy for public sector enterprise reform. Although the RMI development plan includes objectives for economic reform, until August 2006—2 years into the amended compact—JEMFAC did not address the country’s slow progress in implementing these reforms. The RMI Faces Challenges to Effectively Implementing Compact Assistance for Its Long-Term Development Goals The RMI has allocated funds to priority sectors, although several factors have hindered its use of the funds to meet long-term development needs. Further, despite actions taken to effectively implement compact grants, administrative challenges have limited its ability to ensure use of the grants for its long-term goals. In addition, although OIA has monitored early compact activities, it has also faced capacity constraints. The RMI allocated compact funds largely to priority sectors for 2004-2006. The RMI allocated about 33 percent, 40 percent, and 20 percent of funds to education, infrastructure, and health, respectively (see app. IV). 
The education allocation included funding for nine new school construction projects, initiated in October 2003 through July 2006. However, various factors, such as land use issues and inadequate needs assessments, have limited the government’s use of compact funds to meet long-term development needs. For example: Management and land use issues. The RMI government and Kwajalein landowners have been disputing the management of public entities and government use of leased land on the atoll. Such tensions have negatively affected the construction of schools and other community development initiatives. For example, the government and landowners disagreed about the management of the entity designated to use the compact funds set aside for Ebeye special needs; consequently, about $3.3 million of the $5.8 million allocated for this purpose had not been released for the community’s benefit until after September 2006. In addition, although the RMI has completed some infrastructure projects where land titles were clear and long-term leases were available, continuing uncertainty regarding land titles may delay future projects. Lack of planning for declining U.S. assistance. Despite the goal of budgetary self-reliance, the RMI lacks concrete plans for addressing the annual decrement in compact funding, which could limit its ability to sustain current levels of government services in the future. RMI officials told us that they can compensate for the decrement in various ways, such as through the yearly partial adjustment for inflation provided for in the amended compacts or through improved tax collection. However, the partial nature of the adjustment causes the value of the grant to fall in real terms, independent of the decrement, thereby reducing the government’s ability to pay over time for imports, such as energy, pharmaceutical products, and medical equipment. Additionally, the RMI’s slow progress in implementing tax reform will limit its ability to augment tax revenues. 
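The erosion described above, in which a partial inflation adjustment reduces the grant's real value independent of the annual decrement, can be shown with a short calculation. This is an illustrative sketch: the two-thirds adjustment factor reflects the amended compacts' partial-adjustment provision as commonly described, and the 3 percent inflation rate is an assumption.

```python
# Why a partial inflation adjustment erodes real value: the grant is
# indexed at only two-thirds of inflation while prices rise at the full
# rate. The inflation rate here is assumed for illustration.

base = 100.0          # index the initial grant at 100 (real terms)
inflation = 0.03      # assumed constant annual inflation
adjustment = 2 / 3    # partial inflation adjustment factor

nominal, price_level = base, 1.0
for _ in range(20):   # roughly the 2004-2023 funding window
    nominal *= 1 + adjustment * inflation  # grant grows by only 2/3 of inflation
    price_level *= 1 + inflation           # prices grow by full inflation

real_value = nominal / price_level         # about 82: an ~18 percent real decline
```

With these assumptions the grant's purchasing power falls by roughly 18 percent over 20 years before the annual decrement is even counted, which is why the report treats the partial adjustment as a separate source of fiscal pressure.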
The RMI has taken steps to effectively implement compact assistance, but administrative challenges have hindered its ability to ensure use of the funds for its long-term development goals. The RMI established development plans that include strategic goals and objectives for the sectors receiving compact funds. Further, in addition to establishing JEMFAC, the RMI designated the Ministry of Foreign Affairs as its official contact point for compact policy and grant implementation issues. However, data deficiencies, report shortcomings, capacity constraints, and inadequate communication have limited the RMI and U.S. governments’ ability to consistently ensure the effective use of grant funds, measure progress, and monitor day-to-day activities. Data deficiencies. Although the RMI established performance measurement indicators, a lack of complete and reliable data has prevented the use of these indicators to assess progress. For example, the RMI submitted data to JEMFAC for only 15 of the 20 required education performance indicators in 2005, repeating the submission in 2006 without updating the data. Also, in 2005, the RMI government reported difficulty in comparing the health ministry’s 2004 and 2005 performance owing to gaps in reported data—for instance, limited data were available in 2004 for the outer island health care system. Report shortcomings. The usefulness of the RMI’s quarterly performance reports has also been limited by incomplete and inaccurate information. For example, the RMI Ministry of Health’s 2005 fourth-quarter report contained incorrect outpatient numbers for the first three quarters, according to a hospital administrator. Additionally, we found several errors in basic statistics in the RMI quarterly reports for education. RMI Ministry of Education officials and officials in other sectors told us that they had not been given the opportunity to review the final performance reports compiled by the statistics office prior to submission. 
Capacity constraints. Staff and skill limitations have constrained the RMI’s ability to provide day-to-day monitoring of sector grant operations. However, the RMI has submitted its single audits on time. In addition, although the single audit reports for 2004 and 2005 indicated weaknesses in the RMI’s financial statements and compliance with requirements of major federal programs, the government has developed corrective action plans to address the 2005 findings related to such compliance. Lack of communication. Our interviews with U.S. and RMI department officials, private sector representatives, NGOs, and economic experts revealed a lack of communication and dissemination of information by the U.S. and RMI governments on issues such as JEMFAC decisions, departmental budgets, economic reforms, legislative decisions, and fiscal positions of public enterprises. Such lack of information about government activities creates uncertainty for public, private, and community leaders, which can inhibit grant performance and improvement of social and economic conditions. As administrator of the amended compact grants, OIA monitored sector grant and fiscal performance, assessed RMI compliance with compact conditions, and took action to correct persistent shortcomings. For example, since 2004, OIA has provided technical advice and assistance to help the RMI improve the quality of its financial statements and develop controls to resolve audit findings and prevent recurrences. However, OIA has been constrained in its oversight role owing to staffing challenges and time-consuming demands associated with early compact implementation challenges in the FSM. RMI Trust Fund May Not Provide Sustainable Income After Compact Grants End Market volatility and choice of investment strategy could lead to a wide range of RMI trust fund balances in 2023 and potentially prevent trust fund disbursements in some years. 
Although the RMI has supplemented its trust fund balance with additional contributions, other sources of income are uncertain or entail risks. Furthermore, the RMI’s trust fund committee has faced challenges in effectively managing the fund’s investment. Market volatility and investment strategy could have a considerable impact on projected trust fund balances in 2023. Our analysis indicates that, under various scenarios, the RMI’s trust fund could fall short of the maximum allowed disbursement level—an amount equal to the inflation-adjusted compact grants in 2023—after compact grants end, with the probability of shortfalls increasing over time (see app. V). For example, under a moderate investment strategy, the probability that the fund’s income falls short of the maximum distribution is only around 10 percent by 2031. However, this probability rises to almost 40 percent by 2050. Additionally, our analysis indicates a positive probability that the fund will yield no disbursement in some years; under a moderate investment strategy the probability is around 10 percent by 2050. Despite the impact of market volatility and investment strategy, the trust fund committee’s reports have not yet assessed the fund’s potential adequacy for meeting the RMI’s long-term economic goals. RMI trust fund income could be supplemented from several sources, although this potential is uncertain. For example, the RMI received a commitment from Taiwan to contribute $40 million over 20 years to the RMI trust fund, which improved the RMI fund’s likely capacity for disbursements after 2023. However, the RMI’s limited development prospects constrain its ability to raise tax revenues to supplement the fund’s income. Securitization—issuing bonds against future U.S. contributions—could increase the fund’s earning potential by raising its balances through bond sales. However, securitization could also lead to lower balances and reduced fund income if interest owed on the bonds exceeds investment returns. 
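The probabilistic statements above come from simulation analysis of fund balances under uncertain returns. The following is a minimal Monte Carlo sketch of that kind of analysis, not GAO's actual model: the annual contribution, target disbursement, and return and volatility figures are all hypothetical placeholders.

```python
# Monte Carlo sketch (assumed parameters, not GAO's model) of the chance
# that trust fund income falls short of the maximum allowed disbursement
# after compact grants end in 2023.
import random

random.seed(42)  # fixed seed for reproducible estimates

CONTRIBUTION = 18.0   # assumed annual contribution ($ millions), 2004-2023
TARGET = 60.0         # assumed maximum annual disbursement ($ millions), 2024 on

def prob_shortfall_by(horizon, mean_return=0.07, volatility=0.12, trials=5000):
    """Fraction of simulated paths with at least one year, on or before
    `horizon`, in which the fund cannot cover the target disbursement."""
    shortfalls = 0
    for _ in range(trials):
        balance, short = 0.0, False
        for year in range(2004, horizon + 1):
            balance *= 1 + random.gauss(mean_return, volatility)  # market return
            if year <= 2023:
                balance += CONTRIBUTION        # build-up phase
            else:
                draw = min(TARGET, balance)    # cannot draw more than the fund holds
                balance -= draw
                if draw < TARGET:
                    short = True               # disbursement shortfall this year
        shortfalls += short
    return shortfalls / trials

p_2031 = prob_shortfall_by(2031)
p_2050 = prob_shortfall_by(2050)
```

Because the event is cumulative (any shortfall year on or before the horizon counts), the estimated probability necessarily grows with the horizon, consistent with the report's finding that shortfall risk rises over time under a given investment strategy.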
The RMI trust fund committee has experienced management challenges in establishing the trust fund to maximize earnings. Contributions to the trust fund were initially placed in a low-interest savings account and were not invested until 16 months after the initial contribution. As of June 2007, the RMI trust fund committee had not appointed an independent auditor or a money manager to invest the fund according to the proposed investment strategy. U.S. government officials suggested that contractual delays and committee processes for reaching consensus and obtaining administrative support contributed to the time taken to establish and invest funds. As of May 2007, the committee had not yet taken steps to improve these processes. Conclusions Since enactment of the amended compacts, the U.S. and RMI governments have made efforts to meet new requirements for implementation, performance measurement, and oversight. However, the RMI faces significant challenges in working toward the compact goals of economic advancement and budgetary self-reliance as the compact grants decrease. Largely dependent on government spending of foreign aid, the RMI has limited potential for private sector growth, and its government has made little progress in implementing reforms needed to increase investment opportunities and tax income. In addition, JEMFAC did not address the pace of reform during the first 2 years of compact implementation. Further, both the U.S. and RMI governments have faced significant capacity constraints in ensuring effective implementation of grant funding. The RMI government and JEMFAC have also shown limited commitment to strategically planning for the long-term, effective use of grant assistance or for the budgetary pressure the government will face as compact grants decline. Because the trust fund’s earnings are intended as a main source of U.S. 
assistance to the RMI after compact grants end, the fund’s potential inadequacy to provide sustainable income in some years could impact the RMI’s ability to provide government services. However, the RMI trust fund committee has not assessed the potential status of the fund as an ongoing source of revenue after compact grants end in 2023. Prior Recommendations Our prior reports on the amended compacts include recommendations that the Secretary of the Interior direct the Deputy Assistant Secretary for Insular Affairs, as chair of the RMI management and trust fund committees, to, among other things, ensure that JEMFAC address the lack of RMI progress in implementing reforms to increase investment and tax income; coordinate with other U.S. agencies on JEMFAC to work with the RMI to establish plans to minimize the impact of declining assistance; coordinate with other U.S. agencies on JEMFAC to work with the RMI to fully develop a reliable mechanism for measuring progress toward compact goals; and ensure the RMI trust fund committee’s assessment and timely reporting of the fund’s likely status as a source of revenue after 2023. Interior generally concurred with our recommendations and has taken actions in response to several of them. For example, in August 2006, JEMFAC discussed the RMI’s slow progress in implementing economic reforms. Additionally, the trust fund committee decided in June 2007 to create a position for handling the administrative duties of the fund. Regarding planning for declining assistance and measuring progress toward compact goals, JEMFAC has not held an annual meeting since the December 2006 publication of the report containing those recommendations. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. Contacts and Acknowledgements For future contacts regarding this testimony, please call David Gootnick at (202) 512-3149 or gootnickd@gao.gov.
Individuals making key contributions to this testimony included Emil Friberg, Jr., Ming Chen, Tracy Guerrero, Julie Hirshen, Leslie Holen, Reid Lowe, Mary Moutsos, Kendall Schaefer, and Eddie Uyekawa. Appendix I: U.S. Assistance to Be Provided to the FSM and the RMI under the Amended Compacts, 2004 through 2023 [Table: annual FSM and RMI grant and trust fund contribution amounts under compact sections 211, 212, 215, and 216.] For both the FSM and the RMI, annual grant amounts include $200,000 to be provided directly by the Secretary of the Interior to the Department of Homeland Security, Federal Emergency Management Agency, for disaster and emergency assistance purposes. The grant amounts do not include the annual audit grant, capped at $500,000, that will be provided to both countries. These dollar amounts shall be adjusted each fiscal year for inflation by the percentage that equals two-thirds of the percentage change in the U.S. gross domestic product implicit price deflator, or 5 percent, whichever is less in any one year, using the beginning of 2004 as a base. Grant funding can be fully adjusted for inflation after 2014, under certain U.S. inflation conditions. Appendix II: Estimated FSM and RMI per Capita Compact Grant Assistance for Fiscal Years 1987–2023 Appendix III: Amended Compact Implementation Framework [Figure: annual sector grant budget process. The FSM and RMI propose grant budgets for each sector that include provisions used to monitor goals; expenditures, performance goals, and specific performance indicators; breakdowns of personnel expenditures and other costs; and information on U.S. federal programs and other donors. The United States evaluates each proposed sector grant budget for consistency with funding requirements in the compact and related agreements, identifies positive events that accelerate performance outcomes as well as problems encountered and their impact on grant activities and performance measures, and monitors operations to ensure compliance with grant conditions. Annual reports are submitted to the U.S. joint management and accountability committees.]
Appendix IV: RMI Sector Grant Allocation, 2004 through 2006 Appendix V: Probability of RMI Trust Fund Income Not Reaching the Maximum Disbursement Levels Allowed Market volatility and choice of investment strategy could result in the RMI trust fund’s inability to disburse the maximum level of income allowed in the trust fund agreement, or any income, in some years. Trust fund income levels will depend on the investment strategy chosen, with a more conservative strategy carrying a lower level of market volatility and a lower level of expected returns over time than an aggressive investment strategy. Figure 1 illustrates projected RMI trust fund balances under the conservative, moderate, and aggressive investment strategies that we projected. As shown in figure 2, under all three strategies, the RMI trust fund’s annual income will likely not reach the maximum disbursement allowed, with the probability of shortfall increasing with time. For example, our analysis of the moderate investment strategy shows a probability of about 10 percent that the RMI trust fund’s income will not reach the maximum allowed disbursement after 2031, with the probability rising to around 40 percent by 2050.
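The kind of shortfall analysis described in Appendix V can be approximated with a simple Monte Carlo simulation. The sketch below is illustrative only: the function name and every numeric assumption (annual returns, volatilities, contribution schedule, and the maximum disbursement level) are placeholders rather than figures from GAO's model, so the probabilities it produces will not match the report's.

```python
import random

def shortfall_probabilities(mean_return, volatility, n_trials=5000, seed=0):
    """Estimate, for each year after compact grants end, the probability
    that trust fund income falls short of the maximum allowed
    disbursement (an amount standing in for the inflation-adjusted
    2023 grant level). All parameters are assumed, not GAO's."""
    rng = random.Random(seed)
    accumulating_years = 20      # 2004-2023: contributions only, no withdrawals
    drawing_years = 27           # 2024-2050: disbursements allowed
    contribution = 30.0          # $ millions per year, assumed
    max_disbursement = 60.0      # $ millions, assumed 2023 grant level
    shortfalls = [0] * drawing_years
    for _ in range(n_trials):
        balance = 0.0
        for _ in range(accumulating_years):
            balance = (balance + contribution) * (1 + rng.gauss(mean_return, volatility))
        for year in range(drawing_years):
            income = balance * rng.gauss(mean_return, volatility)
            # Only income may be disbursed; the corpus is preserved.
            disbursement = min(max(income, 0.0), max_disbursement)
            if disbursement < max_disbursement:
                shortfalls[year] += 1
            balance += income - disbursement
    return [count / n_trials for count in shortfalls]

# Compare three assumed strategies: lower volatility buys lower expected return.
for label, mu, sigma in [("conservative", 0.05, 0.05),
                         ("moderate", 0.07, 0.12),
                         ("aggressive", 0.09, 0.18)]:
    probs = shortfall_probabilities(mu, sigma, n_trials=2000)
    print(f"{label}: shortfall probability in final year = {probs[-1]:.2f}")
```

The simulation reproduces the qualitative pattern in figure 2: shortfall probability generally grows over the drawdown period, and the strategy choice trades expected income against year-to-year variability.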
From 1987 through 2003, the United States provided more than $2 billion in economic assistance to the Federated States of Micronesia (FSM) and the RMI under a Compact of Free Association; approximately $579 million of this economic assistance went to the RMI. In 2003, the U.S. government approved an amended compact with the RMI that provides an additional 20 years of assistance, totaling about $1.5 billion from 2004 through 2023. The Department of the Interior's Office of Insular Affairs (OIA) is responsible for administering and monitoring this U.S. assistance. The amended compact with the RMI identifies the additional 20 years of grant assistance as intended to assist the RMI government in its efforts to promote the economic advancement and budgetary self-reliance of its people. The assistance is provided in the form of annually decreasing grants that prioritize health and education, paired with annually increasing contributions to trust funds intended as a source of revenue for the country after the grants end in 2023. The amended compact also contains several new funding and accountability provisions that strengthen reporting and bilateral interaction. These provisions include requiring the establishment of a joint economic management committee and a trust fund committee to, respectively, among other things, review the RMI's progress toward compact objectives and to assess the trust fund's effectiveness in contributing to the country's long-term economic advancement and budgetary self-reliance. In 2003, we testified that these provisions could improve accountability over assistance but that successful implementation would require appropriate resources and sustained commitment from both the United States and the RMI. Drawing on several reports that we have published since 2005, I will discuss the RMI's economic prospects, implementation of its amended compact to meet long-term goals, and potential trust fund earnings.
The RMI has limited prospects for achieving its long-term development objectives and has not enacted policy reforms needed to enable economic growth. The RMI depends on public sector spending of foreign assistance rather than on private sector or remittance income; public sector expenditure accounts for more than half of its gross domestic product (GDP). The RMI government budget largely depends on foreign assistance and, despite annual decrements in compact funding to support budgetary expenditures, is characterized by a growing wage bill. Meanwhile, the two private sector industries identified as having growth potential--fisheries and tourism--face significant barriers to expansion because of the RMI's remote geographic locations, inadequate infrastructure, and poor business environment. In addition, RMI emigrants lack marketable skills that are needed to increase revenue from remittances. Moreover, progress in implementing key policy reforms necessary to improve the private sector environment has been slow. Foreign investment regulations remain burdensome, and RMI government involvement in commercial activities continues to hinder private sector development. The RMI has made progress in implementing compact assistance, but it faces several challenges in allocating and using this assistance to support its long-term development goals. RMI grant allocations have reflected compact priorities by targeting health, education, and infrastructure--for example, funding construction of nine new schools. However, the RMI has not planned for the long-term sustainability of services in a way that takes into account the annual funding decrement. Capacity limitations have further affected its ability to ensure the effective use of grant funds. The RMI currently lacks the capacity to adequately measure progress, owing to inadequate baseline data and incomplete performance reports.
Moreover, although accountability--as measured by timeliness in single audit reporting and corrective action plans to single audit findings--has improved, insufficient staff and skills have limited the RMI's ability to monitor day-to-day sector grant operations as the compacts require. Inadequate communication about grant implementation may further hinder the U.S. and RMI governments from ensuring the grants' effective use. The RMI trust fund may not provide sustainable income for the country after compact grants end, potential sources for supplementing trust fund income have limitations, and the trust fund committee has experienced management challenges. Market volatility and the choice of investment strategy could cause the RMI trust fund balance to vary widely, and there is an increasing probability that in some years the trust fund will not reach the maximum disbursement level allowed--an amount equal to the inflation-adjusted compact grants in 2023--or be able to disburse any income. The trust fund committee's reporting has not analyzed the fund's potential effectiveness in helping the RMI achieve its long-term economic goals. Although the RMI has supplemented its trust fund income with a contribution from Taiwan, other sources of income are uncertain or entail risk. As of June 2007, for example, the RMI trust fund committee had not appointed an independent auditor or a money manager to invest the fund according to the proposed investment strategy.
Background PRWORA made sweeping changes to national welfare policy, creating TANF and ending the federal entitlement to assistance for eligible needy families with children under Aid to Families With Dependent Children (AFDC). The Department of Health and Human Services (HHS) administers the TANF block grant program, which provides states with up to $16.5 billion each year through fiscal year 2002. TANF was designed to help needy families reduce their dependence on welfare and move toward economic independence. The law also greatly increased the discretion states have in the design and operation of their welfare programs, allowing states to determine forms of aid and the categories of families eligible for aid. TANF establishes time limits and work requirements for adults receiving aid and requires states to sustain 75 to 80 percent of their historic level of welfare spending through a maintenance-of-effort requirement. In addition, TANF gives states funding flexibility, which allows states to exclude some families from federal time limits and work requirements. TANF Establishes Time Limits and Work Requirements for Adults Receiving Aid TANF establishes a 60-month time limit for families receiving aid. States have the option of establishing shorter time limits for families in their state. A state that does not comply with the TANF time limit can be penalized by a 5 percent reduction in its block grant. While the intent of TANF is to provide temporary, time-limited aid, federal time limits do not apply to all forms of aid or to all families receiving aid. First, states are only to count toward the 60-month time limit any month in which an individual receives a service or benefit considered “assistance,” which is defined in the TANF regulations as cash or other forms of benefits designed to meet a family’s ongoing basic needs. Second, time limits do not apply to the following types of cases: 1. 
Cases in which the adult in the household does not receive cash assistance, typically called “child-only” cases. 2. Families that received assistance while living in Indian country or a Native Alaskan village where 50 percent of the adults are not employed. Third, all states have the option to use federal funds to extend assistance beyond the federal 60-month limit for reasons of hardship, as defined by the state. States can extend assistance for up to 20 percent of the average monthly number of families receiving assistance (“20 percent extension”). States can also extend assistance for victims of domestic violence through federally approved domestic violence waivers. Finally, assistance that is provided solely through state MOE is not subject to the federal time limit. TANF also establishes work requirements for adults receiving aid. After 2 years of assistance, or sooner if the state determines the recipient is ready, TANF adults are generally required to be engaged in work as defined by the state. In addition, TANF establishes required work participation rates—a steadily rising specified minimum percentage of adult recipients that must participate in federally specified work or work-related activities each year. States were required in federal fiscal year 2002 to meet a work participation rate of 50 percent for all TANF families with adult members—referred to as the rate for all families. States were also required to meet a much higher rate—90 percent—for two-parent families. States must meet these work participation rates to avoid financial penalties. While states have generally met the work participation rate for all families, many states have faced financial penalties due to failure to meet the two-parent required rate in recent years. HHS issued penalty notices to 19 states in fiscal year 1997, 14 in fiscal year 1998, 9 in fiscal year 1999, and to 7 states in fiscal year 2000.
In addition to establishing federal participation rate requirements, PRWORA specified that the required rates are to be reduced if a state’s TANF caseload declines. States are allowed caseload reduction credits, which reduce each state’s work participation requirement by one percentage point for each percentage point by which its average monthly caseload falls below its fiscal year 1995 level (for reasons other than eligibility changes). In addition, federal time limits and work requirements may not apply in some states that were granted federal waivers to AFDC program rules in order to conduct demonstration programs to test state reforms. States May Choose Various State Funding Options for Providing Cash Assistance Previously, under AFDC, state funds accounted for 46 percent of total federal and state expenditures. Under PRWORA, the law requires states to sustain 75 to 80 percent of their historic level of spending on welfare through a maintenance-of-effort requirement to receive their federal TANF block grant. The federal TANF funds and state MOE funds can be considered more like funding streams than a single program, and states may use their MOE to assist needy families in state programs other than their TANF programs. In fact, states have flexibility to expend their MOE funds for cash assistance in up to three different ways, some of which allow states to exclude some families from time limits and work requirements. A state may use its state MOE funds in three different ways to provide cash assistance for needy families. Commingling: A state can provide TANF cash assistance by commingling its state MOE with federal funds within its TANF program. Segregating: A state can provide some TANF cash assistance with state MOE accounted for separately from its federal funds within its TANF program.
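The caseload reduction credit described above is a simple calculation: the required participation rate drops one percentage point for each percentage point of caseload decline since fiscal year 1995. The sketch below is illustrative (the function name is mine, and it omits the statutory adjustment for declines caused by eligibility changes):

```python
def effective_participation_rate(statutory_rate, caseload_1995, caseload_now):
    """Apply the PRWORA caseload reduction credit: subtract one percentage
    point from the required work participation rate for each percentage
    point the average monthly caseload has fallen below its FY1995 level.
    Simplified sketch: declines due to eligibility changes are ignored."""
    decline_pct = max(0.0, (caseload_1995 - caseload_now) / caseload_1995 * 100)
    return max(0.0, statutory_rate - decline_pct)

# The roughly 52 percent national caseload decline (4.4 million families in
# August 1996 to 2.1 million in September 2001) more than offsets the
# 50 percent all-families rate, driving the effective target to zero.
print(effective_participation_rate(50, 4.4e6, 2.1e6))  # prints 0.0
```

This arithmetic is why table 2 shows an effective rate of zero for so many states: any state whose caseload fell by more than 50 percentage points faced no binding all-families requirement.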
Separating: A state can use its state MOE to provide cash assistance to needy families in any one or more non-TANF state programs, referred to as “separate state programs.” Each state may choose one or more of these options to provide cash assistance. In some cases, in this testimony, we refer to the second and third options as using “state-only” funds when the distinction between segregating and separating funds is not necessary. In addition, we focus only on cash assistance and not on other forms of aid or services, including, for example, child care and transportation, for which time limits and work requirements generally do not apply. How a state structures its funds determines which TANF rules apply to the needy families being served. (See table 1.) When a state commingles funds, it must meet all TANF requirements. For example, states that commingle all their state MOE with federal funds are only able to exclude families from time limits through the 20 percent extension, cannot exclude families from counting towards the federal work participation rate, and cannot provide assistance to certain groups of legal immigrants. States may exclude families from time limits by funding their cash assistance with state MOE, either through “segregated funds” or in any non-TANF state programs. More specifically, any month of cash assistance funded solely by state MOE funds does not count toward the federal 60-month limit and may be provided to families who have reached their federal time limit. States may exclude families from federal time limits in two ways. Stop the clock. States can “stop the clock” so that a family’s cash assistance does not count towards the federal time limit. This is accomplished by funding any month of cash assistance with state-only funds rather than with federal or commingled federal and state dollars.
For example, if a state provides monthly cash assistance to working families with state-only funds, those months of assistance do not count toward the federal time limit. Extend the time limit. States can provide cash assistance beyond the 60-month time limit by using state-only funds. A state may extend a family’s time limit because it has determined that the adult needs more time to prepare for and find employment. Finally, while not required by federal law, states may choose to apply time limits on their state-funded assistance. In this case, states may also decide to stop the clock or extend time limits for certain families. In addition, families provided cash assistance funded by state MOE through non-TANF state programs are not subject to federal work requirements, though states may choose to impose their own work requirements on these families. One-Third of Families Receiving Cash Assistance Are Child-Only Cases Not Subject to Federal Work Requirements or Time Limits States reported that in the fall of 2001, 2.1 million families received cash assistance funded with federal TANF or state MOE dollars, with about 700,000, or one-third, of these families composed of children only. Generally, child-only cases are not subject to work requirements or time limits. The most common types of child-only cases were families in which the caregiver is a nonparent, such as a relative, often a grandparent (40 percent); parent is receiving Social Security or Supplemental Security Income and not eligible for TANF (25 percent); parent is a noncitizen ineligible for federally funded TANF (23 percent); and parent is subject to sanctions (7 percent). (See figure 1.) The breakdown of child-only cases varied significantly across states, however.
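Under the funding rules described earlier, a month counts toward the federal 60-month clock only if the aid qualifies as "assistance" and is paid with federal or commingled dollars; months funded solely with state MOE stop the clock. A minimal sketch of that accounting (the function and the funding-source labels are illustrative, not terms from the TANF regulations):

```python
FEDERAL_LIMIT_MONTHS = 60

def months_on_federal_clock(history):
    """Count months toward the federal 60-month limit. A month counts only
    if the aid is 'assistance' (benefits meeting ongoing basic needs) AND
    is funded with federal or commingled federal/state dollars. Months paid
    solely from segregated state MOE or separate state programs stop the
    clock. `history` is a list of (funding_source, is_assistance) tuples."""
    countable_sources = {"federal", "commingled"}
    return sum(1 for source, is_assistance in history
               if is_assistance and source in countable_sources)

# A family aided for 70 months, the last 12 paid with state-only funds,
# has accrued only 58 countable months and remains under the federal limit.
history = [("commingled", True)] * 58 + [("state_only", True)] * 12
assert months_on_federal_clock(history) == 58
assert months_on_federal_clock(history) < FEDERAL_LIMIT_MONTHS
```

The same accounting explains the extension option: a state that switches a family to state-only funding after month 60 can keep providing aid without violating the federal limit.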
For example, child-only cases in which the parent is an ineligible noncitizen ranged from 0 percent in ten states to 39 percent in California and 77 percent in Texas; this variation is likely due to the variation in immigrant populations across the states. (For more information on each state’s child-only caseload, see Appendix I.) States Use Flexibility under PRWORA to Exempt Some Families from Federal Work Requirements Reduced federal participation targets—due to declining caseloads and the caseload reduction credit—and states’ use of their MOE funds in non-TANF programs give states considerable flexibility in implementing work requirements. (For more information on how states use their MOE funds, see Appendix II). Since the implementation of welfare reform, states have experienced strong economic growth and welfare caseloads have declined dramatically, from 4.4 million in August 1996 to 2.1 million as of September 2001, marking a 52 percent decline in the number of families receiving cash welfare. The work participation target rate for every state in fiscal year 2002 is 50 percent for all families. However, once the caseload reduction credit is taken into account, the target rates can be greatly reduced. For example, as shown in table 2, the adjusted target rate for all families reported by HHS for fiscal year 2000 was zero in 31 states and less than 25 percent in all but two states. As a result, states have had increased flexibility in determining the numbers of adults that are to be working or preparing for work and the types of activities required. For states to count families’ activities towards the work participation rate, families have to be participating in federally approved work activities.
In a previous report, we found that some states included recipients in a range of work and work-preparation activities that extend beyond those that meet federal work participation requirements, particularly to meet the needs of recipients considered hard to employ. Officials in one state told us that because the work participation rates are so low due to caseload reduction credits, states have more flexibility in the types of activities or services provided, for example, substance abuse treatment or mental health services, without fear of not meeting their federal work participation rates. In other cases, the lower target rates give states more flexibility in exempting TANF recipients considered hard to employ from meeting work requirements, as we found in our report on TANF recipients with mental and physical impairments. In addition to the flexibility provided by reduced federal target rates, many states have increased work requirement flexibility by using state MOE funds to provide cash assistance through non-TANF programs, as allowed by PRWORA. Twenty-six states use state MOE funds to provide cash assistance through separate state programs, which allows states to exclude families from federal work requirements and to serve certain immigrants ineligible for federal TANF. Sixteen of these states provide cash assistance to two-parent families through these programs. Several state officials told us they provide aid in this way to avoid the risk of financial penalties for failing to meet the federal two-parent work participation rate. State officials told us that two-parent families often have as many or more challenges as single parents, making the higher target rate for two-parent families difficult to meet. While states expressed concern about failing to meet the federal target rate for two-parent families, all 16 of these states imposed their own state work requirements on these families.
Thirteen of the 26 states used state MOE in separate programs to provide cash assistance to certain legal immigrants not eligible for federal TANF aid; these 13 states still apply a state work requirement for these families as well. Overall, approximately nine-tenths of the families receiving cash assistance in separate state programs are still subject to a state work requirement. While states generally imposed work requirements, about half of them also have policies in place to exclude families facing significant barriers to work from work requirements. For example, 13 states exclude families with an adult who is disabled and 13 states exclude families that care for someone with a disability. States Excluded 11 Percent of Adult Families from Federal and State Time Limits States generally targeted time limit exclusions to families they considered hard to employ, families that were working but not earning enough to move off of TANF, and families that were cooperating with program requirements but had not yet found employment. During fall 2001, states excluded from federal or state time limits 11 percent of the 1.4 million cash assistance families with adults. The number of families excluded from time limits may increase in the future because most families have not yet reached their federal or state-imposed cash assistance time limit. Federal 20 Percent Extension and State-Funded Time Limit Exclusion Policies Generally Target Working or Hard-to-Employ Families States targeted time limit exclusions to families they considered “hard to employ,” families that were working but not earning enough to move off of TANF, and families that were cooperating with program requirements. The majority of states excluded “hard-to-employ” families in which the parent had a disability or was caring for a child with a disability, families dealing with domestic violence, and families with a head of household of advanced age. (See figure 2.)
Some of these exclusions are granted on a temporary basis (such as for disabled recipients pending transfer to the Supplemental Security Income program), and others are granted for longer periods of time (such as for family heads of advanced age). Twenty-two states exclude working families from time limits, either through the federal 20 percent extension or by using state-only funds. Maryland and Illinois, for example, “stop the clock” for working families by funding them with state-only dollars. Officials from both states told us that their states adopted this policy to reward working families for complying with program requirements. States that exclude families by using state-only funds use similar criteria to those used by states that rely solely on the federal 20 percent hardship extension. Using the 20 percent extension, states are able to extend time limits for a broad range of families, such as families cooperating with program requirements or making a “good faith effort” to find employment. For example, officials from Michigan, a state that commingles all of its state funds with federal funds, told us that they will use the 20 percent extension for all recipients following the rules of the program; if the number of families they want to provide an extension to begins to exceed 20 percent, they plan to continue providing assistance through state funds. Almost half of the states exclude families making a good faith effort to find employment. While States Had Excluded 11 Percent of Families with Adults from Time Limits as of Fall 2001, This Percentage May Increase as More Families Reach Their Time Limits States have excluded from time limits 11 percent of the approximately 1.4 million families with adults receiving federal- or state-funded cash assistance. (See Appendix III for the percent of exclusions by state.)
As shown in figure 3, 45 percent of these families—mostly in Illinois, Massachusetts, and New York—were excluded through states’ use of state-only funds. An additional 43 percent of the families were excluded from time limits under federal waivers granted to states before welfare reform to conduct demonstration programs. Many of these waivers remain in effect. Even though states are free to exclude all state-funded families from time limits, 64 percent of state-funded families that include adults were still subject to a time limit imposed by the state. Twenty-six of the 33 states with state-only funds apply a state time limit to some or all of their state-funded cases. (See Appendix IV for additional information on state choices regarding funding and time limits.) The percentage of the caseload that is excluded from time limits may increase, since most families have not reached their time limit. In 22 states TANF had not been in effect long enough for families to reach either the federal or the state time limit by the time we conducted our survey. Even in those states where it was possible to have received 60 months of cash assistance, many families had not reached their time limit because they have cycled on and off welfare, slowing their accrual of time on assistance. As a result, only 15 states had begun to use the federal 20 percent hardship extension, and all of these states were applying it to less than 6 percent of their total caseload. One state we visited, California, told us it estimated that over 100,000 families with adults would reach the federal time limit in the next year. California plans to use state-only funds to continue aid beyond 60 months to children by removing the adult from the case. California also plans to continue aid to families that are making a good faith effort to find employment and to families that are hard to employ because the adult is aged, disabled, caring for a disabled family member, or experiencing domestic violence.
States’ Experiences with TANF Highlight Issues for Reauthorization States’ experiences with implementing TANF time limits and work requirements for families receiving cash assistance highlight key issues related to reauthorization of TANF provisions. Officials from the four states we visited and eight states we interviewed shared their views on work requirements and time limits, and the flexibility they have to implement them. Some state officials commented on the limited extent of states’ experiences with time limits, given that many families have not yet reached their time limits, as well as their inexperience with operating TANF during times of state budget pressures. State officials also highlighted their concerns about the federal 90 percent work participation requirements for two-parent families. States Support TANF Flexibility, but Some States Have Concerns In general, state officials we spoke with were supportive of time limits and work requirements. For example, Maryland officials said that one advantage of time-limited assistance and work requirements was that families understood that the receipt of cash assistance was no longer an entitlement, thereby changing the culture of welfare. In addition, another Maryland official noted that time limits encourage caseworkers to link families, particularly the hard to employ, to the services they need to become self-sufficient. States also said that, for the most part, flexibility in implementing time limits and work requirements was important in allowing them to meet the needs of special populations while supporting the federal goal of reducing dependency. The flexibility in implementing their own time limits helps to ensure that states can adapt the federal program to meet state and local needs while still emphasizing the transitional nature of cash assistance through time limits.
While state officials were generally supportive of TANF flexibility, officials in almost all of the states we spoke with expressed the desire to have more flexibility in counting education and training towards the federal work participation rate. Some state officials also expressed a desire to count activities such as mental health and substance abuse counseling towards the federal work participation rate. The states that did not opt for additional flexibility through the use of state-only funds expressed two general concerns. First, they were uncertain about the consequences of their funding flexibility under TANF. A Mississippi TANF official told us that the state plans to follow the federal regulations rather than risk penalties by establishing its own program rules that could become confused with the federal regulations. Second, Colorado state officials were concerned about the potential administrative burden that could result from creating separate funding or programs that used state-only funds. Changing Economic Conditions May Pose Difficult Choices for States in the Future Up until very recently, TANF has been implemented under conditions of strong economic growth, with declining cash assistance caseloads and the resulting increase in resources available to states to assist families. This has fostered increased flexibility in how state officials use their federal TANF and state maintenance-of-effort dollars. Several states we interviewed now face budget pressures and increasing cash assistance caseloads, which could affect the policy choices they make about funding mechanisms and time limit exclusions in the future. This could affect some states’ choices regarding continued support for families that take longer to become self-sufficient. California state officials noted that the state’s plan to continue aid for all children whose parents have reached time limits may pose a future financial burden on the state.
States’ Experiences with Adequacy of the 20 Percent Federal Extension May Change as More Families Reach Time Limits State officials generally thought the 20 percent federal extension was adequate now, but were less sure about the future, given that many families have not yet reached the 60-month time limit. Because states’ experiences with families reaching their time limits are still limited, it is important to emphasize that much remains unknown nationwide about the numbers, characteristics, and experiences of families who have reached or are close to reaching federal time limits on assistance. In the past, we have recommended that HHS work with state officials on this issue to promote research and provide guidance that would encourage and enable state officials to identify families who reach the 60-month time limit before they are able to support themselves through work. HHS has taken steps to do so. States Support the Goal of Helping Two-Parent Families Reduce Their Dependency but Would Like More Flexibility in the Federal Two-Parent Work Participation Rate State officials cited their difficulties in meeting the federal work participation target rate for two-parent families, and a few discussed their solution—serving two-parent families in separate state programs to avoid potential financial penalties. These states typically apply their own work requirements and time limits to these families, demonstrating the states’ expectation that these families take steps to reduce dependency even in the absence of a federal requirement to do so. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other members of the subcommittee may have. GAO Contacts and Acknowledgments For future contacts regarding this testimony, please call Cynthia M. Fagnoni at (202) 512-7215 or Gale Harris at (202) 512-7235.
Individuals making key contributions to this testimony included Sigurd Nilsen, Katrina Ryan, Elisabeth Anderson, Kara Kramer, Kim Reniero, and Patrick DiBattista. Appendix I: States’ Child-Only Caseloads and Reasons for Child-Only Cases Appendix II: State Funding Choices Most states use some form of state MOE funding to provide cash assistance to families. Eighteen states relied solely on federal or commingled federal and state funds in their TANF programs to provide cash assistance, as shown in figure 4. The other 33 states used at least one of the state MOE funding options in addition to commingled funds: 7 had segregated state funds; 17 had separate state programs; and 9 had both segregated funds and separate state programs. States across the nation have opted to use state MOE funds to provide cash assistance. (See table 3.) States with larger caseloads are more likely than states with smaller caseloads to use segregated funds or separate state programs; similarly, states with the smallest caseloads are more likely to commingle all of their state and federal funds. Even though two-thirds of the states have opted to use segregated funds, separate state programs, or both to provide cash assistance, only 11 percent of the total number of families receiving cash assistance are funded through these mechanisms. Appendix III: Percentage of TANF or MOE Families with Adult Recipients in Each State Not Subject to Federal or State Time Limits Delaware was not able to provide us with data on the families excluded from time limits in its caseload. Appendix IV: State-by-State Information on State Funding, Application of Time Limits, and Use of 20 Percent Extension Delaware was not able to provide data on its use of the federal 20 percent extension.
One-third of the 2.1 million cases of cash assistance provided under federal or state welfare programs in the fall of 2001 went to children only. Because no adults in these families received either Temporary Assistance for Needy Families (TANF) or state maintenance-of-effort funds, work requirements and time limits did not apply. Welfare reform legislation passed in 1996 included a caseload reduction credit that reduces each state's mandated participation rate if its welfare caseload declines. Because of the dramatic declines in welfare caseloads since 1996, states have generally seen their required participation rates for TANF programs greatly reduced. After accounting for cases involving only children, states excluded 11 percent of the remaining 1.4 million families with an adult from federal or state time limits. States' experiences with work requirements and time limits highlight key issues Congress may wish to consider when reauthorizing TANF provisions, including the relatively few families who have reached their time limits so far and the future adequacy of the federal 20 percent extension.
Trends in DOD’s Portfolio of Major Acquisitions There can be little doubt that we can—and must—get better outcomes from our weapon system investments. As seen in table 1, the value of these investments in recent years has been on the order of $1.5 trillion or more, making them a significant part of the federal discretionary budget. As one can see, cost and schedule growth for DOD’s aggregate portfolio remain significant. For example, when measured against programs’ first full estimates, the total cost of the portfolio has increased by nearly $448 billion, with an average delay of 28 months in reaching initial operating capability. Also, as indicated in table 1, 42 percent of programs have had unit cost growth of 25 percent or more. On the other hand, we have recently seen some modest improvements in a large number of programs. For example, 50 of the 80 programs in the portfolio reduced their total acquisition costs over the past year. A number of these programs have improved their buying power by finding efficiencies. While these modest improvements are encouraging, the enormity of the investment in acquisitions of weapon systems and its role in making U.S. fighting forces capable warrant continued attention and reform. The potential for savings and for better serving the warfighter argues against complacency. One Side of Acquisitions: Stated Policy and Process When one thinks of the weapon system acquisition process, the image that comes to mind is that of the methodical procedure depicted on paper and in flow charts. It is the “how to” side of acquisitions. DOD’s acquisition policy takes the perspective that the goal of acquisition is to obtain quality products that satisfy user needs in a timely manner at a fair and reasonable price. The sequence of events that comprise the process defined in policy reflects principles from disciplines such as systems engineering, as well as lessons learned and past reforms.
The body of work we have done on benchmarking best practices has also been reflected in acquisition policy. Recent significant changes to the policy include those introduced by the Weapon Systems Acquisition Reform Act of 2009 and the Department’s own “Better Buying Power” initiatives which, when fully implemented, should further strengthen practices that can lead to successful acquisitions. The policy provides a framework for developers of new weapons to gather knowledge at appropriate stages that confirms that their technologies are mature, their designs are stable, and their production processes are in control. These steps are intended to ensure that a program will deliver the required capabilities using the available resources—cost, schedule, technology, and personnel. Successful product developers ensure a high level of knowledge is achieved at key junctures in development. We characterize these junctures as knowledge points. While there can be differences of opinion over some of the specifics of the process, I do not believe there is much debate about the soundness of the basic steps. It is a clear picture of “what to do.” Table 2 summarizes these steps and best practices, organized around three key knowledge points in a weapon system acquisition. Our work over the last few years shows that, to the extent reforms like the Weapon Systems Acquisition Reform Act and DOD’s Better Buying Power initiatives are being implemented, they are having a positive effect on individual programs. For example, we found that over 80 percent of the 38 programs included in our annual assessment of weapon programs this year had conducted a “should-cost” analysis—one of DOD’s Better Buying Power initiatives—and reported anticipated savings of approximately $24 billion, with more than half of this amount to be reallocated to meet other DOD priorities.
In addition, we recently reviewed several programs to determine the impact of the Weapon Systems Acquisition Reform Act and found that the programs are: making early tradeoffs among cost, schedule, and technical performance; developing more realistic cost and schedule estimates; increasing the amount of testing during development; and placing greater emphasis on reliability. These improvements do not yet signify a trend or suggest that a corner has been turned; in fact, we found in our annual assessment of programs that most are not yet fully following a knowledge-based acquisition approach. The reforms themselves still face implementation challenges, such as staffing and clarity of guidance, and will doubtless need refining as experience is gained. We have made a number of recommendations on how DOD can improve implementation of the Weapon Systems Acquisition Reform Act. To a large extent, the improvements we have seen tend to result from external pressure exerted by higher-level offices within DOD on individual programs. In other words, the reforms have not yet been institutionalized within the services. We still see the employment of other practices—not prescribed in policy—such as concurrent testing and production, optimistic assumptions, and delayed testing. These are the same kinds of practices that have perpetuated the significant cost growth and schedule delays that have persisted in acquisitions through the decades. They share a common dynamic: moving forward with programs before the knowledge needed to reduce risk and make sound decisions is sufficient. We have found that programs proceed through the critical design review without having a stable design, although we have made recommendations on the importance of this review and how to prepare for it. Programs also proceed with testing and production before they are ready. The F-35 Joint Strike Fighter program is a classic example of how concurrency can erode the cost and schedule of an acquisition.
Further, some programs are significantly at odds with the acquisition process. Among these I would number the Ballistic Missile Defense System, Littoral Combat Ship, and airships. We also recently reported on the Unmanned Carrier-Launched Airborne Surveillance and Strike program which proposes to complete the main acquisition steps of design, development, testing, manufacturing, and initial fielding before it formally enters the acquisition process. The fact that programs adopt practices that run counter to what policy and reform call for is evidence of the other pressures and incentives that significantly influence program practices and outcomes. I will turn to these next. Another Side of Acquisitions: Incentives to Deviate from Sound Practices An oft-cited quote of David Packard, former Deputy Secretary of Defense, is: “We all know what needs to be done. The question is why aren’t we doing it?” To that point, reforms have been aimed mainly at the “what” versus the “why.” They have championed sound management practices, such as realistic estimating, thorough testing, and accurate reporting. Reforms have also added program decision points, reviews, and reporting requirements to help ensure these practices are used. We need to consider that these reforms mainly address the mechanisms of weapon acquisitions. Seen this way, the practices prescribed in policy are only partial remedies. The acquisition of weapons is much more complex than this and involves very basic and strongly reinforced incentives to pursue weapons that are not always feasible and affordable. Accordingly, rival practices, not normally viewed as good management techniques, comprise an effective stratagem for fielding a weapon because they reduce the risk that the program will be interrupted or called into question. I will now discuss several factors that illustrate the pressures that create incentives to deviate from sound acquisition management practices. 
Mismatch between Requirements and Resources A key cause of poor acquisition outcomes is the mismatch between the validated capability requirements for a new weapon system and the appropriate systems engineering knowledge, funding, and time planned to develop that new system. DOD’s three key decision-making processes for acquiring weapon systems—requirements determination, resource allocation, and the acquisition management system—are fragmented, making it difficult for the department to achieve a balanced mix of weapon systems that are achievable and affordable and provide the best military value to the warfighter when the warfighter needs them. In addition, these processes are led by different organizations, making it difficult to hold any one person or organization accountable for saying “no” to an unrealistic requirement or for tempering optimistic cost and schedule estimates. While the department has worked hard to overcome this fragmented decision-making paradigm, and policies have been written to force more integrated decisions and more accountability, we continue to see programs that have experienced cost and schedule growth. This is because weapon system programs often begin with validated requirements that have not been informed by solid systems engineering practices, often represent “desires” more than true “needs,” carry optimistic cost and schedule estimates, and, all too often, are unachievable. Program managers are handed a business case that can be fatally flawed and usually have no recourse other than to execute it as best they can; as a result, they cannot reasonably be held accountable. Conflicting Demands The process of planning and executing a program is (1) shaped by many different participants and (2) far more complex than the seemingly straightforward purchase of equipment to defeat an enemy threat.
Collectively, as participants’ needs are translated into actions on weapon programs, the purpose of such programs transcends efficiently filling voids in military capability. Weapons have become integral to policy decisions, definitions of roles and functions, justifications of budget levels and shares, service reputations, the influence of oversight organizations, defense spending in localities, the industrial base, and individual careers. Consequently, the reasons “why” a weapon acquisition program is started are manifold, and acquisitions do not merely provide technical solutions. While individual participants see their needs as rational and aligned with the national interest, collectively these needs create incentives for pushing programs and encouraging undue optimism, parochialism, and other compromises of good judgment. Under these circumstances, persistent performance problems, cost growth, schedule slippage, and difficulties with production and field support cannot all be attributed to errors, lack of expertise, or unforeseeable events. Rather, a level of these problems is embedded as the undesirable, but apparently acceptable, consequence of the process. These problems persist not because they are overlooked or under-regulated, but because they enable more programs to survive and thus more needs to be met. The problems are not the fault of any single participant; they are the collective responsibility of all participants. Thus, the various pressures that accompany the reasons why a program is started can also affect and compromise the practices employed in its acquisition. Funding Dynamics Several characteristics of the way programs are funded create incentives in decision-making that can run counter to sound acquisition practices. First, there is an important difference between what investments in new products represent for a private firm and for DOD.
In a private firm, a decision to invest in a new product, like a new car design, represents an expense. Company funds must be expended that will not provide a revenue return until the product is developed, produced, and sold. Thus, leading companies have an incentive to follow a disciplined approach and acquire requisite knowledge to facilitate successful product development. To do otherwise could have serious economic consequences. In DOD, there can be few consequences if funds are not used efficiently. For example, as has often been the case in the past, agency budgets generally do not fluctuate much year to year, and programs that experience problems tend to eventually receive more funding to get well. Also, in DOD, new products in the form of budget line items can represent revenue. An agency may be able to justify a larger budget if it can win approval for more programs. Thus, weapon system programs can be viewed both as expenditures and as revenue generators. Second, budgets to support major program commitments must be approved well ahead of when the information needed to support the decision is available. Take, for example, a decision to start a new program scheduled for August 2016. Funding for that decision would have to be included in the Fiscal Year 2016 budget. This budget would be submitted to Congress in February 2015—18 months before the program decision review is actually held. DOD would have committed to the funding before the budget request went to Congress. It is likely that the requirements, technologies, and cost estimates for the new program—essential to successful execution—may not be very solid at the time of funding approval. Once the hard-fought budget debates put money on the table for a program, it is very hard to take it away later, when the actual program decision point is reached. Third, to the extent a program wins funding, the principles and practices it embodies are thus endorsed.
So, if a program is funded despite having an unrealistic schedule or requirements, that decision reinforces those characteristics instead of sound acquisition practices. Pressure to make exceptions for programs that do not measure up is rationalized in a number of ways: an urgent threat needs to be met; a production capability needs to be preserved; despite shortfalls, the new system is more capable than the one it is replacing; and the new system’s problems will be fixed in the future. It is the funding approvals that ultimately define acquisition policy. Industry Relationship DOD has a unique relationship with the defense industry that differs from the commercial marketplace. The combination of a single buyer (DOD), a few very large prime contractors in each segment of the industry, and a limited number of weapon programs constitutes a structure for doing business that is altogether different from a classic free market. For instance, there is less competition, more regulation, and, once a contract is awarded, the contractor has considerable power. Moreover, in the defense marketplace, the firm and the customer have jointly developed the product and, as we have reported previously, the closer the product comes to production, the more the customer becomes invested and the less likely it is to walk away from that investment. While a defense firm and a military customer may share some of the same goals, certain key goals are different. Defense firms are accountable to their shareholders and can also build constituencies outside the direct business relationship between them and their customers. This relationship does not fit easily into a contract. J. Ronald Fox, author of Defense Acquisition Reform 1960-2009: An Elusive Goal, sums up the situation as follows. “Many defense acquisition problems are rooted in the mistaken belief that the defense industry and the government-industry relationship in defense acquisition fit naturally into the free enterprise model.
Most Americans believe that the defense industry, as a part of private industry, is equipped to handle any kind of development or production program. They also by and large distrust government ‘interference’ in private enterprise. Government and industry defense managers often go to great lengths to preserve the myth that large defense programs are developed and produced through the free enterprise system.” But neither the defense industry nor defense programs are governed by the free market; “major defense acquisition programs rarely offer incentives resembling those of the commercial marketplace.” The Right People Dr. Fox also points out that in private industry, the program manager concept works well because the managers have genuine decision-making authority, years of training and experience, and understand the roles and tactics within government and industry. In contrast, Dr. Fox concludes that DOD program managers often lack the training, experience, and stature of their private sector counterparts, and are influenced by others in their service, DOD, and Congress. Other acquisition reform studies over the past decade have highlighted this issue as well. The studies highlight the need for a more professional program manager cadre within each of the military services, and new incentives and improved career opportunities for acquisition personnel. In 2006, we reported that program managers indicated to us that the acquisition process does not enable them to succeed because it does not empower them to make decisions on whether the program is ready to proceed forward or even to make relatively small trade-offs between resources and requirements as unexpected problems are encountered. Program managers said that they are also not able to make personnel shifts to respond to changes affecting the program. We have also reported on the lack of continuity in the tenure of key acquisition leaders across the timeframe of individual programs. 
A major acquisition can have multiple program managers during product development. For example, the F-35 Joint Strike Fighter program has had six different program managers since it was approved to start development in 2001. Other key positions throughout the acquisition chain of command also turn over frequently. For example, the average tenure of the Under Secretary of Defense for Acquisition, Technology and Logistics since the position was established in 1986 has been only about 22 months. Consequently, DOD acquisition executives do not necessarily stay in their positions long enough to develop the needed long-term perspective or to effectively change traditional incentives. Moreover, their decisions can be overruled through the cooperative actions of other acquisition participants. The effectiveness of reforms to the acquisition process depends in large measure on a cadre of good people who may be inadequately prepared for their positions or forced into the near-term perspective of their tenures. In this environment, the effectiveness of management can rise and fall on the strength of individuals; accountability for long-term results is, at best, elusive. Where Do We Go From Here? I do not necessarily subscribe to the view that the acquisition process is too rigid and cumbersome. Clearly, this could be the case if every acquisition followed the same process and strategy without exception, but they do not. We repeatedly report on programs where modifications of the process are approved. DOD refers to this as tailoring, and we see plenty of it. While one should always be looking to improve the process and make it more efficient, at this point the focus should be on building on existing reforms by holding decision makers more accountable, tackling existing incentives, and providing new ones.
To do this, we need to look differently at the familiar outcomes of weapon system acquisitions—such as cost growth, schedule delays, large support burdens, and reduced buying power. Some of these undesirable outcomes are clearly due to honest mistakes and unforeseen obstacles. Others, however, occur not inadvertently but because they are encouraged by the incentive structure. I do not think it is sufficient to define the problem as an objective process that is broken. Rather, it is more accurate to view the problem as a sophisticated process whose consistent results indicate that it is in equilibrium. The rules and policies are clear about what to do, but other incentives force compromises. The persistence of undesirable program outcomes suggests that these are consequences that participants in the process have been willing to accept. Drawing on our extensive body of work in weapon system acquisition, I offer six areas of focus for where to go from here. These are not intended to be all-encompassing, but rather practical places to start the hard work of realigning incentives with desired results. Hold decision makers accountable from top to bottom: Our work over the years benchmarking best practices at leading commercial product developers and manufacturers has yielded a wide range of best practices for efficiently and quickly developing new products to meet market needs. Firms we visited described an integrated process for establishing product requirements, making tradeoffs between cost and product performance well ahead of a decision to begin product development, and ensuring that all decision makers—requirements setters, product developers, and finance—agree to and are held accountable for the business case presented to the program manager for execution of a new product’s development.
These firms had trained professionals as program managers with backgrounds in technical fields such as engineering and various aspects of project management. Once empowered with an achievable, executable business case, they were in charge of product development from beginning to end. Therefore, they could be held accountable for meeting product development cost, schedule, and performance targets. Today, getting managers to make hard decisions when necessary, and to say no to those who push unrealistic or unaffordable plans, continues to be a challenge because the critical processes to acquire a new weapon system are segregated, independent, and have different goals. DOD must be open to examining best practices, implementing new rules that truly integrate these processes into one, and holding all communities accountable for decisions. I do not pretend to have all the answers on how to change the current environment, but it is clear that top decision makers cannot be held accountable to work in concert on such large and critical investments unless they begin with an executable business case. Congressional and DOD leadership must be in concert on this. Attract, train, and keep acquisition staff and management: Dr. Fox’s book does an excellent job of laying out the flaws in the current way DOD selects, trains, and provides a career path for program managers. I refer you to this book, as it provides sound criticisms. We must also think about supporting people below the program manager who are instrumental to program outcomes, including engineers, contracting officers, cost analysts, testers, and logisticians. There have been initiatives aimed at program managers and acquisition personnel, but they have not been consistent over time. RAND, for example, recently analyzed program manager tenure in DOD and found that the intent of policies designed to lengthen tenure may not have been achieved, and that no enforcement mechanism has been readily apparent over time.
RAND indicates this could be because of the fundamental conflict that exists between what military officers need to do to be promoted and their tenure as program managers. Unless these two things are aligned, such that experience and tenure in an acquisition program can be advantageous for promotion, it appears unlikely that tenure policies will consistently yield positive results. Lengthening the tenure of acquisition executives is a more challenging prospect, in that they are arguably at the top of their profession and already expert. What can be done to keep good people in these jobs longer? I am not sure of the answer, but I believe part of the problem is that the contentious environment of acquisition grinds good people down at all levels. In top commercial firms, a new product development is launched with a strong team, corporate funding support, and a timeframe of 5 to 6 years or less. In DOD, new weapon system development can take twice as long, experience turnover in key positions, and must contend for funding every year. This does not necessarily make for an attractive career. Several years ago, the Defense Acquisition Performance Assessment Panel recommended establishing the military departments’ service acquisition executives as five-year, fixed-term positions to add leadership continuity and stability to the acquisition process. I believe something like this recommendation is worth considering. And perhaps the military services should examine the current career track for acquisition officers to ensure it provides appropriate training, rewards, and opportunities for advancement. Reinforce desirable principles at the start of new programs: The principles and practices programs embrace are determined not by policy, but by decisions. These decisions involve more than the program at hand: they send signals as to what is acceptable. If programs that do not abide by sound acquisition principles win funding, then the seeds of poor outcomes are planted.
The highest point of leverage is at the start of a new program. Decision makers must ensure that new programs exhibit desirable principles before they are approved and funded. Programs that present well-informed acquisition strategies with reasonable, incremental requirements and realistic assumptions about available funding should be given credit for a good business case. As an example, the Presidential Helicopter, Armored Multi Purpose Vehicle, and Enhanced Polar System are all acquisitions slated to start in 2014, with development estimates currently ranging from nearly $1 billion to over $2.5 billion. These and other programs expected to begin system development in 2014 could be viewed as a “freshman” class of acquisitions. It would be beneficial for DOD and Congress to assess them as a group to ensure that they embody the right principles and practices. Recent action by DOD to terminate the Army’s Ground Combat Vehicle program, which was slated to start this year, and instead focus efforts on selected science and technology activities reinforces sound principles. On the other hand, approving the Unmanned Carrier-Launched Airborne Surveillance and Strike program despite its running counter to sound principles sends a conflicting message. Identify significant program risks up front and resource them: Weapon acquisition programs by their nature involve risks, some much more than others. The desired state is not zero risk or the elimination of all cost growth. But we can do better than we do now. The primary consequences of risk are often the need for additional time and money. Yet when significant risks are taken, they are often taken under the guise that they are manageable and that risk mitigation plans are in place. In my experience, such plans do not set aside the time and money to account for the risks taken. In today’s climate, this is understandable—any sign of weakness in a program can doom its funding. This needs to change.
If programs are to take significant risks, whether technical in nature or related to an accelerated schedule, these risks should be declared and their resource consequences acknowledged. Less risky options and potential off-ramps should be presented as alternatives. Decisions can then be made with full information, including decisions to accept the risks identified. If the risks are acknowledged and accepted by DOD and Congress, the program should be supported. A potential way to reduce the risks taken in acquisition programs is to address the way in which DOD leverages its science and technology enterprise. Leading commercial companies save time and money by separating technology development from product development and fully developing technologies before introducing them into the design of a system. These companies develop technology to a high level of maturity in a science and technology environment, which is more conducive to the ups and downs normally associated with the discovery process. This affords the opportunity to gain significant knowledge before committing to product development and has helped companies reduce costs and time from product launch to fielding. Although DOD’s science and technology enterprise is engaged in developing technology, there are organizational, funding, and process impediments that make it difficult to bring technologies into acquisition programs. For example, it is easier to move immature technologies into weapon system programs because such programs tend to attract bigger budgets than science and technology projects. It would be beneficial to create stronger and more uniform incentives that encourage the development of technologies in the right environment, reduce the cost of later changes, and encourage the technology and acquisition communities to work more closely together to deliver the right technologies at the right time.
More closely align budget decisions and program decisions: Because budget decisions are often made years ahead of program decisions, they depend on the promises and projections of program sponsors. Contentious budget battles create incentives for sponsors to be optimistic and make it hard to change course as projections fade in the face of information. This is not about bad actors; rather, optimism is a rational response to the way money flows to programs. These consequences aside, planning ahead to make sure money is available in the future is a sound practice. I am not sure there is an obvious remedy for this. But I believe ways to have budget decisions follow program decisions should be explored, without sacrificing the discipline of establishing long-term affordability. Investigate other tools to improve program outcomes: There are ways to structure an acquisition program that would create opportunities for better outcomes. Key among these are: limits on development time (time-certain development of 5 years), which limit the scope of the development task; evolutionary or incremental product development, wherein the initial increment of a new weapon system adds value for the warfighter, is delivered to the field faster, and can be followed with block upgrades as technologies and funding present themselves; and strategies that focus more on incentivizing overall cost reduction than on profit limitation. DOD should investigate the potential of these concepts as it structures and manages programs moving forward. Central to opening an environment for these tools is the need to focus on requirements that are well understood and manageable. This would allow the department to offer contracts that place more cost risk on the contractor and less on the government. A prime example of this is the KC-46 Tanker program, which is being developed under a fixed-price development contract with incentives for holding costs down.
The government and industry felt comfortable with that arrangement specifically because it was an incremental program based on a commercial airframe. The first development program is to militarize a commercial aircraft to replace a portion of the existing KC-135 fleet. Future increments may be approved to replace the rest of the KC-135 fleet and the KC-10 fleet, providing DOD an opportunity to include new technologies. Also, the contractor had significant systems engineering knowledge about the design and the ability to meet the requirements. A word of caution: if time-certain development (e.g., 5 years), incremental acquisition strategies, and contracts that incentivize cost reduction over profit limitations are to be explored, the government will need to examine whether it has the contract management and negotiation expertise to do this. DOD has begun to examine ways to strengthen contract incentives and restructure profit regulations through its Better Buying Power initiatives; however, it is too soon to tell whether these efforts will lead to needed improvements. Mr. Chairman, this concludes my statement and I would be happy to answer any questions. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD's acquisition of major weapon systems has been on GAO's high-risk list since 1990. Over the past 50 years, Congress and DOD have continually explored ways to improve acquisition outcomes, including reforms that have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. While some progress has been made, too often GAO reports on the same kinds of problems with acquisition programs today that it did over 20 years ago. The topic of today's hearing is: “Reform of the Defense Acquisition System.” To address the topic, this testimony discusses (1) the performance of DOD's major defense acquisition program portfolio; (2) the management policies and processes currently in place to guide those acquisitions; (3) the incentives to deviate from otherwise sound acquisition practices; and (4) suggestions to temper these incentives. This statement draws from GAO's extensive body of work on DOD's acquisition of weapon systems. The Department of Defense (DOD) must get better outcomes from its major weapon system investments, which in recent years have totaled $1.5 trillion or more. Recently, there have been some improvements, owing in part to recent reforms. For example, 50 of the 80 weapon system programs in the portfolio reduced their total acquisition costs over the past year, and a number of them also improved their buying power by finding efficiencies. Still, cost and schedule growth remain significant; 42 percent of programs have had unit cost growth of 25 percent or more. DOD's acquisition policy provides a structured framework for developers to gather knowledge at appropriate stages that confirms that their technologies are mature, their designs are stable, and their production processes are in control.
The Weapon Systems Acquisition Reform Act of 2009 and DOD's recent “Better Buying Power” initiatives introduced significant changes that, when fully implemented, should further strengthen practices that can lead to successful acquisitions. While recent reforms have benefited individual programs, it is premature to say there is a trend or that a corner has been turned. The reforms still face implementation challenges and have not yet been institutionalized within the services. Reforms that focus mainly on the mechanisms of the acquisition process are only partial remedies because they do not address incentives to deviate from sound practices. Weapons acquisition is a complex system, complete with incentives to pursue programs that are not always feasible and affordable. These incentives stem from several factors. For example, the fragmented decision-making paradigm in DOD and the different participants involved in the acquisition process impose conflicting demands on weapon programs so that their purpose transcends filling voids in military capability. Also, the budget process forces funding decisions to be made well in advance of program decisions, encouraging undue optimism. Finally, DOD program managers' short tenures and limitations in experience and training can foster a short-term focus and put them at a disadvantage with their industry counterparts.
Drawing on its extensive body of work in weapon systems acquisition, GAO sees several areas of focus regarding where to go from here: 1) examining best practices to integrate critical requirements, resources, and acquisition decision-making processes; 2) attracting, training, and retaining acquisition staff and managers so that they are both empowered and accountable for program outcomes; 3) at the start of new programs, using funding decisions to reinforce desirable principles such as well-informed acquisition strategies; 4) identifying significant risks up front and resourcing them; 5) exploring ways to align budget decisions and program decisions more closely; and 6) investigating tools, such as limits on system development time, to improve program outcomes. These are not intended to be all-encompassing, but rather, practical places to start the hard work of holding decision makers more accountable and realigning incentives with desired results.
Characteristics of the 1981 Block Grants Block grants are broader in scope and offer greater state discretion in the use of funds than categorical programs; in addition, block grants allocate funding on the basis of a statutory formula. Block grants have been associated with a variety of goals, including encouraging administrative cost savings, decentralizing decisionmaking, promoting coordination, spurring innovation, and providing opportunities to target funding. However, block grants have historically accounted for only a small proportion (11 percent) of grants to states and localities, as figure 1 shows. Before OBRA created nine block grants in 1981, three block grants had been created under President Nixon for community development, social services, and employment and training. More recently, the Job Training Partnership Act was passed in 1982, and the largest block grant program in terms of funding, the Surface Transportation Program, was created in 1991. (See app. II for a more detailed discussion of block grants.) OBRA Created Nine Block Grants Under OBRA, the administration of numerous federal domestic assistance programs was substantially changed by consolidating more than 50 categorical grant programs and 3 existing block grants into 9 block grants and shifting primary administrative responsibility for these programs to the states. The OBRA block grants carried with them significantly reduced federal data collection and reporting requirements as compared to the previous categorical programs, although some minimal requirements were maintained to protect federal interests. Overall, federal funding was reduced by 12 percent, or about $1 billion, but the reductions varied by block grant. (See app. III for a more detailed discussion of the 1981 block grants. App. VI includes a bibliography on block grants.)
States were given broad discretion under the block grants to decide what specific services and programs to provide, as long as they were directly related to the goals of the grant program. Four of the block grants were for health, three for social services, and one each for education and community development. The three block grants that were in place prior to OBRA but were modified by OBRA were (1) the Health Incentives Grant for Comprehensive Public Health, which was incorporated into the Preventive Health and Health Services Block Grant; (2) the Title XX Block Grant, which was expanded into the new Social Services Block Grant; and (3) the Community Development Block Grant, which had been in existence since 1974. Under OBRA, Community Development Block Grant funds for cities with a population under 50,000 were given to the states to allocate. In two cases (the Primary Care and Low-Income Home Energy Assistance Block Grants), a single categorical program was transformed into a block grant. Overall Federal Funding Reduced Overall federal funding for the block grants in 1982 was about 12 percent, or $1 billion, below the 1981 level for the categorical programs, as table 1 shows. However, changes in federal funding levels for the block grants varied by block grant—ranging from a $159 million, or 30-percent, reduction in the Community Services Block Grant, to a $94 million, or 10-percent, increase in the Community Development Block Grant. The Social Services Block Grant was reduced by the largest amount—$591 million, representing a 20-percent reduction. Funding and Other Requirements Viewed as Less Onerous The funding and other federally imposed requirements attached to the 1981 block grants were generally viewed by states as less onerous than under the prior categorical programs. 
Funding requirements were used to (1) advance national objectives (for example, providing preventive health care or, more specifically, treating hypertension); (2) protect local service providers who have historically played a role in service delivery; and (3) maintain state contributions. Set-aside requirements and cost ceilings were used to ensure that certain services were provided. For example, the Preventive Health and Health Services Block Grant required that 75 percent of its funding be used for hypertension. A limitation in the Low-Income Home Energy Assistance Block Grant specified that no more than 15 percent of funds be used for residential weatherization. Pass-through requirements—notably the requirement that 90 percent of 1982 allocations under the Community Services Block Grant be awarded to community action agencies—were used to protect local service providers. The community action agencies were the primary service providers under the prior categorical program. Finally, provisions were included to maintain state involvement by preventing states from substituting federal for state funds. Data Collection and Reporting Requirements Reduced Block grants carried with them significantly reduced federal data collection and reporting requirements compared with categorical programs. Under the categorical programs, states were required to comply with specific procedures for each program, whereas the block grants had only a single set of procedures, and the administration decided to largely let the states interpret the compliance provisions in the statute. Federal agencies were prohibited from imposing burdensome reporting requirements and, for many of the block grants, states were allowed to establish their own program reporting formats. However, some data collection and reporting requirements were contained in each of the block grants as a way to ensure some federal oversight in the administration of block grants.
Block grants generally require the administering federal agency to report to the Congress on program activities; provide program assessment data, such as the number of clients served; or conduct compliance reviews of state program operations. Basic reporting requirements also exist for state agencies. Experience Operating Under the 1981 Block Grants In general, the transition from categorical programs to block grants following the passage of OBRA was smooth, with states generally relying on existing management and service delivery systems. Although some continuity in funding was evident, states put their own imprint on the programs. States used a number of mechanisms to offset federal reductions for block grant programs. Block grant allocations were initially based on allocations under the prior categorical programs and were not sensitive to relative need, cost of providing services, or states’ ability to pay, raising concerns about their equity. Steps have been taken to improve program accountability, but problems such as noncomparable data persist. Finally, the lack of information on program activities and results may have contributed to the Congress’ adding funding constraints to block grants over time. (See app. IV for a more detailed discussion of the experience operating under the 1981 block grants.) Transition to Block Grants Smooth, Efficiencies Experienced For the most part, states were able to rely on existing management and service delivery systems. States consolidated offices or took other steps to coordinate related programs. For example, Florida’s categorical programs had been administered by several bureaus within the state’s education department; under the Education Block Grant all the responsibilities were assigned to one bureau. State officials generally found federal requirements placed on the states under the block grants created in 1981 to be less burdensome than those of the prior categorical programs.
For example, state officials in Texas said that before the Preventive Health and Health Services Block Grant, the state was required to submit 90 copies of 5 categorical grant applications. Moreover, states reported that reduced federal application and reporting requirements had a positive effect on their management of block grant programs. In addition, some state agencies were able to make more productive use of their staffs as personnel devoted less time to federal administrative requirements and more time to state-level program activities. Although states reported management efficiencies under the block grants, they also experienced increased grant management responsibilities because they had greater program flexibility and responsibility. The net effect of these changes on the level of states’ administrative costs could not be measured, due to the absence of uniform state administrative cost definitions and data, as well as a lack of comprehensive baseline data on the prior categorical programs. States Offset Funding Reductions Through Variety of Mechanisms States took a variety of approaches to help offset the 12-percent overall federal funding reduction experienced when the categorical programs were consolidated into the block grants. Together, these approaches helped states replace much of the funding reductions during the first several years. For example, some states carried over funding from the prior categorical programs. This was possible because many prior categorical grants were project grants that extended into fiscal year 1982. States also offset federal funding reductions through transfers among block grants. The 13 states transferred about $125 million among the block grants in 1982 and 1983. About $112 million, or 90 percent, entailed moving funds from the Low-Income Home Energy Assistance Block Grant to the Social Services Block Grant.
The transfer option was used infrequently between other block grants, although it was allowed for most. States also used their own funds to help offset reduced federal funding, but only for certain block grants. In the vast majority of cases, the 13 states increased their contributions to health-related block grants or the Social Services Block Grant—areas of long-standing state involvement—between 1981 and 1983. Federal Funding Allocations Based on Prior Categorical Grants Initially, most federal funding to states was distributed on the basis of their share of funds received under the prior categorical programs in fiscal year 1981. Such distributions may not be sensitive to populations in need, the relative cost of services in each state, or states’ ability to fund program costs. With the exception of the Social Services Block Grant and Community Development Block Grant, all block grants included a requirement that the allocation of funds take into account what states received in previous years in order to ease the transition to block grants. For example, under the Alcohol, Drug Abuse, and Mental Health Services Block Grant, funds were distributed among the states for mental health programs in the same proportions as they were distributed in fiscal year 1981. For alcohol and drug abuse programs, funds had to be distributed in the same proportions as in fiscal year 1980. Today, most block grants use formulas that more heavily weigh beneficiary population and other need-based factors. For example, the Community Development Block Grant uses a formula that reflects poverty, overcrowding, age of housing, and other measures of urban deterioration. The formula for the Job Training Partnership Act Block Grant considers unemployment levels and the number of economically disadvantaged people in the state. This formula is also used to distribute funds to local service delivery areas.
However, three block grants—Community Services, Maternal and Child Health Services, and Preventive Health and Health Services—are still largely tied to 1981 allocations. Steps Taken to Improve Accountability, but Problems Persist Block grants significantly reduced the reporting burden imposed by the federal government on states compared with previous categorical programs. However, states stepped in and assumed a greater role in oversight of the programs, consistent with the block grant philosophy. The 13 states we visited generally reported that they were maintaining their level of effort for data collection as under the prior categorical grants. States tailored their efforts to better meet their own planning, budgetary, and legislative needs. Given their new management responsibilities, states sometimes increased reporting requirements for local service providers. However, the Congress, which maintained interest in the use of federal funds, had limited information on program activities, services delivered, and clients served. This was because there were fewer federal reporting requirements, and states were given the flexibility to determine what and how to report program information. Due to the lack of comparability of information across states, state-by-state comparisons were difficult. In response to this situation, model criteria and standardized forms were developed in 1984 to help states collect uniform data, primarily through voluntary cooperative efforts by the states. However, continued limitations in data comparability reduced the usefulness of the data to serve the needs of federal policymakers, such as for allocating federal funds, determining the magnitude of needs among individual states, and comparing program effectiveness among states. Just as with data collection and reporting, the Congress became concerned about financial accountability in the federal financial assistance provided to state and local entities. 
With the passage of the Single Audit Act of 1984, the Congress promoted more uniform, entitywide audit coverage than was achieved under the previous grant-by-grant audit approach. We have found that the single audit approach has contributed to improving financial management practices in state and local governments. Systems for tracking federal funds have been improved, administrative controls over federal programs have been strengthened, and oversight of entities receiving federal funds has increased. However, the single audit process is not well designed to assist federal agencies in program oversight, according to our 1994 review. To illustrate, we found limitations in the usefulness of single audit reports. For example, reports do not have to be issued until 13 months after the end of the audit period, which many federal and state program managers found too late to be useful. In addition, managers are not required to report on the adequacy of their internal control structures, which would assist auditors in evaluating the entity’s management of its programs. Finally, the results of the audits are not being summarized or compiled so that oversight officials and program managers can easily access and analyze them to gain programwide perspectives and identify leads for follow-on audit work or program oversight. Yet, we believe that the Single Audit Act is an appropriate means of promoting financial accountability for block grants, particularly if our recommended improvements are implemented. State Flexibility Reduced Over Time as Funding Constraints Added Even though block grants were intended to increase state flexibility, over time additional constraints were placed on these programs that had the effect of “recategorizing” them. These constraints often took the form of set-asides, requiring a minimum portion of funds to be used for a specific purpose, and cost ceilings, specifying a maximum portion of funds that could be spent for a given purpose.
This trend reduced state flexibility. Many of these restrictions were imposed because of congressional concern that states were not adequately meeting national needs. In nine block grants, between fiscal years 1983 and 1991, the Congress added new cost ceilings and set-asides or changed existing ones 58 times. Thirteen of these amendments added new cost ceilings or set-asides to 9 of the 11 block grants we reviewed. Between fiscal years 1983 and 1991, the portion of funds restricted under set-asides increased in three block grants (Maternal and Child Health Services, Community Development, and Education). For example, set-asides for the Maternal and Child Health Services Block Grant restricted 60 percent of total funding (30 percent for preventive and primary care services for children and 30 percent for children with special health care needs). Lessons Learned Our research suggests that three lessons can be drawn from the experience with the 1981 block grants that would have value to the Congress as it considers creating new block grants. First, there clearly is a need to focus on accountability for results, and the Government Performance and Results Act may provide such a framework. Second, funding allocations based on distributions under prior categorical programs may be inequitable because they do not reflect need, ability to pay, and variations in the cost of providing services. Finally, states handled the transition to the 1981 block grants, but today’s challenges are likely to be greater. The programs being considered for inclusion in block grants not only are much larger but also, in some cases, such as Aid to Families with Dependent Children, which provides cash assistance to the poor, are fundamentally different from those programs included in the 1981 block grants. (See app. V for a more detailed discussion of lessons learned.)
Need to Focus on Accountability for Results One of the principal goals of block grants is to shift responsibility for programs from the federal government to the states. This includes priority setting, program management, and, to a large extent, accountability. However, the Congress and federal agencies maintain an interest in the use and effectiveness of federal funds. Paradoxically, accountability may be critical to preserving state autonomy. The 1981 block grant experience demonstrates that when adequate program information is lacking, the Congress may become more prescriptive. For example, funding constraints were added that limited state flexibility and, in effect, “recategorized” some of the block grants. Across the government, we have recommended a shift in the focus of federal management and accountability toward program results and outcomes, with correspondingly less emphasis on inputs and rigid adherence to rules. This focus on outcomes is particularly appropriate for block grants, given their emphasis on providing states flexibility in determining the specific problems they wish to address and the strategies they plan to employ to address those problems. The flexibility block grants allow should be reflected in the kinds of national information collected by federal agencies. The Congress and agencies will need to decide the kinds and nature of information needed to assess program results. While the requirements in the Government Performance and Results Act (GPRA) of 1993 (P.L. 103-62) apply to all federal programs, they also offer an accountability framework for block grants. Consistent with the philosophy underlying block grants, GPRA seeks to shift the focus of federal management and accountability away from a preoccupation with inputs, such as budget and staffing levels, and adherence to rigid processes to a greater focus on outcomes and results.
GPRA is in its early stages of implementation, but by the turn of the century, annual reporting under this act is expected to fill key information needs. Among other things, GPRA requires every agency to establish indicators of performance, set annual performance goals, and report on actual performance, in comparison with these goals, each March beginning in the year 2000. Agencies are now developing strategic plans (to be submitted by Sept. 30, 1997) articulating the agency’s mission, goals, and objectives preparatory to meeting these reporting requirements. In addition, although the single audit process is not well designed to assist federal agencies in program oversight, we believe that it is an appropriate means of promoting financial accountability for block grants, particularly if our recommended improvements are implemented. Equitable Funding Formulas Reflect Need and Ability to Pay The Congress will need to make tough decisions on block grant funding formulas. Public attention is frequently focused on allocation formulas because there will always be winners and losers. Formulas that better target funds share three characteristics: they include factors that consider (1) state or local need; (2) differences among states in the costs of providing services; and (3) state or local ability to contribute to program costs. To the extent possible, equitable formulas rely on current and accurate data that measure need and ability to contribute. We have reported on the need for better population data to better target funding to people who have a greater need for services. Today’s Transition Challenges Likely Greater Than in 1981 The experience managing the 1981 block grants contributed to increased state management expertise. Overall, states have become more capable of responding to public service demands and initiating innovations during the 1980s and 1990s. Many factors account for strengthened state government.
Beginning in the 1960s and 1970s, states modernized their government structures, hired more highly trained individuals, improved their financial management practices, and diversified their revenue systems. State and local governments have also taken on an increasing share of the responsibility for financing this country’s domestic expenditures. As figure 2 illustrates, state and local government expenditures have increased more rapidly than federal grants-in-aid. Between 1978 and 1993, state and local outlays increased dramatically, from $493 billion to $884 billion in constant 1987 dollars. Many factors contribute to state fiscal conditions, not the least of which are economic. In addition, state officials have expressed concern about unfunded mandates imposed by the federal government. Practices such as “off-budget” transactions could obscure the long-term impact of program costs in some states. In addition, while states’ financial position has improved on the whole, the fiscal gap between wealthier and poorer states and localities remains significant, in part due to federal budget cuts. We reported in 1993 that southeastern and southwestern states, because of greater poverty rates and smaller taxable resources, generally were among the weakest states in terms of fiscal capacity. New block grant proposals include programs that are much more expansive than block grants created in 1981 and could present a greater challenge for the states to both implement and finance. Nearly 100 programs in five areas—cash welfare, child welfare and abuse programs, child care, food and nutrition, and social services—could be combined, accounting for more than $75 billion of a total of about $200 billion in federal grants to state and local governments. The categorical programs, which were replaced by the OBRA block grants, accounted for only about $6.5 billion of the $95 billion 1981 grant outlays. 
In addition, the present block grant proposals include programs that are fundamentally different from those included in the 1981 block grants. For example, Aid to Families with Dependent Children (AFDC) provides direct cash assistance to individuals. Given that states tend to cut services and raise taxes during economic downturns to comply with balanced budget requirements, these cash assistance programs could experience funding reductions precisely when the AFDC population, and thus the need to assist these vulnerable populations, is likely to be growing. In addition, some experts suggest that states have not always maintained state funding for cash assistance programs in times of fiscal strain. Because the information presented in this report was largely based on previously issued reports, we did not obtain agency comments. We are sending copies of this report to the Director, Office of Management and Budget; the Secretaries of Education, Health and Human Services, Labor, and other federal departments; and other interested parties. If you or your staff have any questions concerning this report, please call me at (202) 512-7014. Major contributors to this report are listed in appendix VII. Objectives, Scope, and Methodology To review the experience with block grants, we examined our past work on the implementation of the block grants created by the Omnibus Budget Reconciliation Act of 1981 (OBRA). The work consists of a series of reports on each of the major block grants, which were released during the early to mid-1980s, as well as several summary reports of these findings released in 1985. To update this work, we reviewed our more recent work on block grants as part of our overall program oversight efforts, focusing on block grants in the health, education, and social services areas.
For example, in the early 1990s, we issued reports on the administration of the Low-Income Home Energy Assistance Block Grant (LIHEAP); drug treatment efforts under the Alcohol, Drug Abuse, and Mental Health Services Block Grant (ADMS); and oversight issues with respect to the Community Development Block Grant (CDBG). In 1992, we also looked at the distribution of funds under the Maternal and Child Health Services Block Grant (MCH). We have closely tracked the implementation of the Job Training Partnership Act (JTPA) Block Grant since its inception in 1982 and have looked at the Child Care and Development Block Grant, created in 1990, in the context of our other work on child care and early childhood programs. For a list of GAO and other key reports on block grants, refer to appendix VI. Our review of the implementation of the 1981 block grants was done in the early to mid-1980s and was based on work in 13 states. These 13 states—California, Colorado, Florida, Iowa, Kentucky, Massachusetts, Michigan, Mississippi, New York, Pennsylvania, Texas, Vermont, and Washington—received about 46 percent of the 1983 national block grant appropriations and accounted for about 48 percent of the nation’s population. The results may not be projected to the nation as a whole, although the 13 states represent a diverse cross section of the country. While our more recent oversight work updates some of our understanding of how block grants have been implemented, we have not done a systematic review of block grants themselves since these earlier reports. Background on Block Grants Block grants are broader in scope and offer greater state flexibility in the use of funds than categorical programs. They have been associated with a variety of goals, including encouraging administrative cost savings, decentralizing decisionmaking, promoting coordination, spurring innovation, and providing opportunity to target funding. 
Before OBRA created nine block grants, three block grants had been created by President Nixon for community development, social services, and employment and training. More recently, the Job Training Partnership Act was passed in 1982, and the largest block grant program in terms of funding, the Surface Transportation Program, was created in 1991. Today, a total of 15 block grants are in effect, although block grants, as they have historically, represent only a small proportion (about 11 percent) of all grants-in-aid to states and localities. Block Grant Features Block grants are a form of federal aid authorized for a wider range of activities compared with categorical programs, which tend to be very specific in scope. The recipients of block grants are given greater flexibility to use funds based on their own priorities and to design programs and allocate resources as they determine to be appropriate. These recipients are typically general purpose governments at the state or local level, as opposed to service providers (for example, community action organizations). Administrative, planning, fiscal, and other types of reporting requirements are kept to the minimum necessary to ensure that national goals are being accomplished. Federal aid is distributed on the basis of a statutory formula, which narrows the discretion of federal administrators and provides a sense of fiscal certainty to recipients. Block Grant Goals Block grants have been associated over the years with a variety of goals, each of which has been realized to a greater or lesser degree depending upon the specific block grant. Block grant proponents argue that administrative cost savings would occur as a by-product of authorizing funds in a broadly defined functional area as block grants do, rather than in several narrowly specified categories.
These proponents say that block grants provide a single set of requirements instead of numerous and possibly inconsistent planning, organization, personnel, paperwork, and other requirements of categorical programs. Decisionmaking is decentralized in that state and local recipients are encouraged to identify and rank their problems, develop plans and programs to deal with them, allocate funds among various activities called for by these plans and programs, and account for results. At the same time, block grants can eliminate federal intradepartmental coordination problems arising from numerous categorical grants in the same functional area, as well as help state and local recipients better coordinate their activities. Still another objective of the block grant is innovation— recipients are free to use federal funds to launch activities that otherwise could not be undertaken. By distributing aid on the basis of a statutory formula, block grants aim to better target federal funds on jurisdictions having the greatest need. However, a critical concern about block grants is whether the measures used—population, income, unemployment, housing, and overcrowding, among others—are accurate indicators of need and can be made available in a timely fashion. By contrast, a project-based categorical program would emphasize grantsmanship in the acquisition of federal aid and maximize the opportunities for federal administrators to influence grant award decisions. Block Grant History Three block grants were enacted in the mid-1970s under President Nixon. These were the Comprehensive Employment and Training Act of 1974 (CETA); the Housing and Community Development Act, which instituted CDBG; and Title XX of the Social Security Act. CETA called for locally managed but federally funded job training and public sector job creation programs. CDBG replaced categorical grant and loan programs under which communities applied for funds on a case-by-case basis. 
For the purpose of developing viable urban communities by providing decent housing and expanding economic opportunities, the block grant allowed communities two types of grants—entitlement and discretionary, the latter for communities with populations under 50,000. Title XX replaced prior social services programs and set forth broad national goals such as helping people become economically self-supporting; protecting children and adults from abuse, neglect, and exploitation; and preventing and reducing inappropriate institutional care. With the passage of OBRA under President Reagan, nine block grants were created. The discretionary program under CDBG became the Small Cities program. States were called on to administer this block grant program and required to give priority to activities benefiting low- and moderate-income families. Title XX was expanded into the Social Services Block Grant (SSBG), although because the initial block grant was already state administered and very broad in scope, there were few changes as a consequence of OBRA. OBRA also created block grants in the areas of health services, low-income energy assistance, substance abuse and mental health, and community services, in addition to the social services and community development block grants already mentioned. In 1982, the JTPA Block Grant was created. JTPA emphasized state and local government responsibility for administering federally funded job training programs and, unlike CETA, which it replaced, established partnerships with the private sector. Private industry councils (PIC), with a majority of business representatives, oversaw the delivery of job training programs at the local level. State job training coordinating councils also included private sector representation. The premise was that private sector leaders best understood what kinds of job training their communities needed and would bring a concern for efficiency and performance.
The Surface Transportation Program, established by the Intermodal Surface Transportation Efficiency Act of 1991, is currently the largest block grant program, with $17.5 billion awarded in fiscal year 1993. The act dramatically changed the structure of the Federal Highway Administration’s programs, which had been based on federal aid by road system—primary, secondary, urban, and rural. The Surface Transportation Program allows states and localities to use funds for construction or rehabilitation of virtually any kind of road. A portion of funds may also be used for transit projects or other nontraditional highway uses. Other block grants created after the 1981 block grants include the 1982 Federal Transit Capital and Operating Assistance Block Grant; the 1988 Projects for Assistance in Transition from Homelessness; and the 1990 Child Care and Development Block Grant. One block grant, ADMS, was broken into two different block grants in 1992. These block grants are the Community Mental Health Services Block Grant and the Prevention and Treatment of Substance Abuse Block Grant. Among the block grants eliminated since 1981 are the Partnership for Health, Community Youth Activity, Primary Care, Law Enforcement Assistance, and Criminal Justice Assistance Block Grants. Block Grants Today Today, a total of 15 block grants are in effect. These block grants and their fiscal year 1993 awards appear in table II.1. Compared with categorical grants, which number 578, there are far fewer block grants. As figure II.1 demonstrates, the largest increase in block grants occurred as a result of OBRA in 1981. Not all of the 1981 OBRA block grants were still in effect in 1990. Some, such as the Primary Care Block Grant, had been eliminated. Other block grants, such as the Child Care and Development Block Grant, were created between 1980 and 1990. Block grant awards totaled about $22 billion, compared with total federal grants of $206 billion; about $32 billion (estimated) was awarded for block grants in 1993.
Entries in table II.1 include the Federal Transit Capital and Operating Assistance Block Grant; the Prevention and Treatment of Substance Abuse Block Grant; JTPA, Title II-A: Training Services for Disadvantaged Adults and Youth; and Payments to States for Child Care Assistance (the Child Care and Development Block Grant). Characteristics of the 1981 Block Grants Under OBRA, the administration of numerous federal domestic assistance programs was substantially changed by consolidating more than 50 categorical grant programs into 9 block grants and shifting primary administrative responsibility for these programs to the states. Overall federal funding was reduced by 12 percent, or about $1 billion, but the change varied by block grant. The OBRA block grants carried significantly reduced federal funding, as well as lighter data collection and reporting requirements, compared with the previous categorical programs, although some minimal requirements were maintained to protect federal interests. OBRA Created Nine Block Grants Under OBRA of 1981, the administration of numerous federal domestic assistance programs was substantially changed by consolidating more than 50 categorical grant programs and 3 existing block grants into 9 block grants and shifting primary administrative responsibility for these programs to the states. However, 534 categorical programs were in effect the same year this legislation passed, meaning there continued to be many more categorical programs than were subsumed under the 1981 block grants. States were given flexibility under block grants to decide what specific services and programs to provide as long as they were directly related to the goals of the grant program. Four of the block grants were for health, three for social services, and one each for education and community development. Three existing block grants were among the 9 block grants created. As mentioned previously, these include Title XX, which was expanded into SSBG, and CDBG, for which states were given the responsibility of administering the Small Cities program.
In addition, the Health Incentives Grant for Comprehensive Public Health was incorporated into the Preventive Health and Health Services Block Grant (PHHS). In two cases (Primary Care and LIHEAP), a single categorical program was transformed into a block grant. The scope of block grants was much wider than the categorical grants that were consolidated to form them. For example, Chapter 2 of the Elementary and Secondary Education Act (the Education Block Grant) funded state and local activities to improve elementary and secondary education for children attending public and private schools. The 38 categorical programs consolidated into this Education Block Grant included, for example, several “Emergency School Aid Act” programs, “Civil Rights Technical Assistance and Training,” and “Ethnic Heritage Studies Program.” Some block grants were broader in scope than others. For example, the scope of LIHEAP—which covers assistance to eligible households in meeting the costs of home energy—was quite narrow, having essentially a single function. In contrast, the scope of the Community Services Block Grant (CSBG) was to support efforts to “ameliorate the causes of poverty,” including employment, education, housing, emergency assistance, and other services. Several block grants offered the flexibility to transfer funds to other block grants, providing states the option to widen their scope even further. For example, SSBG allowed a state to transfer up to 10 percent of its allotment to the four health-related block grants or LIHEAP. Such flexibility to transfer funds was offered in five of the block grants—SSBG, LIHEAP, ADMS, CSBG, and PHHS. Overall Federal Funding Reduced Overall federal funding for the block grants in 1982 was about 12 percent, or $1 billion, below the 1981 level for the categorical programs, as table III.1 shows.
However, changes in federal funding levels for the block grants varied by block grant—ranging from a $159 million, or 30-percent, reduction in the Community Services Block Grant, to a $94 million, or 10-percent, increase in CDBG. SSBG was reduced by the largest amount—$591 million, representing a 20-percent reduction. Table III.1 compares the 1981 funding levels of the categorical programs with the 1982 funding levels when these categorical programs were consolidated into block grants. Funding Requirements of 1981 Block Grants The funding requirements attached to the block grants were generally viewed by states as less onerous than under the displaced categorical programs. However, the federal government used funding requirements to (1) advance national objectives (for example, providing preventive health care or, more specifically, treating hypertension), (2) protect local providers who have historically played a role in the delivery of services, and (3) maintain state contributions. Mechanisms contained in the block grants that protected federal interests included (1) state matching requirements, (2) maintenance of effort or nonsupplant provisions, (3) set-asides, (4) pass-through requirements, and (5) cost ceilings. An illustration of each mechanism follows: State matching requirements were imposed to help maintain state program contributions. CDBG required that states provide matching funds equal to at least 10 percent of the block funds allocated. MCH required that each state match every four federal dollars with three state dollars. The Primary Care Block Grant required that states provide a 20-percent match of fiscal year 1983 funds and a 33-percent match of fiscal year 1984 funds. Many state governments chose not to, or were unable to, make the match for the Primary Care Block Grant, leading to the termination of this program in 1986.
A nonsupplant provision appeared in three block grants (Education, PHHS, and ADMS), which prohibited states from using federal block grant funds to supplant state and local government spending. The purpose of this provision was to maintain state involvement by preventing states from substituting federal for state funds. Set-asides require states and localities to use a specified minimum portion of their grant for a particular purpose. PHHS included set-asides requiring states, in fiscal year 1982, to devote at least 75 percent of their fiscal year 1981 funding to hypertension and to allocate, on the basis of state population, a total of at least $3 million each fiscal year for rape prevention. Under pass-through requirements, state or local governments must transfer a certain level of funds to subrecipients in order to protect local providers who have historically played a role in the delivery of services. CSBG required that states award not less than 90 percent of fiscal year 1982 funds to community action organizations or to programs or organizations serving seasonal or migrant workers. Cost ceilings require that states and localities spend no more than a specified maximum percentage of their grant for a particular purpose or group. LIHEAP included a cost ceiling of 15 percent of funds for residential “weatherization” or for other energy-related home repairs. Accountability Requirements of 1981 Block Grants The 1981 block grants carried with them significantly reduced federal data collection and reporting requirements compared with categorical programs. Under the categorical programs, states had to comply with specific procedures for each program, whereas with block grants there was a single set of procedures. Federal agencies were actually prohibited from imposing “burdensome” reporting requirements.
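The arithmetic behind these funding mechanisms can be made concrete with a short sketch. The rates below (MCH's three-state-dollars-per-four-federal-dollars match, CSBG's 90-percent pass-through floor, and LIHEAP's 15-percent weatherization ceiling) are those cited in the illustrations above; the dollar figures and function names are hypothetical and illustrative only.

```python
def required_state_match(federal_dollars, state_per_federal):
    """State dollars needed to draw down a federal award
    (e.g., MCH required 3 state dollars per 4 federal dollars)."""
    return federal_dollars * state_per_federal

def meets_pass_through(grant, passed_to_providers, floor=0.90):
    """CSBG-style floor: at least 90 percent of funds must reach
    local service providers."""
    return passed_to_providers >= floor * grant

def within_cost_ceiling(grant, category_spending, ceiling=0.15):
    """LIHEAP-style ceiling: no more than 15 percent of funds may go
    to weatherization or other energy-related home repairs."""
    return category_spending <= ceiling * grant

# A hypothetical $10 million award
print(required_state_match(10_000_000, 3 / 4))    # 7500000.0
print(meets_pass_through(10_000_000, 9_200_000))  # True  (92 percent passed through)
print(within_cost_ceiling(10_000_000, 2_000_000)) # False (20 percent exceeds ceiling)
```

Note how the three mechanisms differ in direction: a match is a cost imposed on the state, a pass-through is a floor on subrecipient funding, and a cost ceiling is a cap on a spending category.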
Consistent with the philosophy of minimal federal involvement, the administration decided to largely let the states interpret the compliance provisions in the statute. This meant states, for the most part, determined both form and content of block grant data collected and reported. However, some data collection and reporting requirements were contained in each of the block grants as a way to ensure a degree of federal oversight in the administration of block grants. From federal agencies, the block grants generally required (1) a report to the Congress on program activities, (2) program assessment data such as the number of clients served, or (3) compliance reviews of state program operations. For example, ADMS required the Department of Health and Human Services (HHS) to provide agency reports to the Congress on activities and recommendations; program assessments, which included data on clients, services, and funding; and annual compliance reviews in several states. From state agencies, the block grants generally required (1) grant applications, which included information on how the states planned to use federal funds, (2) program reports describing the actual use of federal funds, (3) fiscal expenditure reports providing a detailed picture of expenditures within certain cost categories, and (4) financial and compliance audits. For example, LIHEAP required states to provide annual descriptions of intended use of funds, annual data on numbers and incomes of households served, and annual audits. In addition to these reporting requirements, states were required to involve the public. Some block grants required states to solicit public comments on their plans or reports describing the intended use of funds. Some block grants also required that a public hearing be held on the proposed use and distribution of funds. The Education Block Grant required the state to establish an advisory committee.
Experience Operating Under the 1981 Block Grants Where states had operated programs, transition to block grants was smoother as states relied on existing management and service delivery systems. However, the transition to block grants was not as smooth for LIHEAP and CSBG because of limited prior state involvement or state funding of these programs. State officials generally reported administrative efficiencies in managing block grants as compared with categorical programs, although administrative cost savings were difficult to quantify. Although states experienced a 12-percent federal funding reduction when the 1981 block grants were created, they were able to offset these reductions for the first several years through a variety of approaches, such as carrying funding over from categorical grants. Several concerns have emerged over time. First, initial funding allocations were based on prior categorical grants in order to ease the transition to block grants. Such distributions, however, may be inequitable because they are not sensitive to populations in need, the relative cost of services in each state, or states’ ability to fund program costs. Second, although the Congress has taken steps to improve both data comparability and financial accountability, problems persist in terms of the kinds of information available for program managers to effectively oversee block grants. For example, consistent national information on program changes, services delivered, and clients served has not been available to the Congress because of the lack of standardization in block grant reporting. Third, state flexibility was reduced as funding constraints were added to block grants over time. This runs counter to an important goal of block grants, which is to increase state flexibility. Where States Had Operated Programs, Transition to Block Grants Was Smoother Prior program experience helped states manage the 1981 block grants. 
For the most part, states were able to rely on existing management and service delivery systems. Proceeding from their role under the prior categorical programs as well as their substantial financial commitment to certain program areas, states had a service delivery structure in place through which social services, health, and education programs were implemented. Decisions on the use of social services, health, and education block grant funds often reflected broader state goals and priorities for delivering related services. In some cases, states consolidated offices or took other steps to coordinate related programs, such as with the Education Block Grant, in which 5 of 13 states merged offices. For example, Florida’s categorical programs had been administered by several bureaus within the state’s education department. Under the block grant, all responsibilities were assigned to one bureau. The exceptions to this were LIHEAP and CSBG. The categorical programs that preceded these block grants were almost entirely federally funded. In the case of CSBG, service providers had dealt primarily with federal officials and had little contact with state administrators. With LIHEAP, planning processes were not well integrated with overall state planning processes. Officials in 11 of the 13 states we visited indicated that separate priorities were set for LIHEAP. With CSBG, not only was the planning process not well integrated, but states also had to develop new administrative structures. Five states had to assign management of CSBG to new or different offices or change the status of existing offices. States had to develop relationships with community action agencies, whose continued participation in the block grant-funded program was ensured by a 90-percent pass-through requirement. Taking advantage of the flexibility that block grants offered them, states began to put their own imprint on the use of funds.
Although some continuity in funding was evident, changes in funding patterns did emerge: Under MCH and PHHS, the states tended to provide greater support for services to children with disabilities and reduce support for lead-based paint poisoning prevention. Under SSBG, the states usually gave a higher priority to adult and child protective services and home-based services, among other services. By contrast, they often tightened eligibility standards for day care services. Given the increased availability of federal child care funding from sources other than the SSBG, states may decide to allocate fewer SSBG dollars to child care in the future. Under LIHEAP, most of the states increased funding for weatherization and crisis assistance while decreasing expenditures for heating assistance. More recently, we found that state actions differed significantly in response to a decrease in federal funding of $619 million under the block grant between fiscal years 1986 and 1989. Some states, for example, varied in the extent to which they offset federal funding cuts with other sources of funding. States’ imprint on their use of block grant funds was not evident with ADMS. This was in part due to funding constraints added by the Congress over time. States Reported Administrative Efficiencies State officials generally found federal requirements placed on them by the 1981 block grants less burdensome than those of the prior state-operated categorical programs. For example, state officials in Texas said that before PHHS, the state was required to submit 90 copies of 5 categorical grant applications. Moreover, states reported that reduced federal application and reporting requirements had a positive effect on their management of block grant programs. Also, some state agencies were able to make more productive use of their staffs as personnel devoted less time to federal administrative requirements and more time to state-level program activities. 
Although states realized considerable management efficiencies or improvements under the block grants, they also experienced increased grant management responsibilities through greater program discretion devolved from the federal government. It is not possible to measure the net effect of these competing forces on the level of states’ administrative costs. In addition, cost changes could not be quantified because of the lack of uniform state administrative cost definitions and data as well as a lack of comprehensive baseline data on prior categorical programs. States Offset Funding Reductions Through Variety of Mechanisms States took a variety of approaches to help offset the 12-percent overall federal funding reductions experienced when the categorical programs were consolidated into the 1981 block grants. For example, some states carried over funding from the prior categorical programs. This was possible because many prior categorical grants were project grants that extended into fiscal year 1982. In the 13 states we visited, at least 57 percent of the 1981 categorical awards preceding the three health block grants were available for expenditure in 1982—the first year of block grant implementation. By 1983, however, carryover funding had declined to 7 percent of total expenditures. Carryover funding was not available under SSBG or LIHEAP because the programs preceding them had been funded on a formula basis, and funds were generally expended during the same fiscal year in which they were awarded. States also offset federal funding reductions through transfers among block grants. The 13 states transferred about $125 million among the block grants in 1982 and 1983. About $112 million, or 90 percent, entailed moving funds from LIHEAP to SSBG. 
This trend was influenced by the fact that SSBG experienced the largest dollar reduction—about $591 million in 1982 alone—and did not benefit from overlapping categorical funding, while LIHEAP received increased federal appropriations. The transfer option was used infrequently between other block grants. States also used their own funds to help offset reduced federal funding, but only for certain block grants. In the vast majority of cases, the 13 states increased their contribution to health-related block grants or SSBG—areas of long-standing state involvement. Although such increases varied greatly from state to state, overall increases ranged from 9 percent in PHHS to 24 percent in MCH between 1981 and 1983. Overall, expenditures of state funds for programs supported with block grant moneys increased between 1981 and 1983 in 85 percent of the cases in which the states we visited had operated the health-related block grants and SSBG since their initial availability in 1982. Aside from the health-related block grants and SSBG, states did not make great use of their own revenues to offset reduced federal funds. Together, these approaches helped states replace much of the funding reductions during the first several years. Three-fourths of the cases we examined experienced increases in total program expenditures, although once adjusted for inflation this dropped to one-fourth of all cases. Increased appropriations in 1983 through 1985 and, for 1983 only, funds made available under the Emergency Jobs Appropriations Act also helped offset these reductions. Some block grants, however, did not do as well as others. For example, some states did not restore funding for CSBG, which may be due in part to the limited prior state involvement under the categorical program preceding the block grant.
Federal Funding Allocations Based on Prior Categorical Grants Initially, most federal funding to states was distributed on the basis of the state’s share of funds received under the prior categorical programs in fiscal year 1981. We found that such distributions may be inequitable because they are not sensitive to populations in need, the relative cost of services in each state, or states’ ability to fund program costs. With the exception of SSBG and CDBG, block grants included a requirement that the allocation of funds take into account what states received in previous years in order to ease the transition to block grants. For example, under ADMS, funds had to be distributed among the states for mental health programs in the same proportions as funds were distributed in fiscal year 1981. For alcohol and drug abuse programs, funds had to be distributed in the same proportions as in fiscal year 1980. Today, most block grants use formulas that more heavily weigh beneficiary population and other need-related factors. For example, CDBG uses a formula that reflects poverty, overcrowding, age of housing, and other measures of urban deterioration. The formula for JTPA considers unemployment levels and the number of economically disadvantaged persons in the state. This formula is also used to distribute funds to local service delivery areas. However, three block grants—CSBG, MCH, and PHHS—are still largely tied to 1981 allocations. Difficulties posed in developing funding formulas that allocate on the basis of need, relative cost of services, and ability to pay are illustrated here: Because of concern that funds were not distributed equitably under ADMS, the Congress mandated that HHS conduct a study of alternative formulas that considered need-related factors, and in 1982 the Secretary of HHS reported on several formula options that would more fairly distribute funds. 
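The mechanics of a need-based formula of the kind CDBG and JTPA use can be sketched in a few lines: a fixed appropriation is divided in proportion to a weighted composite of need indicators rather than to prior-year categorical shares. The state names, indicator values, weights, and appropriation below are entirely hypothetical.

```python
def allocate(appropriation, need_indicators, weights):
    """Divide a fixed appropriation among states in proportion to a
    weighted composite of need indicators (e.g., poverty counts and
    unemployment, as in the CDBG and JTPA formulas)."""
    scores = {
        state: sum(w * v for w, v in zip(weights, values))
        for state, values in need_indicators.items()
    }
    total = sum(scores.values())
    return {state: appropriation * s / total for state, s in scores.items()}

# Hypothetical indicators per state: (persons in poverty, unemployed persons)
indicators = {
    "State A": (200_000, 50_000),
    "State B": (100_000, 25_000),
}
shares = allocate(3_000_000, indicators, weights=(1.0, 2.0))
print(shares)  # State A, with twice State B's need index, receives twice the funds
```

The design question the report raises is precisely the choice of `need_indicators` and `weights`: whether the measures are accurate indicators of need and available in a timely fashion.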
Legislative amendments in 1988, for instance, introduced the use of new indicators of need: (1) the number of people in specific age groups as proxies for populations at risk for drug abuse, alcohol abuse, and mental health disorders and (2) state total taxable resources as a proxy for a state's capacity to fund program services from state resources. These amendments also called for phasing out the distribution of funds based on categorical grant distribution. We examined the formula in 1990, finding that the formula’s urban population factor overstates the magnitude of drug use in urban as compared with rural areas and that a provision that protects states from losing funding below their 1984 levels causes a mismatch between needs and actual funding. Under MCH, funds continue to be distributed primarily on the basis of funds received in fiscal year 1981 under the previous categorical programs. Only when funding exceeds the amount appropriated in fiscal year 1983 are additional funds allotted in proportion to the number of persons under age 18 who are in poverty. We found that economic and demographic changes are not adequately reflected in the current allocation, resulting in problems of equity. We developed a formula that improves equity for both beneficiaries and taxpayers and includes, for example, a measure of at-risk children. In keeping with the desire to maximize state flexibility, most block grant statutes did not prescribe how states should distribute funds to substate entities. Only the Education and the newer JTPA Block Grants prescribe how states should distribute funds to local service providers. For example, the Education Block Grant requires states to distribute funds to local educational authorities using a formula that considers relative enrollment and adjusts per pupil allocations upward to account for large enrollments of students whose education imposes a higher than average cost—generally students from high-risk groups.
Although this formula was prescribed, states were given the discretion to decide which factors to consider in determining who were high-cost students. Where the law did not prescribe such distribution, some states developed their own formulas. In a 1982 study, we identified nine states that developed formulas to distribute CSBG funds to local service providers based in part on poverty, leading to reductions in funding to many community action agencies compared with the funding these agencies received under the prior categorical programs. Mississippi developed a formula to distribute ADMS funds to community mental health centers based on factors such as population density and per capita income. Steps Taken to Improve Accountability, but Problems Persisted Block grants significantly reduced the reporting burden imposed by the federal government on states as compared to the previous categorical programs. However, states stepped in and assumed a greater role in oversight of programs, consistent with the block grant philosophy. The 13 states we visited generally reported that they were maintaining their prior level of effort for data collection under the categorical grants. States tailored their efforts to better meet their own planning, budgetary, and legislative needs. Given their new responsibilities, states sometimes passed on reporting requirements to local service providers. However, the Congress, which maintained interest in the use of federal funds, had limited information on program activities, services delivered, and clients served. This was because there were fewer federal reporting requirements, and states were given the flexibility to determine what and how to report program information. In addition, due to the lack of comparability of information across states, state-by-state comparisons were difficult. Federal evaluation efforts were hampered because of this diminished ability to assess the cumulative effects of block grants across the nation. 
In response to this situation, model criteria and standardized forms for some block grants were developed in 1984 to help states collect uniform data, primarily through voluntary cooperative efforts. We examined the data collection strategies of four block grants to assess the viability of this approach. Problems identified included the following: States reported little data on the characteristics of clients served under the Education Block Grant, and LIHEAP data on households receiving assistance to weatherize their homes were not always readily accessible to state cash assistance agencies. Because of the broad range of activities under CSBG and the Education Block Grant, it is highly likely that the same clients served by more than one activity were counted twice. In 1991, we examined reporting problems under ADMS. Because HHS did not specify what information states must provide, the Congress did not have information it needed to determine whether a set-aside for women’s services had been effective in addressing treatment needs of pregnant women and mothers with young children. In another 1991 report, we found state annual reports varied significantly in the information provided on drug treatment services, making comparisons or assessments of federally supported drug treatment services difficult. In addition, many states did not provide information in a uniform format when they applied for funds. Generally, the data were timely, and most officials in the six states we included in our review perceived the collection efforts to be less burdensome than reporting under categorical programs. However, the limitations in data comparability reduce the usefulness of the data to serve the needs of federal policymakers, such as allocating federal funds, determining the magnitude of needs among individual states, and comparing program effectiveness among states. 
Just as with data collection and reporting, the Congress became concerned about financial accountability in the federal financial assistance provided to state and local entities. With the 1984 Single Audit Act, the Congress promoted more uniform, entitywide audit coverage than was achieved under the previous grant-by-grant audit approach. The single audit process has contributed to improving the financial management practices of the state and local officials we interviewed. These officials reported that they, among other things, have improved systems for tracking federal funds, strengthened administrative controls over federal programs, and increased oversight of entities to which they distribute federal funds. Even though state and local financial management practices have been improved, a number of issues burden the single audit process, hinder the usefulness of its reports, and limit its impact, according to our 1994 report. Specifically, criteria for determining which entities and programs are to be audited are based solely on dollar amounts. This approach has the advantage of subjecting a high percentage of federal funds to audit, but it does not necessarily focus audit resources on the programs identified as being high risk. For example, even though the Office of Management and Budget (OMB) has identified Federal Transit Administration grants as being high risk, we found in our review of single audit reports that only a small percentage of the grants to transit authorities were required to be audited. The usefulness of single audit reports for program oversight is limited in several ways. Reports do not have to be issued until 13 months after the end of the audit period, which many federal and state program managers we surveyed found was too late to be useful. Audited entities’ managers are not required to report on the adequacy of their internal control structures, which would assist auditors in evaluating an entity’s management of its programs.
In addition, the results of the audits are not being summarized or compiled so that oversight officials and program managers can easily access and analyze them to gain programwide perspectives and identify leads for follow-on audit work or program oversight. State Flexibility Reduced Over Time as Funding Constraints Added Even though block grants were intended to provide flexibility to the states, over time constraints were added that had the effect of “recategorizing” them. These constraints often took the form of set-asides, which require that a minimum portion of funds be used for a specific purpose, and cost ceilings, which specify a maximum portion of funds that can be used for other purposes. This trend reduced state flexibility. Many of these restrictions were imposed as a result of congressional concern that states were not adequately meeting national needs. Between fiscal years 1983 and 1991, the Congress added new cost ceilings and set-asides to nine block grants or changed existing ones, 58 times in all, as figure IV.1 illustrates. Thirteen of these amendments added new cost ceilings or set-asides to 9 of the 11 block grants we reviewed. During this period, the portion of funds restricted under set-asides increased in three block grants—MCH, CDBG, and Education. For example, set-asides for MCH restricted 60 percent of total funding (30 percent for preventive and primary care services for children and 30 percent for children with special health care needs). However, during the same period the portion of restricted funds under two block grants—ADMS and PHHS—decreased. In addition, 5 of the 11 block grants we examined permitted states to obtain waivers from some cost ceilings or set-asides if the state could justify that this amount of funds was not needed for the purpose specified in the set-aside. Lessons Learned Three lessons can be drawn from the experience with the 1981 block grants.
These are the following: (1) The Congress needs to focus on accountability for results in its oversight of the block grants. The Government Performance and Results Act provides a framework for this and is consistent with the goal of block grants to provide flexibility to the states. (2) Funding formulas based on distributions under prior categorical programs may be inequitable because they do not reflect need, ability to pay, and variations in the cost of providing services. (3) States handled the 1981 block grants, but today’s challenges are likely to be greater. The programs being considered for inclusion in block grants not only are much larger but also are fundamentally different from those programs included in the 1981 block grants. The Congress Needs to Focus on Accountability for Results One of the principal goals of block grants is to shift responsibility for programs from the federal government to the states. This includes priority setting, program management, and, to a large extent, accountability. However, the Congress and federal agencies maintain an interest in the use and effectiveness of federal funds. Paradoxically, accountability is critical to preserving state flexibility. When adequate program information is lacking, as the 1981 block grant experience demonstrates, the Congress may become more prescriptive. For example, funding constraints were added that limited state flexibility and, in effect, “recategorized” some of the block grants. We have recommended a shift in the focus of federal management and accountability toward program results and outcomes, with correspondingly less emphasis on inputs and rigid adherence to rules. This focus on outcomes over inputs is particularly appropriate for block grants, given their emphasis on providing states flexibility in determining the specific problems to address and strategies for addressing them.
The flexibility block grants allow should be reflected in the kinds of national information collected by federal agencies. The Congress and federal agencies will need to decide the kinds and nature of information needed to assess program results. While the requirements in the Government Performance and Results Act of 1993 (GPRA) (P.L. 103-62) apply to all federal programs, they also offer an accountability framework for block grants. Consistent with the philosophy underlying block grants, GPRA seeks to shift the focus of federal management and accountability away from a preoccupation with inputs, such as budget and staffing levels, and adherence to rigid processes toward a greater focus on outcomes and results. By the turn of the century, annual reporting under this act is expected to fill key information gaps. Among other things, GPRA requires every agency to establish indicators of performance, set annual performance goals, and report on actual performance in comparison with these goals each March, beginning in the year 2000. Agencies are now developing strategic plans (to be submitted by September 30, 1997) articulating the agency’s mission, goals, and objectives preparatory to meeting these reporting requirements. Even though GPRA is intended to focus agencies on program results, much work lies ahead. Even in the case of JTPA, in which there has been an emphasis on program outcomes, we have found that most agencies do not collect information on participant outcomes, nor do they conduct studies of program effectiveness. At the same time, there is little evidence of greater reliance on block grants since the 1981 block grants were created. Categorical programs continue to grow, reaching almost 600 in fiscal year 1993. We have more recently reported on the problems created by the existence of numerous programs or funding streams in three program areas—youth development, employment and training, and early childhood.
Even though state and local financial management practices have improved with the Single Audit Act, a number of issues burden the single audit process, hinder the usefulness of its reports, and limit its impact. We have made recommendations to enhance the single audit process and to make it more useful for program oversight. We believe, however, that the Single Audit Act is an appropriate means of promoting financial accountability for block grants, particularly if our recommended improvements are implemented. Funding Formulas Should Reflect Need and Ability to Pay Even though block grants were created to give state governments more responsibility for the management of programs, the federal government will continue to face the challenge of distributing funds among the states and localities. Public debate is likely to focus on formulas, given that there will always be winners and losers. Formulas that better target funds share three characteristics: they include factors that consider (1) state or local need, (2) differences among states in the costs of providing services, and (3) state or local ability to contribute to program costs. To the extent possible, equitable formulas rely on current and accurate data that measure need and ability to contribute. We have reported on the need for better population data to better target funding to people who have a greater need for services. We have examined the formulas that govern the distribution of funds for MCH as well as for other social service programs, such as the Older Americans Act programs. In advising on revisions to MCH, we recommended that three factors be included in the formula: the concentration of at-risk children, to help determine the level of need; the effective tax rate, to reflect states’ ability to pay; and the costs of providing health services, including labor, office space, supplies, and drugs. We also suggested ways to phase in formulas to keep the disruption of services to a minimum.
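The three formula characteristics just described (need, cost of providing services, and ability to contribute) can be sketched as a simple allocation computation. This is only an illustrative sketch: the state names, weighting scheme, and figures below are invented and are not drawn from the actual MCH formula.

```python
# Hypothetical sketch of a need-based allocation formula of the kind
# discussed above: a state's share grows with its measure of need (e.g.,
# at-risk children) and its relative cost of providing services, and
# shrinks with its ability to contribute (e.g., taxable resources).
# All state data below are invented for illustration.

def allocate(total_funds, states):
    """Distribute total_funds in proportion to need * cost_index / capacity."""
    weights = {
        name: d["need"] * d["cost_index"] / d["capacity"]
        for name, d in states.items()
    }
    total_weight = sum(weights.values())
    return {name: total_funds * w / total_weight for name, w in weights.items()}

states = {
    # need = at-risk children (thousands); cost_index = relative service cost;
    # capacity = taxable resources per capita (indexed to 1.0)
    "State A": {"need": 120, "cost_index": 1.10, "capacity": 1.2},
    "State B": {"need": 200, "cost_index": 0.95, "capacity": 0.8},
    "State C": {"need": 80, "cost_index": 1.00, "capacity": 1.0},
}

shares = allocate(100_000_000, states)
for name, amount in shares.items():
    print(f"{name}: ${amount:,.0f}")
```

Under these invented figures, State B receives the largest share because its high need and low fiscal capacity outweigh its below-average service costs; a phase-in of the kind we suggested would blend such shares with prior-year allocations to limit disruption.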
States Handled the 1981 Block Grants; Today’s Challenges Likely Greater During the buildup of the federal grant programs, the federal government viewed state and local governments as convenient administrative units for advancing federal objectives. State and local governments were seen as lacking the policy commitment and the administrative and financial capacity to address the domestic agenda. During the 1970s, the opposition to using state and local governments as mere administrative units grew, culminating in the Reagan administration’s New Federalism policy, which focused on shifting leadership of the domestic policy agenda away from the federal government and toward states. By cutting the direct federal-to-local linkages, this policy also encouraged local governments to strengthen their relationships with their respective states. States as a whole have become more capable of responding to public service demands and initiating innovations during the 1990s. Many factors account for strengthened state government. Beginning in the 1960s and 1970s, states modernized their government structures, hired more highly trained individuals, improved their financial management practices, and diversified their revenue systems. State and local governments have also taken on an increasing share of the responsibility for financing the country’s domestic expenditures. Changing priorities, tax cuts, and mounting deficits drove federal policymakers to cut budget and tax subsidies to both states and localities. These cuts fell more heavily on localities, however, because the federal government placed substantial importance on “safety net” programs in health and welfare that help the poor, which generally are supported by federal-state partnerships. In contrast, the federal government placed less importance on other “nonsafety net” programs such as infrastructure and economic development, which generally are federal-local partnerships. 
Growth in spending by state governments also reflects rising health care costs as well as officials’ choices favoring new or expanded services and programs. As figure V.1 illustrates, state and local governments’ expenditures have increased more rapidly, while federal grants-in-aid represent a smaller proportion of the total state and local expenditure burden. Between 1978 and 1993, state and local outlays increased dramatically, from $493 billion to $884 billion in constant 1987 dollars. With their growing fiscal responsibilities, states have reevaluated their spending priorities and undertaken actions to control program growth, cut some services, and increase revenues—by raising taxes and imposing user fees, for example. The continued use of these state budget practices, combined with a growing economy, has improved the overall financial condition of state governments. Many factors contribute to state fiscal conditions, not the least of which are economic recessions, since most states do not possess the power to deficit spend. In addition, state officials have expressed concern about unfunded mandates imposed by the federal government. Practices such as “off-budget” transactions could obscure the long-term impact of program costs in some states. In addition, while states’ financial position has improved on the whole, the fiscal gap between wealthier and poorer states and localities remains significant, in part due to federal budget cuts. We reported in 1993 that southeastern and southwestern states, because of greater poverty rates and smaller taxable resources, generally were among the weakest states in terms of fiscal capacity. New block grant proposals include programs that are much more expansive than the block grants created in 1981 and could present a greater challenge for the states to both implement and finance, particularly if such proposals are accompanied by federal funding cuts.
Nearly 100 programs in five areas—cash welfare, child welfare and abuse programs, child care, food and nutrition, and social services—could be combined, accounting for more than $75 billion of a total of about $200 billion in federal grants to state and local governments. By comparison, the categorical programs that were replaced by the OBRA block grants accounted for only about $6.5 billion of the $95 billion in 1981 outlays. In addition, these block grant proposals include programs that are fundamentally different from those included in the 1981 block grants. For example, Aid to Families with Dependent Children provides direct cash assistance to individuals. Given that states tend to cut services and raise taxes during economic downturns to comply with balanced budget requirements, these cash assistance programs could experience funding reductions, which could affect vulnerable populations at the same time their numbers are likely to increase. In addition, some experts suggest that states have not always maintained state funding for cash assistance programs in times of fiscal strain. Selected Bibliography of GAO Reports and Other Studies on Block Grants The following bibliography lists selected GAO reports on block grants created by the Omnibus Budget Reconciliation Act of 1981 and subsequent reports pertaining to implementation of block grant programs. In addition, the bibliography includes studies published by other acknowledged experts in intergovernmental relations.

GAO Reports on Overall Block Grant Implementation
Block Grants: Increases in Set-Asides and Cost Ceilings Since 1982 (GAO/HRD-92-58FS, July 27, 1992).
Block Grants: Federal-State Cooperation in Developing National Data Collection Strategies (GAO/HRD-89-2, Nov. 29, 1988).
Block Grants: Federal Data Collection Provisions (GAO/HRD-87-59FS, Feb. 24, 1987).
Block Grants: Overview of Experiences to Date and Emerging Issues (GAO/HRD-85-46, Apr. 3, 1985).
State Rather Than Federal Policies Provided the Framework for Managing Block Grants (GAO/HRD-85-36, Mar. 15, 1985).
Block Grants Brought Funding Changes and Adjustments to Program Priorities (GAO/HRD-85-33, Feb. 11, 1985).
Public Involvement in Block Grant Decisions: Multiple Opportunities Provided But Interest Groups Have Mixed Reactions to State Efforts (GAO/HRD-85-20, Dec. 28, 1984).
Federal Agencies’ Block Grant Civil Rights Enforcement Efforts: A Status Report (GAO/HRD-84-82, Sept. 28, 1984).
A Summary and Comparison of the Legislative Provisions of the Block Grants Created by the 1981 Omnibus Budget Reconciliation Act (GAO/IPE-83-2, Dec. 30, 1982).
Lessons Learned From Past Block Grants: Implications For Congressional Oversight (GAO/IPE-82-8, Sept. 23, 1982).
Early Observations on Block Grant Implementation (GAO/GGD-82-79, Aug. 24, 1982).
Allocation of Funds for Block Grants With Optional Transition Periods (GAO/HRD-82-65, Mar. 26, 1982).

GAO Reports on Selected Block Grants

Maternal and Child Health Services
Maternal and Child Health: Block Grant Funds Should Be Distributed More Equitably (GAO/HRD-92-5, Apr. 2, 1992).
Maternal and Child Health Block Grant: Program Changes Emerging Under State Administration (GAO/HRD-84-35, May 7, 1984).

Preventive Health and Health Services
States Use Added Flexibility Offered by the Preventive Health and Health Services Block Grant (GAO/HRD-84-41, May 8, 1984).

Social Services
States Use Several Strategies to Cope With Funding Reductions Under Social Services Block Grant (GAO/HRD-84-68, Aug. 9, 1984).

Low-Income Home Energy Assistance Program
Low-Income Home Energy Assistance: States Cushioned Funding Cuts But Also Scaled Back Program Benefits (GAO/HRD-91-13, Jan. 24, 1991).
Low-Income Home Energy Assistance: A Program Overview (GAO/HRD-91-1BR, Oct. 23, 1990).
Low-Income Home Energy Assistance: Legislative Changes Could Result in Better Program Management (GAO/HRD-90-165, Sept. 7, 1990).
States Fund an Expanded Range of Activities Under Low-Income Home Energy Assistance Block Grant (GAO/HRD-84-64, June 27, 1984).

Alcohol, Drug Abuse, and Mental Health Services
Drug Use Among Youth: No Simple Answers to Guide Prevention (GAO/HRD-94-24, Dec. 29, 1993).
ADMS Block Grant: Drug Treatment Services Could Be Improved by New Accountability Program (GAO/HRD-92-27, Oct. 17, 1991).
ADMS Block Grant: Women’s Set-Aside Does Not Assure Drug Treatment for Pregnant Women (GAO/HRD-91-80, May 6, 1991).
Drug Treatment: Targeting Aid to States Using Urban Population as Indicator of Drug Use (GAO/HRD-91-17, Nov. 27, 1990).
Block Grants: Federal Set-Asides for Substance Abuse and Mental Health Services (GAO/HRD-88-17, Oct. 14, 1987).
States Have Made Few Changes in Implementing the Alcohol, Drug Abuse, and Mental Health Services Block Grant (GAO/HRD-84-52, June 6, 1984).

Community Services
Community Services: Block Grant Helps Address Local Social Service Needs (GAO/HRD-86-91, May 7, 1986).
Community Services Block Grant: New State Role Brings Program and Administrative Changes (GAO/HRD-84-76, Sept. 28, 1984).

Elementary and Secondary Education (Chapter 2)
Education Block Grant: How Funds Reserved for State Efforts in California and Washington Are Used (GAO/HRD-86-94, May 13, 1986).
Education Block Grant Alters State Role and Provides Greater Local Discretion (GAO/HRD-85-18, Nov. 19, 1984).

Job Training Partnership Act
Multiple Employment Training Programs: Major Overhaul Needed to Create a More Efficient, Customer-Driven System (GAO/T-HEHS-95-70, Feb. 6, 1995).
Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency (GAO/HEHS-94-193, July 11, 1994).
Multiple Employment Training Programs: Most Federal Agencies Do Not Know If Their Programs Are Working Effectively (GAO/HEHS-94-88, Mar. 2, 1994).
Job Training Partnership Act: Racial and Gender Disparities in Services (GAO/HRD-91-148, Sept. 20, 1991).
Job Training Partnership Act: Inadequate Oversight Leaves Program Vulnerable to Waste, Abuse, and Mismanagement (GAO/HRD-91-97, July 30, 1991).
Job Training Partnership Act: Services and Outcomes for Participants With Differing Needs (GAO/HRD-89-52, June 9, 1989).
Job Training Partnership Act: Summer Youth Programs Increase Emphasis on Education (GAO/HRD-87-101BR, June 30, 1987).
Dislocated Workers: Exemplary Local Projects Under the Job Training Partnership Act (GAO/HRD-87-70BR, Apr. 8, 1987).
Dislocated Workers: Local Programs and Outcomes Under the Job Training Partnership Act (GAO/HRD-87-41, Mar. 5, 1987).
Job Training Partnership Act: Data Collection Efforts and Needs (GAO/HRD-86-69BR, Mar. 31, 1986).
The Job Training Partnership Act: An Analysis of Support Cost Limits and Participant Characteristics (GAO/HRD-86-16, Nov. 6, 1985).
Job Training Partnership Act: Initial Implementation of Program for Disadvantaged Youth and Adults (GAO/HRD-85-4, Mar. 4, 1985).

Transportation
Transportation Infrastructure: Highway Program Consolidation (GAO/RCED-91-198, Aug. 16, 1991).
Transportation Infrastructure: States Benefit From Block Grant Flexibility (GAO/RCED-90-126, June 8, 1990).
20 Years of Federal Mass Transit Assistance: How Has Mass Transit Changed? (GAO/RCED-85-61, Sept. 18, 1985).
Urban Mass Transportation Administration’s New Formula Grant Program: Operating Flexibility and Process Simplification (GAO/RCED-85-79, July 15, 1985).
UMTA Needs Better Assurance That Grantees Comply With Selected Federal Requirements (GAO/RCED-85-26, Feb. 19, 1985).

Community Development
Community Development: Comprehensive Approaches Address Multiple Needs But Are Challenging to Implement (GAO/RCED/HEHS-95-69, Feb. 8, 1995).
Community Development: Block Grant Economic Development Activities Reflect Local Priorities (GAO/RCED-94-108, Feb. 17, 1994).
Community Development: Oversight of Block Grant Monitoring Needs Improvement (GAO/RCED-91-23, Jan. 30, 1991).
States Are Making Good Progress in Implementing the Small Cities Community Development Block Grant Program (GAO/RCED-83-186, Sept. 8, 1983).
Rental Rehabilitation With Limited Federal Involvement: Who Is Doing It? At What Cost? Who Benefits? (GAO/RCED-83-148, July 11, 1983).
Block Grants for Housing: A Study of Local Experiences and Attitudes (GAO/RCED-83-21 and GAO/RCED-83-21A, Dec. 13, 1982).
HUD Needs to Better Determine Extent of Community Block Grants’ Lower Income Benefits (GAO/RCED-83-15, Nov. 3, 1982).
The Community Development Block Grant Program Can Be More Effective in Revitalizing the Nation’s Cities (GAO/RCED-81-76, Apr. 30, 1981).

Other Related GAO Reports
Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).
Multiple Youth Programs (GAO/HEHS-95-60R, Jan. 19, 1995).
Early Childhood Programs: Multiple Programs and Overlapping Target Groups (GAO/HEHS-94-4FS, Oct. 31, 1994).
Single Audit: Refinements Can Improve Usefulness (GAO/AIMD-94-133, June 21, 1994).
Federal Aid: Revising Poverty Statistics Affects Fairness of Allocation Formulas (GAO/HEHS-94-165, May 20, 1994).
Older Americans Act: Funding Formula Could Better Reflect State Needs (GAO/HEHS-94-41, May 12, 1994).
Improving Government: Actions Needed to Sustain and Enhance Management Reforms (GAO/T-OCG-94-1, Jan. 27, 1994).
State and Local Finances: Some Jurisdictions Confronted by Short- and Long-Term Problems (GAO/HRD-94-1, Oct. 6, 1993).
Improving Government: Measuring Performance and Acting on Proposals for Change (GAO/T-GGD-93-14, Mar. 23, 1993).
Intergovernmental Relations: Changing Patterns in State-Local Finances (GAO/HRD-92-87FS, Mar. 31, 1992).
Federal Formula Programs: Outdated Population Data Used to Allocate Most Funds (GAO/HRD-90-145, Sept. 27, 1990).
Federal-State-Local Relations: Trends of the Past Decade and Emerging Issues (GAO/HRD-90-34, Mar. 22, 1990).

Other Studies Related to Block Grants
Liner, E. Blaine, ed.
A Decade of Devolution: Perspectives on State-Local Relations. Washington, D.C.: The Urban Institute Press, 1989.
Nathan, Richard P., and Fred C. Doolittle. The Consequences of Cuts: The Effects of the Reagan Domestic Program on State and Local Governments. Princeton, N.J.: Princeton Urban and Regional Research Center, 1983.
Nathan, Richard P., and Fred C. Doolittle. Reagan and the States. Princeton, N.J.: Princeton University Press, 1987.
National Governors’ Association and the National Association of State Budget Officers. The Fiscal Survey of the States. Washington, D.C.: 1994.
Palmer, John L., and Isabel V. Sawhill, eds. The Reagan Experiment. Washington, D.C.: The Urban Institute Press, 1982.
Peterson, George E., et al. The Reagan Block Grants: What Have We Learned? Washington, D.C.: The Urban Institute Press, 1986.
Peterson, Paul E., Barry G. Rabe, and Kenneth K. Wong. When Federalism Works. Washington, D.C.: The Brookings Institution, 1986.
U.S. Advisory Commission on Intergovernmental Relations. Significant Features of Fiscal Federalism. Washington, D.C.: 1994.

Major Contributors to This Report
Sigurd R. Nilsen, Assistant Director, (202) 512-7003
Jacquelyn B. Werth, Evaluator-in-Charge, (202) 512-7070
Mark Eaton Ward, Senior Evaluator
Joel Marus, Evaluator
David D. Bellis, Senior Evaluator
John Vocino, Senior Evaluator
GAO provided information on federal block grant programs, focusing on: (1) states' experiences operating block grants; and (2) lessons learned that could be useful to Congress as it considers new block grants. GAO found that: (1) 15 block grants with funding of $32 billion constituted a small portion of the total federal aid to states in fiscal year 1993; (2) in 1981, Congress created 9 block grants from about 50 categorical programs to broaden program flexibility among states; (3) the states' transition to block grants was generally smooth, since the states had existing management and delivery systems for most programs, but they had difficulties in two areas because those categorical programs were entirely federally funded or directed; (4) states reported administrative efficiencies with block grants, but documenting the cost savings was difficult; (5) although the states experienced a 12-percent funding reduction under the block grants, they used various approaches, such as using carry-over funds and additional state revenues, to help them offset the funding reductions; (6) problems with the 1981 block grants included inequitable initial state allocations, the lack of useful information for Congress and program managers to effectively oversee the grants, and reduced state flexibility due to Congress recategorizing some grants; (7) lessons learned from the 1981 experience should focus on accountability for results and equitable funding allocations based on state need, ability to pay, and cost of services; and (8) states could encounter greater transition difficulties with the larger, more complex programs being considered for inclusion in the new block grants.
Background ATP, which began funding projects in fiscal year 1990, was intended to fund high-risk research and development (R&D) projects with broad commercial and societal benefits that would not be undertaken by a single company or group of companies, either because the risk was too high or because the economic benefits of success would not accrue to the investors. ATP is viewed as a mechanism for fostering investment in areas in which societal returns would exceed private returns. ATP has addressed other opportunities to achieve broader societal goals, such as small business participation, as well as the establishment of joint ventures for high-risk technologies that would be difficult for any one company to justify because, for example, the benefits spread across the industry as a whole. Thus, ATP is seen by some as a means of addressing market failure in research areas that would otherwise not be funded, thereby facilitating the economic growth that comes from the commercialization and use of new technologies in the private sector. Advocates of the program believe that the government should serve as a catalyst for companies to cooperate and undertake important new work that would not have been possible in the same time period without federal participation. Critics of the program view ATP as industrial policy, or the means by which government rather than the marketplace picks winners and losers. ATP provides funding through cooperative agreements—a type of financial assistance in which the federal government is substantially involved in project management. ATP offers these agreements through announced annual competitions. It provides multiyear funding to single companies and to industry-led joint ventures. The proposal review and selection process is a multistep process based on NIST regulations. In general, these steps include a preliminary screening, technical and business reviews, semifinalist identification, oral reviews, ranking, and final selection. 
At the beginning of each round of ATP competitions, NIST establishes Source Evaluation Boards (SEBs) to ensure that all proposals receive careful consideration. Each SEB is composed of NIST technical experts as well as outside specialists with backgrounds in business and economics. ATP supplements the SEBs with outside technical reviewers, generally federal government experts in the specific industry of the proposal. Independent business experts are also hired on a consulting basis, including high-tech venture capitalists, people who teach strategic business planning, retired corporate executives from large and small high-tech businesses, as well as economists and business development specialists. All SEB members and outside reviewers must sign nondisclosure statements, agree to protect proprietary information, and certify that they have no conflicts of interest. As part of the proposal evaluation process, ATP uses the external reviewers to assess the technical and business merit of the proposed research. Each proposal is sponsored by both technical and business SEB members, whose roles include identifying reviewers, summarizing evaluative comments, and making recommendations to the SEB. The SEB evaluates the proposals, selects the semifinalists, conducts oral interviews with semifinalists, and ranks the semifinalists. A source selecting official makes the final award decisions based on the ranked list of proposals from the SEB. The three projects that we reviewed received funding through the ATP competitions announced in 1990 and 1992. In those years, the selection criteria included scientific and technical merit, potential broad-based benefits, technology transfer benefits, the proposing organization’s commitment level and organizational structure, and the qualifications and experience of the proposing organization’s staff. Each of the five selection criteria was weighted at 20 percent.
Today, these same selection criteria are used but are grouped into two categories, each weighted at 50 percent. The “Scientific and Technical Merit” category addresses a variety of issues related to the technical plan and the relevant experience of the proposing organization. The second category, “Potential for Broad-Based Economic Benefits,” addresses the means to achieving an economic benefit and commercialization plans, as well as issues related to the proposer’s level of commitment, organizational structure, and management plan. Technical and business reviewers complete documentation, referred to as technical and business evaluation worksheets, that address various aspects of these criteria. Three ATP Projects Addressed Similar Research Goals to Projects in the Private Sector The three completed projects that we reviewed addressed research goals that were similar to goals the private sector was addressing at about the same time. Each of the three projects was from a different sector of technology—computers, electronics, and biotechnology. The projects include (1) an on-line handwriting recognition system for computer input, (2) a system to increase the capacity of existing fiber optic cables for the telecommunications industry, and (3) a process for turning collagen into fibers for human prostheses use. ATP Project on Handwriting Recognition Both the ATP project and several private sector projects had a similar research goal of developing an on-line system to recognize natural or cursive handwritten data without the use of a keyboard. This technology would make computers more useful where keyboard use is limited by physical problems or in situations where using a keyboard is not practical. On-line handwriting recognition means that the system recognizes handwritten data while the user writes. 
The primary technical problem in handwriting recognition is that writing styles vary greatly from person to person, depending upon whether the user is in a hurry, fatigued, or subject to a variety of other factors. While the technology for obtaining recognition of constrained careful writing or block print writing was commercially available, systems for cursive writing recognition were not commercially available because of the greater handwriting variability that was encountered. The ATP project we reviewed sought to develop an on-line natural handwriting recognition system that was user-independent and able to translate natural or cursive handwriting. Communication Intelligence Corporation (CIC) was the award recipient. CIC used its ATP funding of $1.2 million from 1991 to 1993 to build its own algorithms and models for developing its handwriting recognition system. During the project, CIC created a database that includes thousands of cursive handwriting samples and developed new recognition algorithms. Some of this technology has been incorporated into a registered software product that has the ability to recognize cursive writing in limited circumstances. According to the experts we interviewed, as well as literature and patent searches, several companies were attempting to achieve a similar goal of handwriting recognition through their research around the same time that the ATP project received funding. Some of the key players in the private sector conducting research on cursive handwriting recognition included Paragraph International (in collaboration with Apple Computer) and Lexicus (which later became a division of Motorola). For example, Apple licensed a cursive handwriting recognition system from a Soviet company, Paragraph International, according to articles published in computer magazines in October 1991. According to these sources, this technology provided Apple with a foundation for recognizing printed, cursive, or block handwritten text. 
Another indication of research with a similar goal appeared in the October 1990 edition of PC Week, which reported that “handwriting recognition is an emerging technology that promises increased productivity both for current microcomputer owners and for a new breed of users armed with hand-held ‘pen-based’ computers.” Similarly, a technical journal article indicated that interest in on-line handwriting recognition, a field dating from the 1960s, was renewed in the 1980s because of more accurate electronic tablets, more compact and powerful computers, and better recognition algorithms. Moreover, according to the U.S. Patent and Trademark Office’s (PTO) database, over 450 patents were issued on handwriting recognition software, concepts, and related products from 1985 through 1999, indicating that research with a similar goal was being conducted around the time of the ATP project. Because many years can elapse between the time research takes place and the time an outcome is realized, this time period for the patent search allowed us to determine whether similar research was ongoing at the time of the ATP project. The patents themselves were issued sometime after the research was conducted. As we noted in a prior report, the time from the filing of a patent application until a patent is issued, or the application is abandoned, ranged from 19.8 to 21 months, adding further time between when the research was done and when a patent appeared. ATP Project on Capacity Expansion of Fiber Optic Cables Another ATP project we reviewed, which proposed to develop a system to increase the capacity of existing fiber optic cables for the telecommunications industry, also had a similar goal to that of research in the private sector.
At the same time, firms in the private sector were attempting to increase the number of light signals that can be transmitted through a single strand of fiber optic cable using a technology called wavelength division multiplexing (WDM). In the 1980s, telephone companies laid fiber optic cables across the United States and other countries to create an information system that could carry significantly more data than the copper wires they replaced. Tremendous increases in cable traffic, primarily from the Internet, have crowded these cables. WDM technology was aimed at providing a cost-effective alternative to the expensive option of installing additional fiber optic cables. Accuwave Corporation (Accuwave) was the ATP award recipient. Accuwave used its ATP funding of approximately $2 million from March 1993 through March 1995 to develop a wavelength division multiplexing system that would substantially increase the number of signals that could be transmitted through a single optical fiber strand, using the concept of volume holography. Volume holography uses holograms to direct multiple light signals simultaneously through a single fiber strand. Accuwave made technical progress but not enough to fully develop and market a successful WDM system for the telecommunications market. In 1996, a competitor beat Accuwave to the market. After the completion of the ATP project, Accuwave filed for bankruptcy protection due to its inability to successfully commercialize a wavelength division multiplexing system. Other private firms were involved in research with a similar goal of increasing the capacity of fiber optic cable at about the same time as Accuwave was conducting its research. Conceptual research on such systems dates back to the early 1980s, but development and commercialization did not flourish until the mid- to late-1990s.
Bell Labs (now Lucent Technologies), Nortel Networks, and Ciena Corporation, among others, were considered some of the major competitors in the industry. In the early 1990s, these firms were attempting to develop WDM technology using different methods and materials. For example, Ciena Corporation developed a system that incorporated fiber-Bragg gratings, which are filters embedded directly onto fiber optic cable that help to separate multiple light signals through a single fiber strand. We also found an indication of WDM-related research through a review of issued patents. According to PTO’s database, over 2,000 patents were issued related to wavelength division multiplexing components, systems, and concepts from 1985 through 1999. The patents issued ranged from 10 patents in 1985 to 493 in 1999. ATP Project on Regenerating Tissues and Organs Both the ATP project and private sector projects we identified in the tissue engineering field had similar broad research goals of developing biological equivalents for defective tissues and organs utilizing diverse technical approaches. ATP’s project proposed procedures for extracting, storing, spinning, and weaving collagen (the main constituent of connective tissue and bones) into fibers suitable for human prostheses that could induce the body’s cells to regenerate lost tissue. Tissue Engineering, Inc., received ATP’s award of about $2 million for use over the years 1993 through 1996. The company’s long-term and yet unrealized goal is to transplant these prostheses into humans, after which the collagen framework, or scaffold, would induce the growth and function of normal body cells within it, eventually remodeling lost human tissue and replacing the scaffold. Within the very innovative field of tissue engineering, however, many competitors were attempting to achieve similar broad research goals. 
Organogenesis, the Collagen Corporation, Integra LifeSciences, Advanced Tissue Sciences, Genzyme Tissue, Osiris Therapeutics, Matrix Pharmaceuticals, and ReGen Biologics are key players in the market to develop structures that could replace or regenerate cells, tissues, and organs such as skin, teeth, orthopedic structures, cartilage, and valves. A number of these companies have subsequently received ATP awards. In addition, universities and medical schools have researchers investigating the many possibilities to engineer human tissues, and eventually complex organs, such as the liver, pancreas, and heart. According to one expert, there is a great deal of competition within the field of tissue engineering. Although Tissue Engineering, Inc.’s research focused on the use of collagen as the basis for these structures, other companies were pursuing a variety of technical approaches for addressing the goal of developing biological equivalents for defective tissues and organs. In addition to research in collagen, other companies and researchers have also been attempting to create human tissues and organs from other biological materials, synthetics, and hybrid products, which are both biologic and synthetic. For example, researchers from the Massachusetts Institute of Technology (MIT) developed an artificial skin product using collagen and a natural polymer. Several companies have since developed comparable products. In 1986, researchers from MIT and a hospital in Massachusetts began inserting cells into scaffolds created of biodegradable polymer. As the cells multiply, tissues form. The magazine BusinessWeek described this as “an elegantly simple concept that underlies most engineered tissue.” Two competitors, Integra LifeSciences and Organogenesis, reported that they were also doing work on the use of collagen in various applications. Although their technical approaches were different from the ATP project’s, the broad research goals were similar.
In addition to our discussions with experts and literature searches, patent research shows that there was activity related to the field of tissue engineering prior to and during the ATP project. According to a search done on the PTO website, at least 370 patents were issued related to cell culturing, scaffolding or matrix development, and tissue engineering from 1985 through 1999. Experts have also indicated that there are several patents related to the field, with a considerable amount of overlap in the technologies described in those patents. ATP’s Award Selection Process Is Unlikely to Avoid Funding Similar Research Two factors in ATP’s award selection process could result in ATP’s funding research similar to research that the private sector would fund in the same time period. These two factors are inherent in the review process and limit the information the reviewers have on similar private sector research efforts. Due to conflict-of-interest concerns, technical reviewers are precluded from being directly involved with the proposed research, making them less likely to know about all the research in an area. Also, the information available about private sector research is limited because of the private sector practice of not disclosing research results. Until a patent is issued, a private sector firm generally publishes very few details about the research to protect proprietary information. Therefore, it is difficult for the reviewers to identify other cutting-edge research. ATP’s Conflict-of-Interest Provision Limits Its Ability to Identify Similar Research ATP selection officials rely on outside technical reviewers to evaluate a proposal’s scientific and technical merit. All reviewers must certify that they have no conflicts of interest. 
To minimize possible conflicts of interest, the technical reviewers are generally federal government employees who are experts in the specific technology of the research proposal but are not directly involved with the proposed research area. Although this approach helps to guard against conflict of interest, it has inherent limitations on the program’s ability to identify similar research efforts. The technical reviewers rely on their own knowledge of research underway in the private sector. One of the technical reviewers we interviewed said that he did not personally know of other companies that were doing similar work. However, he believed that it was unlikely that there were not dozens of others working on the same issue. Proprietary Information Limits ATP’s Ability to Identify Similar Research ATP reviewers are significantly limited in their ability to identify similar research efforts by an inherent lack of information on private sector research. Although ATP officials use several sources, such as colleagues, conferences and symposia, and current technical literature, to try to identify research efforts conducted by the private sector and the federal government, this information is often proprietary. Most of the private sector and university experts we consulted agreed that it can be very difficult to identify the specific research that private sector firms are conducting, especially considering the competitive nature of most industries. The early release of information on a company’s research could be costly to the firm. If a competing firm could determine the nature and progress of another company’s research, it could help the competitor to develop and commercialize an identical or higher-quality product before the other firm. At the very least, the early release of research information by a firm can give competitors an idea as to the focus of the firm’s strategic plan. 
Thus, many firms are very careful about releasing detailed information related to research and development activities they are conducting. In conclusion, Mr. Chairman, the process ATP follows to select projects for funding is limited in its ability to identify similar research efforts in the private sector. Our retrospective look at the three ATP research projects showed that their goals were similar to research goals already being funded by the private sector. Examining the process that ATP uses to select projects, we found two inherent factors—the need to guard against conflicts of interest and the need to protect proprietary information—that limit ATP’s ability to identify similar research efforts in the private sector. These two factors have not changed since the beginning of the program. We recognize the valid need to guard against conflicts of interest and to protect proprietary information; thus, we did not recommend any changes to the award selection process. However, we believe that it may be impossible for the program to ensure that it is consistently not funding existing or planned research that would be conducted in the same time period in the absence of ATP financial assistance. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. Contacts and Acknowledgements For further information about this testimony, please contact Robin M. Nazzaro at 202-512-6246. Diane Raynes, Carol Herrnstadt Shulman, and Jessica Evans made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Advanced Technology Program (ATP) supports research that accelerates the development of high-risk technologies with the potential for broad-based economic benefits for the nation. Under the program, administrators at the National Institute of Standards and Technology are to ensure that they do not fund research that would be conducted in the same period without ATP funding. Between 1990 and September 2004, ATP funded 768 projects at a cost of about $2.3 billion. There is a continuing debate over whether the private sector has sufficient incentives to undertake research on high-risk, high-payoff emerging technologies without government support, such as ATP. This testimony discusses the results of GAO's April 2000 report, Advanced Technology Program: Inherent Factors in the Selection Process Could Limit Identification of Similar Research (GAO/RCED-00-114) and provides updated information. GAO determined (1) whether ATP had funded projects with research goals that were similar to projects funded by the private sector and (2) if ATP did, whether its award selection process ensures that such research would not be funded in the future. The three completed ATP-funded projects GAO reviewed, which were approved for funding in 1990 and 1992, addressed research goals that were similar to those already funded by the private sector. GAO chose these 3 projects from among the first 38 completed projects, each representing a different technology sector: computers, electronics, and biotechnology. These three technology sectors represent 26 of the 38 completed ATP projects, or 68 percent. The projects included an on-line handwriting recognition system, a system to increase the capacity of existing fiber optic cables for the telecommunications industry, and a process for turning collagen into fibers for human prostheses use. 
In the case of the handwriting recognition project, ATP provided $1.2 million to develop a system to recognize cursive handwriting for pen-based (i.e., without a keyboard) computer input. GAO identified several private firms that were conducting similar research on handwriting recognition at approximately the same time the ATP project was funded. In fact, this line of research began in the late 1950s. In addition, GAO identified multiple patents, as early as 5 years prior to the start of the ATP project, in the field of handwriting recognition. GAO found similar results in the other two projects. Two inherent factors in ATP's award selection process--the need to guard against conflicts of interest and the need to protect proprietary information--make it unlikely that ATP can avoid funding research already being pursued by the private sector in the same time period. These factors, which have not changed since 1990, make it difficult for ATP project reviewers to identify similar efforts in the private sector. For example, to guard against conflicts of interest, the program uses technical experts who are not directly involved with the proposed research. Their acquaintance with ongoing research is further limited by the private sector's practice of not disclosing its research efforts or results so as to guard proprietary information. As a result, it may be impossible for the program to ensure that it is consistently not funding existing or planned research that would be conducted in the same time period in the absence of ATP financial assistance. GAO made no recommendations in its April 2000 report.
Background According to a 1995 effectiveness study of the Big Brothers Big Sisters Program, children who participated in this mentoring program achieved higher grades in school, skipped school less frequently, developed closer relationships with parents and peers, and were less likely to initiate the use of drugs and alcohol than were similar children who were not enrolled in the program. Mentoring is often defined as a sustained relationship between a youth and an older person, typically an adult, in which the adult provides the younger person with support, guidance, and assistance. Mentoring is based on the premise that if young people have access to caring, concerned adults, they will be more likely to become successful adults themselves. Historically, mentoring has meant that one volunteer commits to mentoring one child at a time. More recently, mentoring has moved beyond this traditional relationship to encompass other formats, including group and e-mail mentoring. Mentoring programs are established in many communities. Programs like the Big Brothers Big Sisters Program, a national program operating in every state, have a long history of mentoring neighborhood youth. In community-based programs, youth are often referred for mentoring by family members. Potential mentors often submit to extensive background checks and trained mentors are often allowed to engage in unsupervised activities with the youth. School-based mentoring, as its name implies, takes place on school grounds. Given their location in the school, program staff in school-based mentoring programs can easily meet with teachers. Often teachers refer youth for mentoring whom they believe could benefit from additional attention and guidance. In school-based mentoring programs, volunteers typically meet with the youth during or after school and their interactions are typically supervised. 
Mentors and youth can spend time on schoolwork, but also engage in other activities, including playing sports, attending a concert, reading, eating lunch together, or just “hanging out together.” To improve the outcomes of our nation’s school age children, Congress passed NCLBA. Among other things, this act authorized 3-year grants for student mentoring programs. NCLBA required that selected programs serve children with the greatest need, that is, children most at risk of failing in school, dropping out of school, or being involved in criminal or delinquent activity, as well as those lacking strong positive role models. NCLBA also authorized grants to entities to achieve one or more goals for participating children, including improved academic achievement and reduced delinquent behavior and involvement in gangs. (See table 1.) A number of types of organizations are eligible to receive funding under the program, including local educational agencies, nonprofit, community-based organizations, and partnerships between a local educational agency and a nonprofit, community-based organization. NCLBA requires that applicants demonstrate that they meet a number of criteria, which Education in turn required be detailed in their grant applications. Specifically, Education required applicants to demonstrate that mentors would receive training and support and would be screened using appropriate references and background checks. The agency also required applicants to meet criteria that are not specifically outlined in NCLBA. For example, the agency required that applicants outline how they intended to achieve performance goals, such as improved academic achievement among participating children, or reduced incidences of involvement in gangs, illegal drugs, and alcohol. Grant recipients can use the funding for activities to establish or implement a mentoring program.
For example, grantees may use funds to hire mentor coordinators and support staff; recruit, screen and train mentors; and disseminate recruiting materials. In fiscal year 2002, Education awarded competitive grants to 122 grantees from a pool of nearly 1,300 applicants. Education funded at least one grantee in every state, with grant amounts ranging from about $39,000 to nearly $500,000 in both fiscal years 2002 and 2003. (See app. II for a list of grantees by state.) Funding for these grantees ends in fiscal year 2004; funding for the mentoring program over the 3 years will total about $50 million. Congress has increased funding for fiscal year 2004. According to Education, these funds will be used to fund an additional 200 grantees and the last year of the current wave of mentoring grantees. Education has a number of responsibilities regarding administration and oversight of the mentoring program. The agency oversees program implementation, provides technical assistance, and disseminates information on best practices. With respect to monitoring, Education, like other federal agencies, is required as part of its monitoring responsibilities to review grantees’ single audit reports if they contain findings. The Single Audit Act requires state and local governments and nonprofit organizations that expend $300,000 or more in federal awards in a fiscal year to have either a single audit or program-specific audit conducted for that year. Audit findings from such reports can include problems such as internal control weaknesses; material noncompliance with the provisions of laws, regulations, or grant agreements; and fraud affecting a federal award. Education receives a copy of the audit report if it contains findings relevant to an Education program.
Key Elements of Successful Mentoring Programs Are Planning, Management, Sustainability, and Evaluation According to the literature we reviewed, prior to implementation, successful mentoring programs make key decisions about which youth they will serve and expected outcomes, how they will recruit mentors, and how the program will be funded; put in place management structures, such as screening, training, and recruitment policies and procedures to ensure that the program is well-managed on a daily basis; market their programs and pursue strategies to ensure long-term program viability; and evaluate program outcomes and disseminate outcome information to key stakeholders to further garner and sustain support for their programs. (See fig. 1.) Successful Student Mentoring Programs Develop Initial Plans for How Their Programs Will Be Designed and Operated According to the literature that we reviewed, successful student mentoring programs engage in considerable planning prior to launching their efforts. Such planning enables them to assess the need for the services they plan to offer and to determine whether their organizations have the assets they need to be successful. Pre-implementation planning can also help programs determine the extent to which individuals or corporations may be willing to invest in the programs. Successful mentoring programs make many decisions pertaining to program design and operations as part of their early planning processes. For example, decisions regarding program design may include how many youth a program will serve, what kinds of services it will offer, where and when mentoring will take place, the types of individuals to be recruited as mentors, and expected outcomes. In addition, successful programs often decide whether mentors will meet with youth individually or in groups. Successful programs also determine what function mentors will serve, such as whether they will offer academic support or help to socialize youth. 
Successful Student Mentoring Programs Ensure Policies and Procedures Are in Place to Sustain Daily Operations Research suggests that policies and procedures to sustain and support mentors and youth are critical elements of successful mentoring programs. According to the literature, three elements are particularly critical to the success of a mentor program: (1) mentor screening, (2) orientation and training, and (3) support and supervision. First, screening procedures provide programs a basis for selecting those adults who are most likely to be successful as mentors. Screening enables programs to better predict how a potential mentor may interact with a mentored youth, such as whether the potential mentor understands the importance of being a caring adult. In addition, screening can help determine whether the volunteer can commit enough time to the youth to build a meaningful relationship. Screening can also help ensure the safety of participating youth and can protect the program’s reputation. When screening mentors, many programs interview the potential mentors, review personal references, and check police records. Second, research indicates that mentor orientation and training experiences are critical to program success, although research has not identified how much training is ideal or what topics such training should cover. Mentor orientation and training experiences can help student mentoring programs succeed in several ways. For example, orientation and training experiences can prepare volunteers to successfully become mentors and can help ensure that both youth and mentors understand what their roles entail. In addition, orientation and training experiences can help mentors understand what they can reasonably expect to accomplish. Moreover, given that mentors often have very different backgrounds from the youth they mentor, training can help mentors better understand the youth and more effectively work toward building relationships.
Third, while training can prepare mentors for potential challenges, successful programs also provide mentors with ongoing support, either from professional staff or through mentor support groups. Such ongoing support can help mentors continue to invest in their relationships with youth so these relationships can survive and thrive. By supervising and supporting mentor and youth matches, program staff can help ensure that pairs meet regularly over a substantial period of time; such regular interaction is critical to developing positive relationships between mentors and youth. Research suggests that programs in which professional staff provide regular support to volunteers are more likely to have mentor- youth matches that meet regularly. In addition, participants of such programs are more likely to report being satisfied with their mentoring relationships. In contrast, programs in which staff do not regularly contact mentors report more “failed matches”—those that do not meet consistently and, thus, do not develop into relationships. Successful Student Mentoring Programs Market Their Programs and Develop Strategies to Ensure Long-term Operation Successful mentoring programs market themselves and establish strategies to ensure long-term program viability, according to the literature. Marketing and sustainability strategies can take several forms. For example, programs may design resource development plans. Such plans may help programs diversify their fundraising by establishing how the programs will seek in-kind gifts, solicit funding from individuals and corporate donors, and apply for government funding. In addition, programs may try to garner private-sector support for mentoring by encouraging leaders in the private sector to make it easier for their employees to mentor youth. For example, program staff may encourage company leaders to allow employees to take time off from work to mentor youth. 
Marketing and program sustainability also include public relations efforts. For example, mentoring programs may develop partnerships and collaborations with other organizations that support similar efforts to improve youth outcomes. Public relations efforts also include recognizing mentors by providing tangible tokens of appreciation, such as plaques or letters to mentors' employers.

Successful Student Mentoring Programs Establish Processes to Measure and Disseminate Program Outcomes

Successful mentoring programs develop plans to measure expected outcomes and systematically examine and disseminate evaluation findings. For example, successful mentoring programs develop plans to measure program outcomes, determine how to measure such outcomes appropriately, and use their planned evaluation designs to assess their successes and areas needing improvement. Successful mentoring programs also disseminate their evaluation findings to volunteers, participants, funders, and the media to garner further support for their programs. Moreover, having information on program outcomes enables these programs to refine program design and operations based on evaluation findings.

Mentoring Grantees Shared Many Characteristics and Had Some Elements of Successful Programs, but Ease of Implementation Differed among New and Established Grantees

Most of the mentoring grantees Education funded were similar in many respects—most grantees had considerable experience operating mentoring programs, had similar goals for youth, and matched one mentor with one youth. Mentoring programs differed in the number and characteristics of youth served and the services offered them. In addition, all of the mentoring programs Education funded listed some key elements of successful programs in their applications. However, the well-established grantees we visited experienced fewer implementation challenges than did grantees new to mentoring.
Most Mentoring Grantees Shared Many Characteristics Such as Considerable Mentoring Experience and Similar Goals for Youth

Our analyses of grant applications showed that most of the mentoring grantees Education funded were well-established, with considerable mentoring experience. Specifically, 81 percent of the grantees were well-established, with 5 or more years of experience operating mentoring programs. For example, one grantee in Florida had mentored youth for over 40 years. Conversely, 19 percent (23) of the grantees Education funded were relatively new, with less than 5 years of experience. (See fig. 2.) In addition, most of the grantees Education funded cited similar goals for youth, reflecting the criteria identified in the application guidance, according to our review of grant applications. Nearly all grantees had goals related to improving academic achievement of participating youth (96 percent) and reducing their involvement in harmful behaviors, such as drug use and violence (87 percent). These goals were consistent with those identified in NCLBA as goals of the mentoring program. (See table 2.) About three-quarters of all grantees paired each youth with his or her own mentor, while 3 percent of all grantees (3) mentored children exclusively in groups, with 3 or 4 youth meeting at one time with a mentor. (See fig. 3.) Around one-fifth of all grantees provided both individual and group mentoring. About 70 percent of grantees listed in their grant applications that they asked prospective mentors to commit to spending at least 1 hour per week with their youth, and over 60 percent required a commitment of at least 1 school year. Other programs asked prospective mentors for a longer commitment. For example, a Nebraska grantee we visited asked prospective mentors to continue the mentoring relationships until the youth had graduated from high school. A few grantees asked mentors to commit less time than 1 hour a week.
For example, one grantee we visited in Illinois asked mentors to meet with their youth for 1 hour a month. However, the mentors told us that they wanted to increase the frequency of the meetings. Although there were many similarities among grantees, they did differ in some respects, such as the number of youth they planned to serve, how much funding was available to them, and which specific at-risk youth they planned to serve. The number of youth grantees planned to serve in total over the 3-year grant period ranged from 18 in Nebraska to a high of 3,200 youth in New Mexico, according to grantee applications. Grantee award amounts varied from about $39,000 to nearly $500,000, with the average grant amount about $140,000. Although all grantees served at-risk youth, some targeted a specific group of at-risk youth. For example, one grantee in Virginia targeted children of Vietnamese refugees, another grantee in California targeted youth in foster care and residential group homes, and a New York grantee targeted court-involved youth. Grantees also differed in the types of activities mentors and youth participated in. During our visits, we observed a range of activities: some focused on academics, such as tutoring or playing games that promoted literacy or math skills, while others focused on building relationships. Such activities included playing chess, playing basketball, or simply talking. In addition, some mentors told us that they engaged youth in cultural activities, such as attending a concert. Mentors also reported participating in activities with their youth that supported the youths' communities, such as planting bulbs at a local retirement home or decorating a Christmas tree to be auctioned off at a local charity event. (See fig. 4.) Many of these mentoring activities were carried out inside the school, such as in classrooms, the library, the gym, or resource centers.
Less frequently, mentors met with youth in community settings, such as in a neighborhood church, community center, or public library.

All Grantees Had Some Elements of Successful Programs, though in General More Established Grantees Reported Fewer Implementation Challenges than Newer Grantees

According to grantee applications, all grantees had some of the key elements of successful programs: initial plans for the program design and operations, including, for example, the number and characteristics of youth served; policies and procedures for program management, such as mentor screening and training; and program evaluation activities that include an assessment of program outcomes. However, during our site visits, we found that established grantees already had fairly well-defined programs, having generally completed most aspects of the first two elements—planning and program management. Thus, these more established grantees encountered fewer implementation challenges, such as problems recruiting mentors, than did newer grantees. However, these established grantees noted the challenges they had faced in initially starting up their programs and the benefits they derived from talking with other, more experienced program staff to help them along. Many of the established grantees we visited required little additional planning for their mentoring grants. These grantees often used plans and strategies already in place, such as what youth to serve, the types of services to provide, and how to conduct mentor recruitment and training activities. For example, staff from a well-established Florida grantee that we visited told us they used the Education grant to continue serving the same youth they had served through a mentoring program whose funding had expired. Staff from a California grantee told us they used the Education grant to expand their existing school-based mentoring program into additional schools.
In contrast, some of the newer grantees we visited did not have an existing base upon which to build their mentoring efforts, particularly those that were using grant funds to start a new program. As a necessary step toward implementing successful mentoring programs, these grantees had to take time during the initial part of their grant period to engage in planning activities. This planning involved determining key program design features, such as establishing program outcomes, and resolving operational issues, such as how to recruit mentors. Sometimes newer grantees had to revise their original plans when they experienced unexpected implementation difficulties. For example, a Delaware grantee new to mentoring had planned to provide one-to-one mentoring at local churches, but encountered difficulties transporting the children to the various locations. Subsequently, the grantee switched to a small-group mentoring approach where mentors met the children at school. Another new grantee that we visited in Nebraska had difficulty recruiting enough mentors and retaining enough youth for its mentoring effort. Moreover, during our visit to a new grantee in Idaho, we observed that some youth did not have mentors and were being randomly assigned to an available mentor on the spot for a group activity. Our review of grantee applications showed that all grantees had some policies and procedures in place to manage their ongoing operations, such as policies pertaining to mentor recruitment, screening, and training, but during our site visits we found that established and new grantees differed in the extent to which they had been able to implement such policies and procedures. Established grantees we visited already had in place many of the policies and procedures necessary to operate a mentoring program. For example, these grantees generally had long-standing agreements with organizations in their communities that helped them attract, screen, and retain mentors.
In addition, established grantees had a structure that helped them begin operations immediately after the grant award. For example, two well-established grantees—one in Florida and the other in Ohio—had either staff dedicated to recruiting or advisory boards made up of community leaders to help them recruit and promote their efforts. In addition, more established grantees were able to retain their mentors by providing appreciation gifts and holding mentor appreciation dinners and ceremonies. Some established grantees also gave mentors small gifts such as pins and note pads with the program logo on them. (See fig. 5.) In contrast, as expected for organizations in the start-up phase of their programs, the newer grantees we visited generally did not have policies and procedures, such as those related to mentor training, recruitment, and support, as well developed as those of the established grantees we visited. For example, a grantee we visited in Illinois had to borrow materials from other programs to develop its training manual. Furthermore, some of the newer grantees we visited had not completed making all of their matches, or the mentors and youth had met only a few times. Established grantees we visited were generally better positioned than newer grantees to market and sustain their mentoring efforts at the end of the Education grant. In particular, because many of the established grantees we visited had secured funding from multiple sources or were part of larger organizations, they were better positioned to sustain their mentoring efforts when the grants ended. For example, an established Florida grantee received funding from multiple sources, including its national affiliate, private foundations, and the United Way. In contrast, the Education mentoring grant was the only source of funding for a new grantee in Georgia.
Some established grantees also developed a wide variety of materials to promote their programs, including portable presentation packages, colorful, professionally printed brochures and pamphlets, magnets, and promotional videotapes. (See fig. 6.) Finally, our site visits showed that established grantees often had more experience collecting data on youth and program outcomes. For example, some of the established grantees that were affiliated with a national organization, such as Big Brothers Big Sisters of America, already had a set evaluation strategy, including standardized data collection forms and analysis tools. Although new grantees' overall evaluation plans were outlined in their grant applications, some of the newer grantees we visited did not have established data collection processes or evaluation plans. Thus, unlike some of the established grantees we visited, they had to develop such processes and plans.

Grantees Reported Benefits from Learning about Other Mentoring Implementation Strategies

During our site visits, some of the established grantees reported on both the challenges of starting a new program and the benefits of learning about the strategies that other mentoring programs had used to address such challenges. These grantees reported that the start-up process required many different types of activities to establish a structure and operational framework. To facilitate their implementation, they found that discussions with staff from other established mentoring organizations helped by providing information on program design and evaluation, such as strategies for recruiting and supporting mentors. For example, staff from an established New York grantee we visited told us they contacted other mentoring organizations for advice on mentor screening, support, and recruitment.
These established grantees noted the time and effort it took to get a program operational and said that assistance from other, more experienced programs was key to their successful efforts. During our site visits, new grantees reported facing start-up difficulties, such as recruiting and retaining potential mentors. Some of the newer grantees reported seeking assistance from more experienced mentoring programs on establishing operational procedures. For example, staff from a new grantee we visited in Georgia noted they were better able to make a realistic estimate of the number of youth they could serve after consulting with an experienced mentoring program. After the grantee awards were made, Education did not establish a formal process to facilitate information sharing among grantees, although the department acknowledged the importance of such sharing and is considering such an effort. Many of the grantees we visited acknowledged the need for sharing information on grantees' activities that could provide valuable lessons. Three of the established grantees we visited put processes in place to facilitate information sharing or presented information about their organizations at conferences. For example, one grantee in Ohio that we visited developed a regional mentoring institute to share its mentoring experiences and expertise with interested school districts and nonprofits throughout a tristate area. To facilitate information sharing among grantees, Education is considering designating some of its fiscal year 2004 funding to develop a technical assistance center.

Education Used Multiple Methods to Monitor Program Implementation, but Monitoring May Not Be Sufficient to Identify Possible Fiscal and Programmatic Weaknesses

Education officials within OSDFS monitored grantees using multiple methods, including calling grantees regularly, examining annual performance reports, and reviewing grantee expenditure rates.
However, officials did not review findings from grantees' single audit reports. Single audit reports provide information on weaknesses related to grantee financial management, internal control, and compliance issues.

Education Used Multiple Methods to Monitor Grantees, Including Review of Performance Reports and Expenditure Tracking

OSDFS's monitoring process included: postaward performance calls to establish progress measures; semiannual calls to grantees to determine implementation progress and issues; reviews of annual grantee performance reports to assess implementation; monitoring of expenditure rates; and visits to a limited number of sites. Based upon grantees' annual performance reports and other data, OSDFS officials determined whether to continue funding. With one exception, OSDFS determined that mentoring grantees were making adequate progress and warranted continued funding. Table 3 outlines elements of OSDFS's monitoring process, including the purpose of each monitoring tool and how OSDFS provides grantees with feedback after assessing their performance using that particular tool. First, OSDFS staff made postaward performance telephone calls soon after awarding the mentoring grants to ensure understanding of established outcomes and to offer technical assistance. During these initial telephone contacts, OSDFS staff communicated the specific outcomes the agency expected grantees to achieve and answered grantees' questions. They also discussed measures to assess each grantee's implementation progress. Second, OSDFS's monitoring process has involved semiannual telephone calls to grantees to ensure that grantees are on track and to provide technical assistance as needed. During these telephone calls, OSDFS monitoring staff asked a set of questions to determine the extent to which grantees were implementing their programs as planned.
Agency officials also asked grantees questions to assess the extent to which grantees had hired staff and how much staff turnover they had encountered. Third, OSDFS examined grantees' annual performance reports. Education requires grantees to provide information in these reports that helps the agency monitor grantees. Such information includes specific examples of grantee accomplishments as well as any objectives the grantee did not meet. For example, a Florida grantee provided information in the report on the extent to which youth were meeting the program's outcome goals, noted where desired outcomes had not been reached, and explained why. In addition, if grantees have not implemented scheduled activities, OSDFS asks that grantees explain why. OSDFS also asks grantees to describe any corrective actions they have taken or plan to take in response to previous problems OSDFS staff may have identified. Agency officials also used performance reports to ensure that grantees reconciled their expenditures with their budgets and described significant changes to their current or future budgets. Fourth, OSDFS monitored expenditure rates on a continuous basis through the Grants Accounts Payments System, according to agency staff. Agency staff used such information to identify potential problems, such as a grantee not expending funds at an appropriate rate. For example, while monitoring expenditure rates, OSDFS found that one grantee had spent funds even though it had not yet begun operations. That grantee later voluntarily relinquished its grant. Fifth, as part of its monitoring process, OSDFS staff have visited a small number of grantees each year to observe how they are implementing their programs. However, because of the limited number of grantees OSDFS visited and the method by which grantees were selected, on-site monitoring is of limited value as a monitoring tool. For example, in fiscal year 2003, OSDFS officials visited three grantees.
Two of them were selected because of their proximity to another grantee funded by Education under a different grant. The third program was chosen because it had ties to the program director of the grantee that voluntarily relinquished its grant. During an OSDFS visit with this grantee, agency staff also reviewed the grantee's budget to ensure that proposed costs were allowable. Staff also verified that the grantee was serving the target population described in its application. For all three visits, OSDFS prepared a brief description of the program and the status of program implementation.

The Office Responsible for Monitoring Mentoring Grantees Did Not Review Grantees' Single Audit Act Reports, Creating the Potential for It to Miss Fiscal and Programmatic Weaknesses

Education officials in OSDFS who were directly responsible for monitoring the mentoring grants told us that they did not review grantees' single audit reports, even though the office's own monitoring guidance requires them to do so. Specifically, OSDFS monitoring guidance states that to decrease the likelihood of a grantee being labeled as high risk, OSDFS should review annual performance reports, evaluation reports, information from single audit reports, and other information readily available to it. Education officials told us that the Office of the Chief Financial Officer (CFO) within Education receives and reviews single audit reports. According to Education officials, this office did not forward information to the OSDFS officials responsible for monitoring mentoring grants because none of the information in the single audit reports pertained to the mentoring grants. Moreover, Education officials said that CFO does not receive single audit reports in instances where Education does not directly fund the program. For example, CFO would not receive a single audit report for state-administered programs, such as Title I.
Using information readily and easily accessible through the online Single Audit Act database, we reviewed the mentoring grantees' single audit summary reports. In reviewing these summary audit reports, we did not expect to find information pertaining to grantees' handling of mentoring grants, as these grants were relatively new. Rather, we wanted to determine whether there were issues in these same grantees' handling of other Education grant funds they received before or around the time Education awarded them mentoring grants. How well these grantees handled other funds they received from Education could suggest how well they would manage their mentoring grant funds. We found that 8 percent of the mentoring grantees had problems with respect to other Education grants they received that were substantial enough to be reported as audit findings. For example, we found that grantees' audit findings covered problems with cash management, procurement, and reporting on Education programs. By using the online Single Audit Act database, we were also able to access information about subgrantees' handling of Education funds.

Education Considering Conducting National Study of Mentoring Programs to Augment the Evaluations It Has Required Grantees to Submit

Education is currently considering whether it will undertake a study of its mentoring program. Although Education's plans for an evaluation are not defined, it has required grantees to provide an evaluation of their programs at the end of the 3-year grant period. Most grantees plan to do a descriptive evaluation by reporting information on youth outcomes, particularly those related to academic achievement, incidence of harmful behavior, attendance, and dropout rates. However, the grantees varied considerably with respect to how they plan to measure these outcomes. This variation limits the extent to which Education can use information from the grantees to provide a national perspective on grantee outcomes.
While Education Plans for a Mentoring Study Are Not Defined, It Has Required Each Grantee to Provide an Evaluation

Currently, Education does not have plans to conduct a descriptive study to report on mentoring program outcomes or an effectiveness study to establish any linkages between outcomes and youth participation in mentoring programs. Many researchers consider effectiveness studies to be the best method for isolating a program's effect on participants from other factors, such as schooling, that could also influence participant outcomes. Such studies, which must be carefully planned and executed, are often multiyear, complex, and costly. Education officials said that although discussions are underway on whether the department will conduct a study evaluating the mentoring program, no final decision has been made. Education has required all grantees to evaluate their programs at the end of their 3-year grants and to describe their evaluation plans in their grant applications. Our review of grantee evaluation plans showed that most grantees plan to compare outcomes of participating youth at the beginning of the programs to their outcomes at the end of the 3-year grant period. In particular, grantees report plans to examine outcomes related to academic achievement, attendance, and criminal and harmful behaviors. During our site visits, we found that established and new grantees' evaluation plans varied both in what they measured and in their measurement strategies. Newer grantees more often planned to measure program processes, such as the duration of the mentoring relationship, the number of students matched, or the number of mentors recruited. In contrast, established grantees more often had plans to report on student outcomes, such as academic achievement. Moreover, established grantees more often reported plans to use data, such as actual school grades and attendance records, to measure outcomes.
Newer grantees, however, more often reported plans to survey parents or teachers to gauge the extent to which outcomes improved. For example, an established grantee in New Mexico reported plans to use data from school records as well as surveys of mentors, youth, and teachers to assess whether attendance, homework completion, relationships with adults and peers, and attitudes toward school had improved. In addition, this program established targets for improvement, such as plans to decrease discipline referrals by 20 percent. In contrast, an Illinois grantee that recently began operating a mentoring program planned mainly to report how well it complied with its planned processes, rather than how well youth performed on outcome measures. This grantee planned to measure the extent to which its recruitment process generated participants each year and the number of children matched with mentors. Moreover, a new grantee operating in Delaware that we visited said it would report youth outcomes through self-reported information from teachers and youth. Such self-reported information may not be as accurate as that reflected in official school records. Moreover, while these individual grantee evaluations will provide some information about youth enrolled in mentoring, because of the different measures used, Education cannot combine results to provide an overall national picture.

Conclusions

Over the past 3 years, the Congress has invested funds in a mentoring program aimed at giving children who face a significant risk of failing at school or becoming involved in illegal drugs, gangs, or alcohol a better chance of succeeding. In fiscal year 2004, the Congress significantly expanded the mentoring program, providing $50 million to support the last year of the existing grants as well as about 200 additional grantees.
Given the recent program expansion, it will be especially important to address issues that arose during the first 2 years of the mentoring program: for example, the challenges new grantees face in starting programs, the limited use of monitoring tools, and the absence of a cohesive national picture of program outcomes. New mentoring grantees are faced with making many decisions about program design and operation, whereas established grantees generally have policies and procedures in place that facilitate implementation. Established mentoring grantees have benefited from consultation with other programs and from lessons learned through years of experience; both helped them operate successful programs. Without a mechanism for new grantees to access program design and implementation information, they are more likely than established grantees to struggle with program start-up and operational issues, such as recruiting and training mentors. Through its monitoring of grantees, Education has attempted to ensure that programs are managed well. However, Office of Safe and Drug Free Schools staff responsible for monitoring the mentoring grants have not used all of the means available to oversee programs more effectively. Findings from audit reports represent an additional monitoring tool that could provide useful information about a grantee's stability and fiscal capacity and that may influence ongoing funding decisions. By not using single audit reports, the office responsible for monitoring the mentoring grants may be lacking information that could help it effectively assess whether programmatic and fiscal problems could weaken a grantee's ability to successfully implement its mentoring program. Finally, Education will have some information about outcomes of youth participating in mentoring because it has required grantees to provide evaluations of their efforts.
However, because these evaluations measure different outcomes and use different methodologies, their results cannot be meaningfully combined to provide a cohesive picture of program outcomes nationally. Lacking such information, Education cannot gauge the extent to which the youth outcomes NCLBA sought to affect through the mentoring grants did indeed improve during the grant period. Furthermore, because Education does not have plans for an effectiveness study, it will not be positioned to determine whether participation in the mentoring program contributes to improved youth outcomes.

Recommendations for Executive Action

We recommend that the Secretary of Education (1) explore ways to facilitate the sharing of successful practices and lessons learned to help new grantees more quickly and effectively implement their programs; (2) ensure that the Office of Safe and Drug Free Schools uses grantees' single audit reports as part of its monitoring process to take advantage of all monitoring tools that could improve the identification of fiscal and programmatic weaknesses; and (3) undertake a national study of mentoring program outcomes and, in doing so, explore the feasibility of examining the effectiveness of the mentoring program in improving youth outcomes and consider collecting limited, uniform data on the next wave of mentoring grantees that could be used as the basis for such a study.

Agency Comments

We provided a draft of this report to the Department of Education for review and comment. Education's Executive Secretariat confirmed that department officials had reviewed the draft and had technical comments. In these comments, Education officials said that there is a mechanism within Education for reviewing and resolving single audit findings. Specifically, CFO within Education receives and reviews single audit reports on those entities to which the agency makes direct grants.
Thus, according to Education, CFO would not receive audit reports for programs for which it does not make direct grants. We have adjusted the report to reflect Education's technical comments. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. Please contact me at (202) 512-7215 if you or your staffs have any questions about this report. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov. Other GAO contacts and staff acknowledgments are listed in appendix III.

Appendix I: Selected Studies on the Elements of Successful Mentoring

Mentor/National Mentoring Partnership. Elements of Effective Practice, 2nd Edition. Alexandria, VA: 2003.

Michael Garringer, with Mark Fulop and Vikki Rennick. Foundations of Successful Youth Mentoring: A Guidebook for Program Development. Portland, OR: National Mentoring Center, Northwest Regional Educational Laboratory, and the Office of Juvenile Justice and Delinquency Prevention, March 2003.

Susan Jekielek, Kristin Moore, et al. Mentoring Programs and Youth Development: A Synthesis. Washington, D.C.: Child Trends, January 2002.

Jean Baldwin Grossman, editor. Contemporary Issues in Mentoring. Philadelphia, PA: Public/Private Ventures, June 1999.

David DuBois, Bruce E. Holloway, et al. "Effectiveness of Mentoring Programs for Youth: A Meta-Analytic Review." American Journal of Community Psychology, vol. 30, no. 2 (April 2002), pp. 157-197.

Cynthia L. Sipe. Mentoring: A Synthesis of P/PV Research: 1988-1995. Philadelphia, PA: Public/Private Ventures, Fall 1996.

Carla Herrera, C. Sipe, et al. Mentoring School-Age Children: Relationship Development in Community-Based and School-Based Programs. Philadelphia, PA: Public/Private Ventures, April 2000. (Prepared for the National Mentoring Partnership and funded by the U.S. Department of Education.)
Tierney and Grossman, Making a Difference: An Impact Study of Big Brothers Big Sisters. Philadelphia, PA: 1995.

Appendix II: Characteristics of Education Mentoring Grantees by State

State Amount Targeted group(s) Alaska $191,540 Established One-to-one pairs meeting at school 1-2 hours a week for at least 1 school year. Youth in grades 1-8 experiencing troubled home environments and attending Title I schools. Pairs meeting at school 6 hours a week and groups meeting 3 hours a week for at least 4 months. Youth in grades 4-9. Youth in grades 4-12. Pairs meeting at school and in the community; community-based pairs meet 3-5 hours, 2-4 times a month; school-based pairs meet 1 hour a week. Youth in grades 4-12 from Spanish-speaking families. Calif. $164,341 Established One-to-one pairs meeting at school 1 week for at least 6 months. Youth in grades 4-8 who are Hispanic. Calif. $149,885 Established One-to-one pairs meeting in the community for at least 11 months. Youth in grades 4-12 from immigrant and refugee families, youth with disabilities, or youth in foster care. Calif. $182,250 Established One-to-one pairs meeting at school. Youth living in housing projects, many whose families recently immigrated from South America or youth from families with drug or alcohol addictions or violence. Calif. $168,530 Established Groups meeting in the community for at least 6 months. Girls in grades 4-8 who are involved in the juvenile justice system. Calif. $171,185 Established One-to-one pairs meeting at school 3-6 hours a week. Youth in grades 3-12 who are Asian and Pacific Islander immigrants with limited English proficiency. Calif. $191,540 Established One-to-one pairs and groups meeting at school for at least 1 hour a week, for at least 1 year. Youth in grades 4-9 who are deaf or hard of hearing, in foster care, English language learners, or have mental health problems. Grass Valley Calif.
$117,448 Established One-to-one pairs, meeting weekly at school for at least 1 year. Youth in grades 3-8. Calif. $184,986 Established One-to-one pairs meeting 2 hours a week at school and in the community for at least 1 year. Youth in grades 4-8 who are non-English speaking. Calif. $187,562 Established One-to-one pairs meeting at school. Youth in grades 4-12 who are involved in criminal or delinquent activities, many of whom are Mexican or non-English speaking immigrants. Calif. $95,749 Established One-to-one pairs and groups meeting in the community 4-6 hours a month for at least a year. Youth in grades 4-12 who are highly at risk, are in foster care, reside in a group home, or have emotional and behavioral problems due to past abuse. Calif. $178,358 Established One-to-one pairs meeting at school 1-2 hours twice a week and one-to-one pairs meeting in the community 4-6 hours a week for at least a year. Youth who are Spanish speaking, girls, disabled, or Native American. Calif. $122,888 Established One-to-one pairs meeting at school and in the community 4 hours a week for at least a school year. Primarily African-American youth. Santa Paula Calif. $90,598 Established One-to-one pairs with adult mentors making weekly contact and meeting at least twice a month and one-to-one pairs with a high school mentor meeting biweekly. Youth in grades 4-12 who are most affected by violence. Youth in grades 4-12, many with limited English proficiency, including Hispanic, Asian, and refugee populations. Pairs meeting in the community for 2 hours a week for at least 1 school year. Calif. $180,466 Established One-to-one pairs meeting in the community for 6 hours a month for at least 1 year. Youth in grades 4-8, middle and high school students in alternative programs such as court and community schools, and homeless and runaway youth. Calif.
$183,633 Established One-to-one pairs meeting at school for 1 hour a week for at least 8 weeks (1 school semester). Youth in grades 4-8 who are Hispanic, lack adult role models, are socioeconomically disadvantaged, or have significant physical or emotional disabilities. Calif. $187,506 Established One-to-one pairs meeting at school. Youth in grades 4-12 whose families are homeless, who live in poverty (subsidized housing), who have behavior problems, or who are victims of child abuse or domestic violence. One-to-one pairs meeting at school and in the community for 2 hours a week for at least 1 year. At-risk 6th-grade students who are Latino immigrants, first generation, and are involved with human services or juvenile justice. One-to-one pairs meeting at school for 1 hour a week for at least 1 school year. Youth in grades 5-6 who are Latino or bilingual. Colo. $140,231 Established One-to-one pairs meeting for at least 2 hours a week for at least 1 year. Youth in grades 4-8, including a considerable population of Hmong and Laotian children. Conn. $139,766 Established One-to-one pairs meeting at school. Court-involved youth making the transition from a juvenile justice program back to public schools. One-to-one pairs and groups meeting in the community 4 contacts a month for at least 1.5 hours, with at least one in-person visit lasting at least 1 hour, for at least 1 year. Youth in grades 6-8 who attend SouthEast Academy of Scholastic Excellence and live in the Capitol Hill District. One-to-one pairs meeting at school 6 hours a month plus weekly phone contact. Youth in grades 6-9, most of whom are African American. Pairs. Girls in elementary, middle, or high school who are African American or other minorities. Pairs meeting in the community for 2 hours a week for at least 9 months. Youth in grades 8-12 from public & charter schools with average academic records.
One-to-one pairs and groups meeting at school and in the community at least twice a month. Youth in grades 4-8. Pairs meeting at school for 1 hour a week, for at least 1 school year. Youth in grades 4-8 who have a history of involvement with the juvenile justice system. Pairs meeting at a juvenile assessment center for at least 1 year. Youth in grades 4-8 who have significant learning or emotional problems, are in an alternate school environment, or have extreme school phobias or related disorders. Pairs meeting at school and in the community 30-45 minutes a week for mentoring and twice a week for tutoring. Youth in grades 4-8. $185,985 Established This is a “drop-in” program where youth may work with several mentors during the week on different technology projects. Mentors commit to 2.5 hours a week for 6 months. Girls and Haitian, Central American, and Puerto Rican youth. Pairs meeting at school 4 hours a week and groups meeting once a week for at least 1 year. Youth in grades 6-8 who are African American and reside in the 33311 zip code area. Pairs meeting at school and in the community for a half hour to an hour once a week. Youth in grades 4-8. Pairs meeting at school at least 1 hour once a week for at least 1 school year. Youth in grades K-5 who are at risk of not reaching graduation. One-to-one pairs and groups. Youth in grades 4-12 with performance, behavior, and attendance problems. One-to-one pairs meeting at school and in the community for at least 4 hours a month. Youth in grades 4-12. Pairs meeting at school 1 hour a week. Youth in grades 3-5 who consistently exhibit unruly behavior and/or are at risk of academic failure, have special needs, including but not limited to behavioral disorders, or are minority Caucasian and Asian students. Pairs meeting once a week and groups meeting twice a month in the community for at least 1 year. Youth in grades 4-8.
Pairs meeting in the community for 2 hours a week for at least 1 year. Youth in grades 4-8 whose parents are not at home immediately after school to assist with homework or for whom English is not their first language. One-to-one pairs meeting at school for 3 hours a month. Youth in grades 6-8 who are Hawaiian. Pairs meeting at school for at least 1 year. Youth 6-14 years old in Story and Boone Counties. Pairs and groups meeting at school 30 minutes once a week. Youth in grades K-12. Youth in grades 4-12 who have learning disabilities or behavioral issues, a parent in prison, a parent with an addiction, or who have been in foster care. Pairs meeting in the community for 1-5 hours a week for at least 1 year. Pairs and groups meeting at school and in the community for at least two contacts. Youth in grades 6-12 who are Hispanic. One-to-one pairs meeting at school. Youth in grades 4 and 8-12 who are Native Americans or Hispanic migrants. Idaho $137,086 Established One-to-one pairs meeting in school for at least 1 hour a week. Youth in grades 4-8 who teachers believe are most likely to drop out, especially girls. One-to-one pairs and groups meeting at school and in the community for at least 3 years. Youth in grades 4-8. One-to-one pairs meeting at school for at least 4 hours a month for at least 1 year. Youth in grades 4-8 who are involved in criminal or delinquent activities. Pairs meeting at school for 4-8 hours a month of mentoring, 12 hours a month of tutoring, 4-8 hours a month of character development, and 30 hours a year of case management services for at least 1 year. Youth between the ages of 5 and 14. Pairs meeting at school and in the community for 1-2 hours a week for at least 1 year. Youth in grades 4-8 who are African American. Pairs meeting at school for 1 hour a week for at least 1 year. Youth in grades 4-5. In one elementary school, emphasis on serving youth from single-parent households.
Youth who live in domestic violence emergency shelters and transitional housing. Kans. $185,959 Established One-to-one pairs meeting at school and in the community. Youth in grades 4-8. Pairs meeting at school for at least 1-2 hours a week for at least 1 year. Youth in grades K-8. Pairs and groups meeting at school and in the community for at least a year. Tutor Buddies meet for 1 hour a week. Big Buddies meet for 5-6 hours a month. Enrichment Buddies meet for 1 hour a week. Youth in grades K-8. One-to-one pairs and groups meeting for at least 1 year. Youth in grades 2-8 who are adopted and out-of-home youth. Framingham Mass. $126,000 Established One-to-one pairs meeting at school for 2.5 hours a week for at least 36 weeks. Youth in grades 3-5. Mass. $143,666 Established One-to-one pairs and groups meeting at school and in the community. Youth in grades 9-12 who are Hispanic and who are talented and gifted. Youth in 8th grade who are Haitian, African-American, Caribbean, or West Indian. Pairs meeting at law firms for 2 hours every other week for at least 1 year. Youth in grades 4-12 who are African-American or Hispanic, are immigrants, low-income, or have mental health or behavior problems. Pairs and groups meeting in the community for 2 hours twice a week for at least 1 year. Maine $150,510 Established One-to-one pairs meeting at school for 1 hour a week for at least 1 school year. Native American youth in 2 schools. Youth in grades K-8. Pairs meeting at school and in the community for 4 hours every 2 weeks. One-to-one pairs meeting at school for 1 hour a week for at least 1 year. Youth in grades 4-8, primarily boys. Minneapolis Minn. $162,407 Established One-to-one pairs meeting in the community for 3 hours a week for at least 1 year. Youth in grades 4-9 who are frequently truant.
Youth in grades 4-12 who are refugees or are immigrants from Somalia, Mexico, Ethiopia, West Africa, and Latin and Central America. Pairs and groups meeting at school for 1-3 hours a week for at least 1 year. Immigrant youth of Hmong, Vietnamese, Cambodian, Northeast African, and East African (Somali) descent. Pairs meeting in the community for 1 hour a week for at least 1 school year. One-to-one pairs and groups meeting for 1 hour a week for at least 1 year. Youth who are Hispanic immigrants, in out-of-home placements, or children of a teenage, incarcerated, or court-involved parent. Pairs meeting at school 2-4 times a month for at least 1 year. Youth in grades 4-8. One-to-one pairs meeting at school and in the community 4 times a week. Youth in grades 4-12. Mont. $133,476 Established One-to-one pairs meeting at school for 1 hour a week. Youth in grades 4-12 who are learning disabled, are emotionally disturbed, have health problems, or receive inadequate support services. Pairs meeting at school for 4 hours a week for at least 1 year. Youth in grades 4-8 who are people of color or are Hispanic. Pairs meeting at school and in the community twice a week for a total of 3 hours a week for at least 1 year. Youth in grades 4-8 who are Hispanic and attend English as a Second Language schools. Youth are minority females, mostly African American or Hispanic. Pairs meeting at school and in the community for 2 hours a week or 8 hours a month for at least 1 year. Pairs meeting in the community for 3 hours a week for at least 1 year. Girls who are involved with the juvenile court, have multiple school suspensions, have experienced school failure, child abuse, poverty, or parental substance abuse, or have mental health problems. One-to-one pairs meeting at school and in the community for at least 2 years. Youth in grades 3-12 who are Native American from rural or reservation settings.
Pairs meeting at school for 1 hour a week for at least 1 school year. Youth in grades 1-12 with emotional, social, mental, learning, or physical disabilities or those with juvenile offenses. One-to-one pairs meeting at school for at least 1 year. Hispanic youth in grades 4-12. Pairs meeting at school and in the community for 1 hour a week. Youth in grades 4-12. One grantee program serves a residential facility for juvenile offenders and another serves a school for disabled children. One-to-one pairs meeting at school and in the community with primary mentors 3 hours twice a week plus twice-a-week phone calls, and with secondary mentors 4 hours on weekends, 4 times a month, for at least 1 year. Youth in grades 4-12 from schools with high minority populations. Pairs meeting in the community for at least 1 hour a week for at least 1 school year. Mostly Hispanic and recent Mexican immigrant youth in grades 4-8. Pairs meeting for 2 hours a week for at least 1 year. Youth involved with the juvenile justice system. One-to-one pairs and groups meeting at school for 1 hour a week, for at least 1 school year. Youth in grades 5-8. One-to-one pairs meeting at school for 2 hours a week for at least 1 year. Youth in the Bronx who are bilingual and multicultural. Youth in grades 3-5. Pairs meeting at school for at least 1 year. Youth in grades 4-8 who are in foster care, group homes, residential mental health programs, or are “at risk” of removal from their home due to child abuse or neglect. Pairs meeting in the community. Pairs and groups meeting in the community for 2-4 hours a week, for at least 1 year. Court-involved youth from the Bronx. Boys in grades 4-8 who are in a residential treatment center and require special education, are in foster care, or have serious mental health problems. Pairs meeting at school for at least 2 hours a month and meeting in the community for at least 10 hours a month, for at least 1 year.
Boys. Pairs meeting once a month with weekly phone contact and groups meeting 3 times a week for at least 1 year. Youth in grades 2-12. Pairs meeting at school for a half hour and 1 hour a month, for at least 1 year. Youth in grades 4-8 who live in a home environment with alcoholism and/or drug addiction. Pairs meeting at school for 1 hour a week and groups meeting in the community twice a month plus 2 other contacts, for at least 1 year. Pairs meeting at school and in the community for at least 1 year. Youth in grades 4-8 who are first- and second-generation Urban Appalachians, or are deaf and have special needs. Pairs and groups meeting at school and in the community for 1 hour a week, for at least 1 year. Youth in grades 4-6. Okla. $136,602 Established One-to-one pairs meeting at school for 1 hour a week for at least 1 school year. Very high-risk youth in grades 4-8, including those from alternative schools. Pairs meeting at school and in the community for 4 hours a week for at least 1 year. Youth are Hispanic, Native American, or African American. One-to-one pairs meeting at school and in the community for at least 4 hours a week. Youth in grades 3-8. One-to-one pairs and groups meeting in the community for 1 hour a week for at least 1 year. Youth in grades 4-8 who are high-risk, nonadjudicated, are adjudicated juvenile delinquents, or are adjudicated court-dependent. Pairs meeting at school once a month and groups meeting in the community bimonthly for at least 1 year. Girls in grades 6-8. Boys in 6th grade. Adult pairs and groups of three mentors/youth meeting in the community 2 days a week. Pairs and groups meeting at school and in the community for 2 hours a week for at least 1 school year. Minority youth (mostly boys) in grades 6-9 who are at risk of juvenile delinquency. Pairs meeting for 1-2 hours a week for at least 1 year. Youth in grades 4-8. Pairs meeting for 6 hours a month for at least 1 school year.
Youth in grades 2-12 whose parents have a history of incarceration, addiction, or involvement with child welfare agencies. Pairs meeting at school for at least 1 hour a week for at least 1 school year. Youth in grades K-12. Also focus on 9th-grade at-risk and English as a Second Language students. Pairs meeting at school for 1 hour a week for at least 1 school year. Youth in grades 1-12. Youth in grades 4-8, many of whom are at-risk African American youth with an incarcerated family member or with involvement in the juvenile justice system. Pairs meeting for 2 hours a week. Pairs meeting for at least 1 hour a week and groups meeting monthly at school. Youth in grades 3-12 who are Hispanic or African-American. Youth in grades 4-8. Pairs meeting in the community for 4 hours a week. Pairs and groups meeting at school once a week. Youth in grades 4-8. Pairs meeting at school for 1 hour a week for at least 1 school year. Youth in grades 4-6. Pairs meeting at school for 1 hour a week for at least 2 years. Youth who live in high-crime areas and/or have experienced violence at home and are having mild behavior problems in school. Over half the school population is Hispanic. Pairs meeting in the community for 2 hours a week. Hispanic youth in grades 4-8. Pairs meeting at school and in the community for at least 4 hours a week for at least 1 year. Youth are Vietnamese immigrants and refugees. Pairs meeting at school and in the community for at least 4 hours a month. Youth in grades 4-8 or 9-12. Wash. $190,121 Established One-to-one pairs meeting at school and in the community for at least 1 year. Youth in grades 4-12 who are English as a Second Language students—mostly Spanish speaking or Vietnamese, Somali, Cambodian, or Russian/Ukrainian immigrants. One-to-one pairs meeting in the community at least twice a month. Youth in grades 4-8 from 3 local schools, with significant Latino and English as a Second Language student populations.
One-to-one pairs meeting at school and in the community for at least 1 hour a week. Youth in grades 4-8. Wisc. $117,797 Established One-to-one pairs and groups meeting in the community for 2 hours a week for at least 1 year. Youth in grades 4-12 who are exhibiting predelinquent behaviors and who are involved with child protective services. Huntington W. Va. $112,363 Established One-to-one pairs meeting at school for 1 hour a week for the in-school program and 90 minutes a week for the after-school program. Youth in grades 4-8. Wyo. $191,540 Established One-to-one pairs. Youth in grades K-12.

Appendix III: GAO Contacts and Staff Acknowledgments

GAO Contacts

Acknowledgments

In addition to those named above, Karen Brown, Luann Moy, James Rebbe, Thomas Broderick, and Amy Buck made key contributions to the report.

Related GAO Products

No Child Left Behind Act: More Information Would Help States Determine Which Teachers Are Highly Qualified. GAO-03-631. Washington, D.C.: July 17, 2003.

Flexibility Demonstration Programs: Education Needs to Better Target Program Information. GAO-03-691. Washington, D.C.: June 9, 2003.

Title I: Characteristics of Tests Will Influence Expenses; Information Sharing May Help States Realize Efficiencies. GAO-03-389. Washington, D.C.: May 8, 2003.
As part of the No Child Left Behind Act (NCLBA) of 2001, the Congress authorized a 3-year, $17 million per year school-based mentoring grant program. For fiscal year 2004, Congress increased funding to about $50 million to fund additional mentoring efforts. Congress requested that GAO provide information on the student mentoring program. To do this, GAO answered the following questions: (1) What are the basic elements, policies, and procedures of successful mentoring programs? (2) What are the key characteristics of NCLBA-funded mentoring efforts, including the extent to which they have the basic elements, policies, and procedures of successful mentoring programs? (3) How does the Department of Education monitor program implementation? (4) What are Education's and grantees' plans to assess program outcomes? According to the literature GAO reviewed, successful mentoring programs (1) plan their programs carefully prior to implementation; (2) develop policies and procedures to effectively manage their programs, including mentor screening and training; (3) ensure program sustainability through marketing; and (4) evaluate program outcomes and disseminate their evaluation findings. Most of the 121 mentoring grantees that Education funded shared many characteristics--most had 5 years or more of experience mentoring youth, had similar goals, and offered "one-to-one" mentoring. All mentoring grantees indicated in their applications that they had some elements of successful programs, but established grantees GAO visited reported fewer implementation challenges, such as problems recruiting mentors, than did newer grantees. Most of the 11 grantees GAO visited said they would benefit from learning about other implementation strategies through information sharing. However, Education has not facilitated information sharing among mentoring grantees, although it is considering doing so.
Education used multiple methods to monitor grantees, including expenditure tracking, but the office responsible for monitoring mentoring grants did not review single audit reports as required by its guidance. Education's Chief Financial Officer reviewed the audits but did not forward audits to the office overseeing the mentoring grants because findings did not pertain to these new grants. However, GAO found that 8 percent of the mentoring grantees had audit findings related to how well they handled other Education grants. Education is currently assessing whether it will conduct an overall evaluation of its mentoring program. Education required that all grantees have evaluation plans, and most plan to report on youth outcomes related to academic achievement and attendance. However, grantees plan to use different methodologies, making it difficult for Education to have a cohesive picture of its mentoring program as a whole.
Background

DOD's military Tuition Assistance Program helps active duty service members—some of whom may regularly be reassigned to another location, including overseas—pursue an education. Through partnership agreements between DOD and more than 3,000 schools, service members are able to take undergraduate, graduate, vocational, licensure, certificate, and language courses during off-duty hours. They may also complete their high school education, if necessary. Most service members who participate in the program are enrolled in undergraduate courses. The Undersecretary of Defense for Personnel and Readiness within DOD is responsible for implementing the Tuition Assistance Program, which includes the provision of educational counseling for service members. However, each military service is responsible for establishing and operating its own program. Through 195 education centers located on U.S. military bases, advisors are available to provide assistance and information to service members pursuing their education. These bases also make classroom space available for service members to take classes on base, although the majority of service members enroll in online classes. To participate in the program, service members must meet certain requirements. In consultation with an advisor, they must develop an education goal and education plan, and maintain a 2.0 grade point average (GPA) for undergraduate-level courses and a 3.0 GPA for graduate-level courses. Service members can receive up to $250 in tuition assistance per credit hour, up to a maximum of $4,500 each year. If the cost of tuition exceeds the amount that the program provides, service members are eligible for other federal financial aid, such as federal grants and loans, to cover their expenses. Tuition is paid directly to the schools by DOD, and if a service member fails to complete a course or receives a failing grade, the student must repay the money for those courses.
Schools participating in the military Tuition Assistance Program must sign DOD’s Voluntary Education Partnership memorandum of understanding (MOU), which requires, among other things, that the schools (1) be accredited by a national or regional accrediting agency recognized by the Department of Education (Education); (2) comply with state authorization requirements consistent with Education regulations; (3) be certified to participate in federal student aid programs authorized under Title IV of the Higher Education Act of 1965; (4) disclose basic information about the school’s programs and costs, including tuition, fees, and other charges to service members; and (5) undergo, when requested, an evaluation of the quality of the education programs it is providing to service members. DOD contracts with an independent entity to assess the quality of postsecondary educational programs and services used by service members to assist in the improvement of these educational programs and services. In accordance with contract requirements, the contractor was to conduct evaluations of individual postsecondary schools. Also, for selected military bases that have a school on the base, the contractor was to evaluate all of the schools located on a single base and the facilities and operations of that base that support these schools and the delivery of education services to service members. Specifically, each year DOD required: four evaluations of military bases (with limited scope evaluations of the schools located on the base); two evaluations of distance learning (or online) schools; and four evaluations of schools located in close proximity to the base. Education is responsible for the administration of all federal student aid under Title IV of the Higher Education Act of 1965. 
Under that act, Education has oversight of the more than 7,200 postsecondary schools that participate in federal student aid programs, including those that participate in DOD's Tuition Assistance Program (but not with respect to compliance with DOD's requirements). Specifically, Education must certify a school's eligibility to participate in federal student aid by determining that the school is accredited by an accrediting agency it recognizes, is authorized to operate within a state, and meets certain administrative and financial requirements. In addition, postsecondary schools that provide federal student aid are subject to program reviews by Education, which are made available on its website. Education also maintains websites that provide publicly available information about schools that participate in federal student aid programs authorized under Title IV of the Higher Education Act of 1965, including graduation rates, default rates, and costs. In addition to Education reviews, these schools are subject to compliance and financial audits by independent auditors. The Department of Veterans Affairs (VA) also administers education benefits under the Post-9/11 GI Bill (38 U.S.C. §§ 3301-3325), which provides tuition, housing, and book payments to eligible students. As part of this program, VA is required to conduct annual compliance reviews to assess whether postsecondary schools that receive VA educational benefits adhere to applicable laws and regulations. Postsecondary schools that participate in Education, VA, and DOD education benefits programs include (1) public schools, which are operated and funded by state or local governments; (2) nonprofit schools, which are owned and operated by nonprofit organizations whose net earnings do not benefit any shareholder or individual; and (3) for-profit schools, which are privately owned and whose net earnings can benefit individuals or shareholders.
In Fiscal Year 2013, 571 Advisors Provided Information on Programs and Educational Support to Nearly 280,000 Eligible Service Members

Since Fiscal Year 2011, the Number of Advisors Increased Slightly Overall, with the Air Force Driving Most of the Increase

In fiscal year 2013, 571 advisors were available to provide a range of information on programs and educational support to nearly 280,000 service members taking courses funded under the Tuition Assistance Program (see fig. 1). Each military service determines the number of advisors it will allocate to support service members' education, based on competing priorities that balance education support with readiness needs. Across all four services, there was a net increase of 27 advisors from fiscal year 2011 through fiscal year 2013. That increase, however, was driven primarily by the Air Force, which added 48 advisors to its program. According to one Air Force official, the increase was largely the result of provisions in law requiring that service members separating from the military be provided with transition assistance services (including education advice). Conversely, the Navy reduced the number of its advisors by 25. A Navy official said this decrease was due to budget cuts resulting from sequestration. This official said that the decrease in the number of advisors also reflects a shift within the Navy to reduce support for off-duty education and direct it toward training that directly affects military readiness—such as training in operating and maintaining complex radar and fire control systems for ballistic missile defense. (See app. I for statistics on service members and advisors participating in the Tuition Assistance Program, fiscal years 2011 through 2013.) From fiscal year 2011 through fiscal year 2013, the number of service members taking courses using the Tuition Assistance Program declined by 8,819 (about 3 percent), according to DOD data.
DOD officials said the decrease was due in part to a temporary suspension of the program resulting from the automatic spending reductions to federal budgets as part of the 2013 sequestration. Further, according to DOD officials, the drawdown of forces in 2013 accounted for some of this decrease. As the forces continue to draw down, the demand for tuition assistance could drop as service members leave active duty service and are no longer eligible for the program. Decreases in the number of active duty service members could, in turn, affect the number of advisors needed in the future.

Service Members Receive Program Information and Support from DOD Advisors and from School Personnel, as well as from Online Sources

To help service members pursue educational opportunities, DOD advisors are available to provide a range of assistance and information, and service members can also receive assistance from other sources (see fig. 2). DOD advisors, who are required to have specific training in educational advising, are available to help service members determine their educational goals and advise them about the range and types of schools they can attend and the types of courses and degrees these schools offer. (See app. II for a description of DOD advisor qualifications.) In addition, advisors provide information on possible sources of funding for education, such as federal grants and loans, and discuss the program requirements. Each of the services has guidance that describes the roles and responsibilities of its respective DOD advisors. (See app. III for information on guidance the services provided about the role of DOD advisors.) At Joint Base Andrews—where 2,223 service members were enrolled in courses during fiscal year 2013—the two advisors we interviewed said that as service members arrive on base they are provided with information about Tuition Assistance Program funding, eligibility, and the schools available.
These advisors reported holding information sessions, distributing brochures and pamphlets, posting information on the military base’s Facebook™ page, and holding office hours for service members who want to meet in person to receive this overview. Once a service member decides to register for the Tuition Assistance Program, the advisors said they are available to answer questions about the schools, financing education, and transfer of credits if reassigned to another location. In addition, they said that they might also explain the differences in the types of programs schools offer and share information such as graduation rates and other school statistics available from Education websites. Further, the advisors we spoke with said they also discuss a range of financing options available, including other military aid, federal financial aid (grants and loans), and private student loans. With respect to private student loans, which sometimes have significantly higher interest rates than federal student loans, the advisors told us they advise service members to exhaust all federal options to cover college costs before pursuing private loans. Once a service member has completed a certain number of course hours, as required by the program, advisors we interviewed told us they work with that individual to develop, and then approve, an education plan for completing a specific course of study. These plans are to help students and advisors track the student’s progress toward fulfilling course requirements that will ultimately lead to a degree, certificate, or license. All services require the development of an education plan for service members. Advisors we interviewed said they are busiest when service members are registering for classes. They also told us that requests for their services vary by service member, with some requiring little assistance.
Advisors added that other support staff at the education center on the base are available to assist service members when an advisor is not needed. For example, support staff may answer routine questions about the program; mail or e-mail the service member materials; and confirm that the service member has dropped or withdrawn from a course, or that course grades have been entered into the system. All of the services have similar staff to perform these functions. Further, as required by DOD, all participating schools have their own counselors available to help service members, for example, by recommending courses and explaining how to register for classes. At Joint Base Andrews, counselors representing each of the five schools operating on that base told us that they help students decide which courses they will need to complete a degree program and help transfer the students’ credits if they are reassigned to another duty station. These counselors also told us that while they are able to answer routine questions about the Tuition Assistance Program, they refer the students to the DOD advisors for more detailed information. Lastly, we found that DOD and each of the services provide online information and tools about the Tuition Assistance Program through multiple websites. For example, one of the DOD websites provides a list of all of the schools eligible to participate in the program. In addition, all of the services report that they are increasingly using online tools to deliver information and support to service members. Also, the Navy relies heavily on its online resources to help answer service members’ questions. In addition, the Army website allows service members to register for classes, and access information about other financial aid programs, such as federal grants, loans, and GI bill funding, which provides funding for veterans’ education. 
Evaluations of Schools Participating in the Programs Do Not Provide DOD with Information Needed for Assessment

The evaluations of schools by DOD’s contractor have provided a range of different information on schools participating in the Tuition Assistance Program, according to DOD officials, but they do not provide the information DOD needs to assess the schools. This is because DOD lacked a specific plan to frame and guide the evaluations, and did not require the contractor to develop one. According to federal standards, an evaluation plan should clearly define the evaluation questions and methodology and address the collective knowledge, skills, and experience needed by the entity conducting the evaluations. According to DOD’s contract, evaluations were to assess school quality. However, our review of the contract found that the 15 areas for evaluation that DOD provided to the contractor were often broad, not clearly defined, and lacked specificity (see app. IV for a list of the 15 areas). For example, one area was simply stated as “the methods whereby academic institutions are invited on the installation.” Another simply stated “the degree of congruence among various missions (the military, installation, and the institutions), the education plan of the installation, the educational programs provided by the institutions which have a MOU with the installation and the distance learning providers.” Based on our review of these 15 areas, it was not always clear what DOD was asking the contractor to evaluate and how the 15 areas would be measured. For example, in asking the contractor to assess the “degree of congruence,” it was not clear what DOD meant and how the contractor would measure this area. According to the contractor, DOD did not clearly define quality and the 15 areas to be assessed were the only formal guidance provided to them.
Further, evaluation questions are critical because they frame the scope of the assessment and drive the evaluation design, the selection of data to collect, and the study results. For this reason, evaluation questions must be clear and specific and use terms that can be readily defined and measured. Had DOD developed a specific plan to frame and guide the evaluations, such a plan could have better positioned DOD to fully assess the skills needed by the contractor before awarding the evaluation contract. Specifically, DOD did not initially include requirements in its contract to ensure that the contractor provided personnel with the requisite education and experience needed to conduct the evaluations. Thus, DOD had to modify its contract to obtain the needed skills (see app. V for contract modifications). For example, in a 2013 modification, DOD required that the contractor acquire staff with a working knowledge of measurement methods and tools; sufficient experience in postsecondary education; and expertise in education theories, principles, and practices; among other skills. Although the contractor hired additional staff in time for the fiscal year 2014 evaluations, we identified problems in some fiscal year 2013 evaluation reports that were also present in the more recent 2014 evaluation reports we reviewed, where data from student surveys were misinterpreted and erroneously reported. Specifically, the contractor made broad generalizations about student satisfaction with their school based on survey responses from a non-representative sample of students whose responses may not have been the prevailing view of the other students who did not respond. In providing technical comments on a draft of this report, the contractor said that they made no attempt to draw general conclusions about the student body as a whole based on the limited survey responses. We found, however, that several of the reports did contain broad generalizations. 
For example, a recent evaluation report stated that there was a “high overall degree of student satisfaction” with the institution based on responses from a non-representative sample of students. Lastly, in some instances, we found that the contractor made recommendations that were only tangentially related to quality. For example, a fiscal year 2012 report of the schools on one military base recommended replacing the artwork in the classrooms with pictures of students in the hallways to foster a positive climate. In addition, a fiscal year 2013 evaluation of one school included a recommendation that for clarity and consistency, the school change the names of the academic terms to read “mid-Fall,” “mid-Winter,” “mid-Spring,” and “mid-Summer.” In a fiscal year 2014 report, the contractor recommended that the school change the term “Military Assistance” to “Tuition Assistance” in the listing of funding sources within the school’s online application process. In providing technical comments on a draft of this report, the contractor stated that their contract with DOD required them to evaluate all aspects of the course offerings provided by educational institutions under review. As a result, the contractor said in total it made over 300 recommendations, some of which were tangential and others more central to quality. For example, the contractor cited several recommendations which they viewed as more central to quality, including recommendations that the close working relationship between one education office and a school be maintained; additional professional development opportunities be offered to adjunct faculty members; and an advisor position that had recently been cut be reinstated and filled with an experienced advisor. By not fully considering the skills and expertise needed by the contractor before awarding the contract, DOD risked receiving information from the contractor that would not meet its needs.
According to the American Evaluation Association, agencies should ensure that contractors conducting evaluations possess the education, skills, and experience necessary to undertake the evaluations. After concluding that the evaluations are not providing the information needed, DOD decided not to renew its school evaluation contract. The agency is suspending the evaluations and plans to refocus them in accordance with a recently issued Executive Order. Under that Executive Order, DOD, along with Education and VA, is to ensure that schools provide meaningful information to service members about the cost and quality of programs to help them make informed choices about how to use their tuition benefits; prevent abusive and deceptive recruiting practices that target service members; and ensure that schools provide high-quality academic and student support services to service members, among other things. Further, the Order requires that DOD, Education, and VA work collaboratively to develop a comprehensive strategy for developing student outcome measures (e.g., graduation and retention rates) and strengthen their oversight of schools to protect service members participating in educational programs, such as the Tuition Assistance Program. According to DOD officials, they are meeting with Education and VA to discuss how they might coordinate efforts and leverage information from these agencies. DOD officials have been discussing a process for determining the number of school evaluations DOD will conduct each year and how they will select the schools for evaluation. DOD officials are also exploring whether these evaluations will be conducted under contract or through an interagency agreement. Although DOD has several efforts in place, it does not have a plan guiding these efforts.
Fundamentally, the agency has not yet developed a plan that includes specific questions to frame the evaluations and the qualification requirements for those conducting the evaluations. Unless DOD addresses these issues, it risks receiving future evaluations that will not provide all of the information necessary to evaluate schools.

Conclusions

Although the federal investment in service member education is substantial, DOD’s current approach has left the department without the information it needs about the quality of schools that served about 280,000 service members in fiscal year 2013. By not having a plan in place to guide the evaluation of schools, DOD’s ability to effectively assess the schools has been limited. Without a plan, it will be difficult for DOD to obtain information on the quality of the schools and determine whether any adjustments are needed in the program.

Recommendation for Executive Action

To improve the usefulness of information from school evaluations, we recommend that the Secretary of Defense direct the Undersecretary of Defense for Personnel and Readiness to develop a plan for future school evaluations that includes, among other things, clearly defined evaluation questions and an assessment of the experience, expertise, and skills needed by the personnel from the entity or entities conducting the school evaluations.

Agency Comments and Our Evaluation

We provided a draft copy of this report to the Department of Defense for review and comment. We received written comments from DOD, which are reproduced in appendix VI. In its comments, DOD agreed with our recommendation that a plan was needed to guide future school evaluations. However, DOD expressed concern that we had not made recommendations about whether the number of advisors was sufficient relative to the number of service members they serve.
As stated in our report, each military service determines the number of advisors it will allocate to support service members’ education, based on competing priorities that balance education support with readiness needs. We made no recommendations in this area in acknowledgement of the difficulty inherent in weighing readiness needs against education needs. Further, with respect to the advisors, DOD stated that the numbers cited in our report can be misleading because its advisors perform other education functions unrelated to the Tuition Assistance Program. Our report, however, already acknowledges that advisors also support additional service members who are participating in DOD’s other voluntary education programs. We also provided a draft copy of this report to the Department of Education and the DOD contractor for review and comment, and received technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and Education, and other interested parties. In addition, this report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. 
Appendix I: Selected Statistics Related to Active Duty Service Members and Advisors for the Military Tuition Assistance Program, Fiscal Years 2011 through 2013

Appendix II: DOD’s Required Basic Qualifications of Military Service Advisors

Each service requires that a military service advisor have obtained a degree that included or was supplemented by at least 24 semester hours in one or a combination of the following areas:

Tests and measurement: Study of the selection, evaluation, administration, scoring, interpretation, and uses of group and individual aptitude, proficiency, interest, and other tests.
Adult education: Study of the adult as a learner, teaching-learning theories for adults, models and procedures for planning, designing, managing, and evaluating adult learning activities.
Educational program administration: Study of the foundation and methods in organizing for adult and continuing education programs.
Curriculum development or design: Study of the principles and techniques for development of curricula for adult or vocational education programs.
Teaching methods: Study of teaching strategies and learning styles of the adult learner.
Guidance and counseling: Study of the purposes and methods in counseling and guidance, the role of the counselor in various settings, approaches to counseling, and the uses of tests in the counseling situation.
Career planning: Study of career development, learning activities, systems, approaches, program coordination, use of educational and community resources, and vocational counseling systems.
Occupational information: Study of theories of occupational choice and vocational development and their application to the guidance process. Identification and utilization of various types of occupational information and resources.
College or university-sponsored practicum in counseling.
At least one course in Test and Measurement or Adult Education.
Army

Assist soldiers in determining appropriate educational goals (for example, occupational certificates/diplomas, terminal, or transferable college degrees).
Counsel (face-to-face or electronic medium) all soldiers before Tuition Assistance is initially approved to ensure that soldiers understand their degree plan and their responsibilities regarding Tuition Assistance use.
Counsel all soldiers about the requirement to have a documented degree plan after completion of nine semester hours of college credit with one school.
Provide information on alternative funding, such as the use of in-service GI Bill benefits and federal financial aid.
Approve an active duty soldier’s request to use Tuition Assistance for more than 8 semester hours at one time when the soldier’s prior academic history indicates likelihood for success.
Discuss comparative cost effectiveness of similar programs when assisting the soldier in choosing a degree program.
Explain the Tuition Assistance reimbursement requirement.
Advise active duty soldiers of their responsibility to accomplish all Tuition Assistance-related actions through the GoArmyEd portal.
Advise soldiers with regard to enrollment with schools not accredited by a regional accrediting body recognized by the Department of Education. Credits earned may not be accepted in transfer by regionally accredited colleges.

Navy

Assist personnel in establishing an educational goal based on the individual’s academic background, aptitudes, work experience, and career objectives.
Establish education plans to enable sailors to pursue their educational goals by providing information on available education institutions, degrees, and courses.
Brief sailors on requirements for using Tuition Assistance benefits.
Recommend and/or administer appropriate or required examinations.
Assist with enrollment in schools and programs.
Provide information on financial aid programs and procedures, to include assisting sailors in applying online to the Free Application for Federal Student Aid, http://www.fafsa.ed.gov.
Conduct regularly scheduled Education Service Officers’ education workshops for the purpose of training military education services personnel, career counselors, and command master chiefs.

Air Force

Advise enlisted and officer airmen on academic and career development from the time they enter active duty until the time they retire or separate.
Provide counseling prior to authorization of Military Tuition Assistance for first-time use on a specific education goal. Additional counseling will be provided to meet specific needs as they arise during an airman’s progress toward an education goal.
Counsel and assist students with application for the education benefits programs available under existing Department of Veterans Affairs programs of title 38 of the U.S. Code to include, but not limited to, the following: Chapter 30, Montgomery GI Bill; Chapter 32, Post-Vietnam Era Veterans Educational Assistance Program; and Chapter 33, Post-9/11 GI Bill. Further assistance with GI Bill-related issues should be referred to the Regional Department of Veterans Affairs Office, as appropriate.
Provide counseling in both group and individual venues so students can make informed decisions on their eligibility for, and use of, GI Bill benefits. Counseling is appropriate when requested, but may also be provided prior to separation or retirement in, or apart from, Transition Assistance briefings.
Emphasize potential benefits available to airmen who are scheduled to be involuntarily separated.
Marine Corps

Officers and enlisted marines appointed as top echelon education officers at installations, division or wing level will provide educational guidance and counseling as follows:
Provide counseling at the first permanent duty station, at each new duty station, prior to separation, and at other suitable intervals during their military career.
Identify and counsel, individually, those enlisted marines who do not possess a high school credential and those officers who do not possess a baccalaureate degree.
Identify and screen all eligible Military Academic Skills Program personnel and provide for enrollment opportunity.
Provide assistance to marines applying for the Military Academic Skills Program.
Maintain official Lifelong Learning program files, records, and data.
Prepare a Lifelong Learning program education plan for all Lifelong Learning program participants.
Publicize and promote the opportunities available through the Lifelong Learning program, using a variety of appropriate media.

Appendix IV: Fifteen Areas Covered by the DOD Contractor Evaluations

Evaluation Area

The character of the academic partnership of the institution with the service to include but not limited to programs offered, demographics, and facilities (for non-distance learning delivery), student services for all delivery learning programs, learning resource support, library resources, institutional support for military education needs, coordination of academic institution’s satellite offices with home campus, and the working relationship between the service’s education staff and the academic institution personnel.
The educational needs assessment by the service.
The methods whereby academic institutions are invited on the installation.
The methods by which the installation and the academic institution assess institutional effectiveness in meeting the educational needs of the installation and monitoring institutional compliance with the MOU or contract (if overseas).
The support provided by various levels of the military, the Undersecretary of Defense for Personnel and Readiness, service headquarters, major commands and installations for voluntary education.
The degree of congruence among various missions (the military, installation, and the institutions), the education plan of the installation, the educational programs provided by the institutions which have a MOU with the installation and the distance learning providers.
The degree of consistency of the distance learning programs for both military and civilian students and traditional in-the-classroom courses, if offered, and the consistency of satellite campus courses to home campus courses.
The responsiveness and flexibility toward service members with regard to programmatic, administrative, and academic processes—such as flexibility in areas of admissions, credit transfer, and academic residency requirements.
The means by which students are given the opportunity to evaluate the learning they receive and how the institutions respond to those evaluations.
The institution’s disclosure in its marketing and communication regarding its mission, accreditation, courses and programs, services, transfer credit, tuition and fees, recruitment incentives/commissions in its enrollment and recruitment processes.
The institutional outcomes, on-campus students compared to military students, regarding retention, graduation, subject course completion, and withdrawal rates of the students.
The resources institutions offer to support student learning.
All faculty members meet the same academic qualifications and standards in accord with the institution’s accrediting and oversight bodies.
The institution has clear, consistent policies, measures, and procedures to evaluate the performance and needs of faculty members.
The institution’s academic and administrative student services—standards and processes to ensure timely response to students’ questions and concerns.

Appendix V: Evaluation Contract Modifications to Address Skills Required

Description of DOD’s Modifications to the Contract for School Evaluations

Required that one member on the team evaluating schools have postsecondary or tuition assistance program experience.
Working knowledge of measurement methods and tools.
Expert in education theories, principles, and practices and in roles of federal and state governments sufficient to plan, evaluate, and advise DOD voluntary education agencies and other relevant stakeholders.
2 to 3 years’ experience in postsecondary adult education, including one of the following: voluntary education for the military, accreditation assessment or evaluation, institutional self-studies, academic counseling or administration on the postsecondary level from an accredited institution and academic instruction, with a degree in adult education or instructional/curriculum design, with an emphasis on designing Distance Learning Coursework.
One assessor on each assessment team must have two to three years’ experience in voluntary education with the military.
Assessors who have worked on previous evaluations but do not meet the qualification requirements may have requirements waived on a case-by-case basis.

The first item was modified on October 19, 2012; the next four items were modified on September 25, 2013. The contract for assessment services was awarded August 31, 2011, at a cost of $645,438. At the time of our report, DOD had extended the contract twice, each time for a year, before it allowed the contract to expire.
As of June 2014, DOD had paid a total of $2,069,329 for evaluation services provided by the contractor.

Appendix VI: Comments from the Department of Defense

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Sherri Doughty (Assistant Director), Sandra Baxter, Kurt Burgeson, and Linda Siegel made key contributions to this report. Also contributing to this report were Susan Bernstein, Deborah Bland, Jessica Botsford, Holly Dye, Mimi Nguyen, Michael Silver, and Craig Winslow.
DOD's military Tuition Assistance Program includes partnership agreements with about 3,000 schools through which service members can pursue a postsecondary education. Through this program, service members' tuition is paid directly to participating schools and in fiscal year 2013, the program spent $540 million. The program also provides service members with education advisors, and conducts evaluations of schools to assess quality. Congress mandated that GAO provide information on the role of these advisors and on the DOD contractor evaluations of schools participating in the program. GAO examined (1) the number of advisors and the type of advice they provide, and (2) the information collected through evaluations of schools participating in the military Tuition Assistance Program. For this work, GAO analyzed DOD data on the program from fiscal year 2011 through 2013; reviewed all DOD contractor evaluations for fiscal years 2012 and 2013; and interviewed officials from DOD and the military services, contractor staff responsible for the evaluations, and advisors at Joint Base Andrews, Maryland. GAO visited this base because many of its service members participate in the program, and some of the participating schools were evaluated by DOD's contractor in 2013. In fiscal year 2013, 571 Department of Defense (DOD) education advisors were available to provide information and educational support to the nearly 280,000 service members taking courses funded through the military Tuition Assistance Program. This program accommodates service members, who may regularly be reassigned to another location (including overseas), by allowing them to take classes online, directly on base, or at nearby schools. DOD advisors offer a range of services to service members such as helping them understand the types of degrees and courses schools offer and helping them develop educational goals and plans. 
DOD used a contractor to conduct evaluations of schools participating in the Tuition Assistance Program; however, according to DOD, the evaluations did not provide the agency the information it needed to assess schools. This is because DOD lacked a specific plan to frame the evaluations, which according to federal standards, should clearly define the evaluation questions and methodology and address the collective knowledge, skills, and experience needed by the entity conducting the evaluations. According to DOD's contract, evaluations were to assess school quality, but the 15 areas DOD provided the contractor for evaluation were often not clearly defined and it was not clear what the contractor was to evaluate. For example, one of the areas was the “degree of congruence” among various entities involved in delivering educational services, which DOD provided the contractor without further specificity. Further, because DOD's contract did not specify all the skills needed by the contractor, DOD had to modify its contract to require such skills. However, still lacking information it needs, DOD recently decided not to renew the contract. DOD has suspended the evaluations and is exploring alternative options for evaluating schools, but does not yet have a plan to guide future efforts. Absent a plan, it will be difficult for DOD to have all of the information it needs to effectively evaluate schools.
Background

Under current law, marketed medical products are classified into two groups: one group has about 65,000 products that are safe for consumers to use only as prescribed by a physician; the other group has over 300,000 products that, according to U.S. Food and Drug Administration standards, are safe for use on the basis of a manufacturer’s labeling instructions alone. Prescription products are available only in licensed pharmacies, whereas other products are available over the counter at a wide variety of outlets. OTC products are generally for conditions for which users can recognize their own symptoms and levels of relief.

VA Pharmacies Provide an Assortment of OTC Products

VA physicians prescribed OTC products for veterans more than 7 million times in fiscal year 1995, accounting for about one-fifth of all VA prescriptions. VA pharmacies filled these OTC prescriptions over 15 million times, about one-fourth of all prescriptions filled. VA physicians prescribed more than 2,000 different OTC products. VA pharmacies classify these products into three groups: medications (such as antacids), medical supplies (such as insulin syringes), and dietary supplements (such as Ensure). Medications account for about 73 percent of the 15 million OTC prescriptions filled; medical supplies for 26 percent; and dietary supplements for less than 1 percent.

VA Facilities Limit Physicians’ Prescription of OTC Products

VA’s network and facility directors have considerable freedom in developing operating policies, procedures, and practices for VA physicians and pharmacies. Some facility directors have taken different actions to limit the number of OTC products available through the pharmacies and the quantity of products veterans can receive. Little uniformity in the application of limits is evident, however. In general, each facility has a pharmacy and therapeutics committee that decides which OTC products to provide based on product safety, efficacy, and cost-effectiveness.
These products are listed on a formulary and VA physicians are generally to prescribe only these products. Of the 2,000 different OTC products dispensed systemwide, individual pharmacies generally handled fewer than 480, with the number of OTC products ranging from 160 to 940. (See app. II for a list of VA facilities and the number of OTC products dispensed.) Medical supplies account for the majority of products, with pharmacies generally dispensing fewer than 10 types of dietary supplements. Moreover, three facilities’ formularies excluded dietary supplements. The volume of OTC products dispensed also varied among facilities. Overall, OTC products accounted for about 25 percent of all prescriptions filled systemwide. But OTC products represented between 7 and 47 percent of all prescriptions dispensed at individual facilities. (See app. III for a list of facilities with the percentage of pharmacy workload represented by OTC products.) Of note, 100 products accounted for about 70 percent of the 15 million times that OTC products were dispensed. VA pharmacies dispensed analgesics such as aspirin and acetaminophen almost 3 million times in fiscal year 1995. The most frequently dispensed OTC products included (1) the medications aspirin, acetaminophen, and insulin; (2) the dietary supplements Sustacal and Ensure; and (3) the supplies alcohol prep pads, lancets, and glucose test strips. (See app. IV for a list of commonly dispensed OTC products.)

Some Facilities Restrict OTC Products to Certain Veterans

Facilities have sometimes restricted physicians’ prescriptions of OTC products to veterans with certain conditions or within certain eligibility categories. For example, 115 facilities restricted dietary supplements to veterans who required tube feeding or received approval for the supplements from dieticians.
For medical supplies, one facility provided certain supplies only to patients who received them when hospitalized, and another provided diapers only to veterans with service-connected conditions. One facility provided OTC medications only to veterans with service-connected disabilities.

Some Facilities Restrict Quantities of OTC Products

Facilities have sometimes restricted the quantities of OTC products that pharmacies may dispense. Twenty-eight facilities had restrictions that included limits on the quantity of OTC products dispensed within specified time periods or on the number of times a prescription could be refilled. For example, one facility restricted cough syrup prescriptions to an 8-ounce bottle with one refill. It had similar quantity restrictions for 15 other OTC medications. Another facility had a no-refill policy for certain medical supplies, such as diapers, underpads, and bandages.

Other Health Care Plans Provide Few, If Any, OTC Products

Unlike VA, other public and private health care plans cover few, if any, OTC products for their beneficiaries. The Department of Defense, for instance, operates a health care system for military beneficiaries, including active duty members, retired members, and dependents, that provides a more restricted number of OTC products than most VA facilities. In 1992, Defense eliminated all OTC products except insulin from its formularies to control costs. Subsequently, however, Defense reinstated a few OTC products in its formularies because physicians had begun substituting more expensive prescription medications. All beneficiaries are eligible for covered OTC products without a copayment. The Health Care Financing Administration directs the Medicare and Medicaid programs that pay nonfederal health care providers for medical care for people who are elderly, disabled, or poor. Unlike VA, Medicare does not cover outpatient OTC medications for its beneficiaries.
Like VA, Medicaid, at the option of the states, can cover OTC products for its low-income beneficiaries. The availability of OTC products varies by state, ranging from very few to a substantial array of products. The Federal Employees Health Benefits Program offers a range of health insurance plans to federal employees and their dependents. The program requires plans to meet certain minimum standards, which include coverage for prescription medications but not for OTC products, except for insulin and related supplies. Blue Cross and Blue Shield and Kaiser Permanente, two of the larger plans involved, cover no OTC products other than insulin and related supplies. Both plans require beneficiaries to help cover the cost of prescriptions. Kaiser charges $7 for each prescription provided by its pharmacies. Blue Cross and Blue Shield requires beneficiaries to pay a $50 deductible and 15 to 20 percent of the cost of individual prescriptions obtained at retail pharmacies, depending on whether the beneficiaries have high- or standard-option plans. Finally, most private health insurers generally do not cover OTC products, with a few exceptions such as insulin and insulin syringes. For example, the Group Health Cooperative of Puget Sound, in Seattle, provides insulin with a $5 copayment but no other OTC products. Before 1995, the Cooperative provided an OTC drug benefit but dropped it because no other similar health plan provided this benefit.

Federal Resources Finance Most of VA’s OTC Costs

Nationwide, VA pharmacies spent an estimated $117 million to purchase OTC products and $48 million to dispense them to veterans in fiscal year 1995. Of the total $165 million spent, about $85 million was for medications, with purchasing costs representing about two-thirds of that amount. About $74 million was spent for medical supplies and $6 million for dietary supplements, with purchasing costs accounting for most of these costs, as shown in figure 1.
Purchasing and dispensing costs differ among the product categories for two reasons. First, VA physicians generally provide more prescriptions with refills for medications than for supplies, thereby causing pharmacies to handle medications more often. Second, ingredient costs of medications are generally significantly lower than those of medical supplies. VA recovered an estimated $7 million of total OTC costs (about 4 percent) through veterans’ copayments. By law, unless they meet statutory exemption criteria, veterans are to pay $2 for each 30-day supply of OTC medications and dietary supplements that VA provides. Veterans’ copayments are not required for any OTC products used to treat service-connected conditions. Also, veterans are exempt from the copayment requirement if they have low incomes. Our analysis of veterans’ copayments and pharmacy costs at VA’s Baltimore facility showed that copayments offset 7 percent of costs for OTC products, as shown in table 1. Federal funds financed most of Baltimore’s OTC product costs. Copayments collected covered a relatively small portion of these costs for several reasons. First, the $2 copayment collected for a 30-day supply represented only a portion of the ingredient, dispensing, and collection costs of most OTC medications and dietary supplements. Second, copayments were not required for medical supplies. Third, most veterans receiving medications and dietary supplements were exempted, and some nonexempt veterans did not make the copayments they owed. For individual OTC products, veterans’ medication copayments covered from 4 percent to more than 100 percent of VA’s costs, depending on the type of product and the quantities dispensed. For example, a veteran’s medication copayment of $6 for a 90-day supply of a relatively expensive product, such as the dietary supplement Ensure, may cover about 4 percent of VA’s costs. 
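A minimal sketch of the arithmetic behind such coverage rates follows; the per-day product costs are illustrative assumptions chosen only to reproduce the report's rough percentages, not actual VA figures.

```python
# Share of VA's cost recovered by the $2-per-30-day-supply medication copayment.
# The va_cost_per_day values passed below are illustrative assumptions.

def copay_coverage(days_supplied, va_cost_per_day,
                   monthly_copay=2.0, days_per_copay=30):
    """Return the fraction of VA's cost recovered by the copayment."""
    copay = monthly_copay * (days_supplied / days_per_copay)  # $6 for 90 days
    va_cost = va_cost_per_day * days_supplied
    return copay / va_cost

# A relatively expensive dietary supplement (assumed ~$1.65/day):
# the $6 copayment covers only about 4 percent of VA's cost.
expensive_share = copay_coverage(90, va_cost_per_day=1.65)

# An inexpensive medication such as aspirin (assumed ~$0.05/day):
# the same $6 copayment exceeds VA's total cost.
cheap_share = copay_coverage(90, va_cost_per_day=0.05)

print(f"expensive product: {expensive_share:.0%} of cost recovered")
print(f"inexpensive product: {cheap_share:.0%} of cost recovered")
```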
In contrast, a veteran’s copayment of $6 for a 90-day supply of an inexpensive medication, such as aspirin, may cover more than VA’s total cost.

Opportunities to Reduce Federal Expenditures

A variety of actions could help reduce the level of federal resources devoted to the provision of OTC products. VA pharmacies could dispense considerably fewer OTC products. Also, savings could be achieved through more efficient OTC dispensing and copayment collection processes. Finally, the Congress could expand the copayment requirements to generate additional revenues.

Dispensing Fewer OTC Products Could Cut VA’s Cost

VA dispenses OTC products to veterans in several situations. In general, VA provides OTC products to treat veterans for service-connected disabilities. For the treatment of nonservice-connected conditions, VA provides OTC products for hospital-related as well as non-hospital-related situations. VA could save money by limiting the situations under which it dispenses OTC products. We identified many hospital-related situations in which VA provided OTC products. For example, veterans received phosphate enemas, magnesium citrate, and prep kits for barium enemas in preparation for colonoscopies and other diagnostic tests. Following hospital stays, veterans received ostomy supplies after some surgeries, wound-care supplies, aspirin for heart surgery or angioplasties, and decongestants after sinus surgery. We also identified situations in which VA physicians determined that a veteran would be likely to be hospitalized if OTC products were not used. These included diabetic veterans using insulin to control their blood sugar, veterans suffering renal failure using sodium bicarbonate tablets to balance their electrolytes, and veterans who have suffered heart attacks or strokes using aspirin to prevent secondary occurrences. We identified, however, some non-hospital-related situations in which VA provided OTC products.
These included antacids for heartburn, preparations for dry skin, acetaminophen for arthritis pain, and cough medications for common colds. Given that VA pharmacies filled prescriptions for such products over 2 million times last year, VA facilities have an opportunity to reduce costs significantly.

Increased Efficiency Could Reduce VA’s Costs

VA pharmacies could more efficiently dispense OTC products by reducing the number of times staff handle these items or by restricting mail service. VA facilities could also reduce costs by collecting medication copayments at the time of dispensing.

Reducing OTC Product Handling Costs

VA pharmacies could significantly reduce their OTC product dispensing costs of $48 million by providing more economical quantities of medications and supplies. Dispensing larger quantities would reduce the number of times that VA pharmacists fill prescriptions for OTC products, saving about $3 for each time a product would have otherwise been dispensed. As previously discussed, VA physicians generally prescribe OTC products to treat acute or chronic conditions or to prevent future illness. While prescriptions for acute conditions are generally for periods of 30 days or less, OTC products used for chronic or preventive situations are generally prescribed for longer periods. For example, in fiscal year 1995, about 1,800 veterans received aspirin at the Baltimore pharmacy in quantities sufficient for at least 6 months. VA allows pharmacies to dispense most OTC products in quantities sufficient for a 90-day supply. Not all pharmacies dispense OTC products in such economical quantities, however; 15 reported that they dispense OTC products in 30-day or 60-day supplies. Limiting pharmacies to dispensing no more than a 90-day supply is uneconomical for certain high-volume OTC products used to treat chronic conditions or to prevent illness. Dispensing larger quantities in those instances seems to provide opportunities to reduce costs.
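The savings mechanism can be sketched as follows. Only the roughly $3-per-fill dispensing cost and the approximately 1,800 Baltimore veterans on long-term aspirin come from the report; the year-round therapy length is an illustrative assumption.

```python
# Sketch of dispensing-cost savings from larger fill quantities.
# ~$3 per fill and ~1,800 veterans are the report's figures;
# the 360-day therapy length is an illustrative assumption.

DISPENSING_COST_PER_FILL = 3.00  # approximate cost each time a product is dispensed

def fills_per_year(days_supply_per_fill, days_of_therapy=360):
    """Number of times the pharmacy must fill the prescription in a year."""
    return days_of_therapy // days_supply_per_fill

# Moving a year-round aspirin regimen from 90-day to 180-day supplies
# halves the number of fills: 4 per year becomes 2.
saved_fills = fills_per_year(90) - fills_per_year(180)       # 2 fewer fills
saving_per_veteran = saved_fills * DISPENSING_COST_PER_FILL  # $6 per year

# Scaled to roughly 1,800 veterans, the total is on the order of
# the report's estimate of over $8,000 at the Baltimore pharmacy.
print(1800 * saving_per_veteran)
```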
For example, we estimate that VA’s Baltimore pharmacy could have saved over $8,000 if it had dispensed 180-day supplies of aspirin to certain veterans in fiscal year 1995. Assuming a prescribed usage of 1 aspirin tablet a day, supplying 180 tablets rather than 90 would be more consistent with the quantities veterans could purchase from local outlets, which generally stock packages containing between 100 and 500 tablets.

Reducing OTC Mailing Costs

VA pharmacies could also reduce dispensing costs by using mail service for only certain situations (such as for veterans who are housebound or must travel long distances to reach VA facilities) or requiring veterans to pay shipping charges. Last year, VA pharmacies spent about $7.5 million mailing OTC products to veterans. VA pharmacies generally encourage veterans to use mail service when refilling most prescriptions for OTC products. Almost all pharmacies mail OTC products, relying on mail service for almost 60 percent of the 15 million times that OTC products were dispensed last year. Some pharmacies have already transferred most of their OTC prescription refills to VA’s new regional mail service pharmacies, and others will do so when additional regional pharmacies become operational. While mailing costs vary, they can be particularly high for liquid items or items that are dispensed in large packages or for long periods. For example, one facility reported that mailing a prescription of liquid antacid cost $2.88 and mailing a case of adult diapers cost $17.49. Mailing costs for a year’s supply of diapers could exceed $200. Some VA facilities cited high mailing costs as one of the principal reasons for eliminating OTC products from their formularies. Several facilities have attempted to reduce costs by prohibiting the mailing of certain OTC products, such as cases of liquid dietary supplements and diapers.
In addition, some facilities reported switching from liquid products to powders to reduce the weight, and associated mailing costs, for particular OTC products.

Streamlining Copayment Collections

A third way to reduce federal costs is to streamline copayment collections for OTC products. VA primarily bills veterans for copayments, unlike other providers that generally require copayments to be made at the time that the products are dispensed. VA facilities incur administrative costs to prepare and mail bills for copayments related to OTC products, costs that are significant in relation to total collections. A VA-sponsored study estimated that VA facilities spend about 38 cents for every $1 collected to prepare medication copayment bills, mail them, and resolve questions. VA facilities generally send an initial bill and three follow-up bills to veterans who are delinquent in paying. For OTC products dispensed to veterans in fiscal year 1995, VA’s Baltimore pharmacy collected about 75 percent of the value of the copayments billed. The other 25 percent remained unpaid 5 months past the end of the fiscal year. The veterans who had not paid for these products had not applied for waivers and, as a result, VA officials view them as able to pay. If the Baltimore facility’s costs approximate the rate of 38 cents of every $1 collected, it incurred an estimated $26,000 to collect $67,000 for OTC products. The 25 percent of the medication copayments that were billed but went unpaid would have required additional costs to resolve. Because of the relatively small outstanding balances for most veterans, VA officials told us that they are reluctant to continue contacting nonpayers or to pursue legal or other actions to collect these debts. VA has the option of not providing OTC products if a veteran refuses to make a medication copayment at the time the product is dispensed.
VA officials, however, told us that it is not their policy to withhold OTC products from nonpayers for this reason. Collecting the copayment at the time a product is dispensed could eliminate most administrative costs and increase revenues. Veterans requesting prescription refills by mail could enclose their copayments with their requests.

VA Facilities Could Increase Restrictions on OTC Products

VA facilities could adopt less generous policies for OTC products that would be more consistent with other health plans’ policies. This could be achieved by adopting such cost-containment measures as limiting the OTC products available or limiting quantities dispensed. As previously discussed, each VA facility offers a different assortment of OTC products. For example, the most generous OTC product assortment contains about 285 medications, 514 medical supplies, and 14 dietary supplements. In contrast, the least generous assortment includes about 124 medications, 114 medical supplies, and 4 dietary supplements. Over the last 3 years, 45 pharmacies have reduced the number of OTC products provided to veterans. The most commonly removed OTC products are medications such as soaps, skin lotions, and laxatives; dietary supplements such as Ensure, multiple vitamins, and mineral supplements; and medical supplies such as ostomy products and glucose test strips. As part of VA’s ongoing reorganization, each of the 22 network directors has developed a list of OTC products dispensed by facilities operating in the network. In general, each network’s formulary more closely approximates the more generous OTC product assortments available in each network rather than the less generous assortments. Some network directors plan to review their formularies to identify products that could be removed. Recently, 58 facilities told us that they are considering removing some OTC products from their formularies.
Most are examining fewer than 10 products, although the number of products under review ranges from 1 to 205. Products most commonly mentioned include dietary supplements, antacids, diapers, aspirin, and acetaminophen. Ninety facilities are not contemplating changes at this time. Interestingly, wide disagreement exists within VA about providing OTC products on an outpatient basis. For example, 23 facilities suggested that all OTC products should be eliminated. In contrast, 57 suggested that all OTC products should remain available. The other 70 facilities provided no opinion regarding whether OTC products should be kept or eliminated. Many facilities pointed out that eliminating all OTC products could result in greater VA health care costs. This is because some OTC products are relatively cheap compared with prescription products that might be used or because they help prevent significant health problems that could be expensive for VA facilities to ultimately treat. Facilities identified 21 OTC products whose removal from their formularies would result in greater costs to VA; the most frequently mentioned were aspirin, acetaminophen, antacids, and insulin. These facilities also reported that 14 of the 21 products had prescription substitutes, among them aspirin, acetaminophen, and antacids (insulin has no prescription substitute). While 45 facilities removed OTC products during the last 3 years, only 6 of them said that they reinstated some products on their formularies. One facility stated that although it is commonly believed that limiting OTC medications would result in a higher use of more expensive prescription medications, it had not found this to be true. As OTC products are removed from formularies, veterans will have to obtain the products elsewhere. Some VA facilities reported that they are using VA’s Canteen Service to provide OTC products that have been eliminated from their formularies.
The Canteen Service operates stores in almost every VA facility to sell a variety of items, including some OTC products. For example, the Baltimore pharmacy has asked the Canteen Service store to stock about 13 OTC products that were recently eliminated from its formulary. The Baltimore pharmacy has already shifted most dietary supplements to the store. VA Canteen Service stores do not use federal funds to operate and generally provide items at a discount, in large part because they do not have the expense of advertising. By allowing these stores to sell OTC products, VA may reduce both dispensing and ingredient costs for its pharmacies. At the same time, VA’s Canteen Service stores can provide many veterans with a convenient and possibly less costly option for obtaining these products than other local outlets.

Expanding Veteran Copayment Requirements Would Enhance Revenues

The Congress could reduce the federal share of VA pharmacies’ costs for filling OTC prescriptions by expanding copayment requirements. This could be achieved through (1) tightening exemption criteria, (2) requiring copayments for medical supplies, or (3) raising the copayment amount. An example using VA’s Baltimore facility shows the different degrees of impact these changes would have. There, as previously discussed, veterans’ copayments cover only 7 percent of the pharmacy’s OTC costs. If the copayment were to remain at $2 for each 30-day supply, changes that expand the number of veterans required to make copayments could increase the veterans’ share of costs up to 31 percent and thereby reduce the pharmacy’s share from 93 to 69 percent. In contrast, a copayment of about $9 would be needed to achieve a comparable sharing rate if existing exemptions were maintained.

Restricting OTC Copayment Exemptions

Some veterans are required to make copayments, while others are not.
When the Congress established medication copayments in 1990, veterans with service-connected disabilities rated at 50 percent or higher were exempted for any condition, as were other veterans who receive medications for service-connected conditions. In 1992, the Congress exempted veterans from the copayment requirement for nonservice-connected conditions if their income was below a specified threshold. Veterans with service-connected conditions received about one-third of the 116,000 prescriptions filled at the Baltimore pharmacy. Of these veterans, half had disability ratings of 50 percent or higher. Veterans without service-connected conditions received the remaining two-thirds, and about half of these veterans were exempt from making copayments because their incomes were below the statutory threshold. VA officials told us that while some low-income veterans may have difficulties making copayments, most did not appear to have such a problem before the 1992 enactment of the low-income exemption. The Baltimore pharmacy could have recovered an additional 7 percent of its costs if all veterans without service-connected conditions were required to make copayments for OTC products and an additional 11 percent if veterans were required to make copayments for OTC products provided for service-connected and nonservice-connected conditions. Using a lower income level in determining which veterans are exempt from making copayments would also reduce the federal cost of providing OTC products. We found that VA facilities were inappropriately using an income level set at VA’s aid-and-attendance pension rate rather than at the regular pension rate. After we informed VA’s General Counsel of the practice, it issued a May 1996 opinion that the law requires VA facilities to use the regular pension rate as the income level. Using this lower income level should allow facilities to collect copayments from veterans who would not otherwise have been charged. (See app.
V for VA’s General Counsel’s memorandum on the pension rate.)

Requiring OTC Copayments for Medical Supplies

Requiring copayments for medical supplies would enhance revenues. When the Congress established a copayment requirement for medications in 1990, it did not include a copayment requirement for medical supplies. VA officials told us that they know of no reason why medical supplies should be treated differently from other product categories in terms of copayments. Nationwide, VA pharmacies dispensed medical supplies about 4 million times to veterans in fiscal year 1995, including about 36,000 times at the Baltimore pharmacy. Baltimore provided most supplies for 30 days or less, generally preceding or following a VA hospital stay. Many kinds of supplies, however, were provided for longer-term conditions, such as diabetic and ostomy supplies or diapers for those suffering from incontinence. We estimate that the Baltimore facility could have recovered an additional 6 percent of its OTC product costs in fiscal year 1995 if veterans had been required to make copayments for medical supplies used to treat nonservice-connected conditions.

Raising the OTC Copayment Amount

If the exemptions and collection rates remain unchanged, facilities would need to charge a higher copayment to recover a larger share of their OTC product costs. For example, at the Baltimore facility, recoveries could be raised from 7 percent to 32 percent if the legislatively established copayment amount were $9 for a 30-day supply. If some changes are made to the exemptions, however, this target share could be achieved with a smaller increase in the copayment rate, as shown in table 2.

Conclusions

Most VA facilities provide more generous OTC product benefits than other health care plans. In addition, VA facilities provide other features, such as free OTC product mail service, that are not commonly available from other plans.
As a result, VA facilities devote significant resources to the provision of OTC products that other plans have elected not to spend. VA should be commended for instructing network directors to consolidate formularies. This action, which is currently in progress, has not yet achieved an adequate level of consistency or cost-containment systemwide because the networks’ current formularies approximate the more generous coverage of OTC products at some VA facilities. Moreover, some networks are permitting facilities to provide less generous coverage of OTC products than these networks’ formularies allow. This is likely to perpetuate the uneven availability of OTC products. Given the disagreement among networks and facilities over the provision of OTC products, additional guidance may be needed to ensure that veterans have a consistent level of access to OTC products systemwide. In light of concerns about potential resource shortages at some facilities, tailoring the availability of OTC products for nonservice-connected conditions to be more in line with that at less generous facilities would seem desirable. This would essentially limit OTC products to those most directly related to VA hospitalizations. VA facilities could also reduce their costs if they restructured OTC product dispensing and copayment collection processes. In general, most facilities dispense OTC product refills too frequently, mail products too often, and allow veterans to delay copayments too frequently. Although some facilities have adopted measures to operate more efficiently, all facilities could benefit by doing so. VA facilities should be able to collect copayments for OTC products from more veterans if they use the appropriate income threshold to determine which veterans owe copayments. In May 1996, VA’s General Counsel concluded that the income threshold, as prescribed by law, should be the regular pension rate for most cases, not the higher aid-and-attendance rate. 
VA facilities had been using the higher aid-and-attendance rate. Expanding veterans’ share of the costs would also help reduce federal resource needs. This could be achieved by expanding copayment requirements to include medical supplies, reducing the income threshold for veterans with nonservice-connected conditions, or increasing the amount of copayment required. In addition to enhancing revenues, such changes could also act as important incentives for veterans to obtain only the OTC products from VA facilities that they expect to use. Finally, some VA facilities have had success using the Canteen Service stores to stock and sell OTC products that the facilities had removed from their formularies. This seems to be a reasonable alternative for providing OTC products to veterans at costs below those of other local outlets.

Matters for Consideration by the Congress

The Congress could reduce federal expenditures for OTC products provided to veterans by amending 38 U.S.C. 1722A to increase the medication copayment amount; expand the coverage of the medication copayment to include medical supplies; or lower the income threshold VA uses to determine which veterans owe medication copayments.
Recommendations to the Secretary

We recommend that the Secretary of Veterans Affairs require the Under Secretary for Health to limit OTC products for nonservice-connected conditions to those most directly related to VA hospitalizations or those considered most essential to prevent hospitalization; standardize the availability of OTC products to give veterans more consistent levels of access to them systemwide; reduce VA’s dispensing costs for OTC products by (1) providing, when appropriate, more economical quantities (more than a 90-day supply) of medications and supplies and (2) limiting mail service to certain situations; require veterans to make copayments at the time OTC products are dispensed; and direct facilities to apply the statutory income threshold to determine which veterans owe medication copayments.

Agency Comments and Our Evaluation

In commenting on a draft of our report, VA’s Under Secretary for Health agreed to standardize the availability of OTC products nationwide and estimated this will be done by May 1997. VA also agreed to use the statutory income threshold (the regular pension rate) instead of the aid-and-attendance rate to determine which veterans should be exempt from medication copayments. VA estimated that most veterans who were previously exempt from the medication copayment because of their income levels will now be required to make payments. However, VA disagreed with our other recommendations. Our recommendations were intended to identify ways that VA could conserve OTC pharmaceutical resources so that they could be redirected to provide more essential health care services for veterans. VA faces serious budget challenges today and in the future. These challenges are forcing management to make choices about how to best use limited resources to maintain the present level of health care services for veterans.
Nationwide, VA’s managers are faced with taking every reasonable action to ensure that they are providing high-quality medical care in a cost-effective manner. Our recommendations, for the most part, were based on actions certain VA pharmacies have already taken.

Limiting OTC Products

VA did not concur with our recommendation to limit OTC products for nonservice-connected conditions to those most directly related to VA hospitalizations or those considered most essential to prevent hospitalization. VA stated that its policy to provide patients with medications, medical supplies, and dietary supplements is based on the clinical determination that these items are medically necessary. VA pointed out that continuity of care is a cornerstone of primary care practice with emphasis on preventive care and asserted that implementation of this recommendation would probably lead to fragmented care. VA stated that fragmentation of care can lead to an overall increase in health care costs. Restriction of OTC products could also lead to a shift in prescribing patterns. To ensure that the patient will actually get the needed medication, physicians may order more expensive prescription items if OTC versions are not provided by VA pharmacies, a practice that would lead to increased overall expenditures. Our recommendation was designed to bring VA’s provision of OTC products into closer alignment with the practices of the vast majority of health care plans in this country. Generally, private health care plans provide primary care but exclude OTC products as a benefit for their participants—that is, they expect enrollees to obtain OTC products from other sources at their own expense. Furthermore, what we are recommending is that VA do on a systemwide basis what several of its own facilities have done. VA’s local facilities generally factor in drug substitution and potential health effects when making their decisions about which drugs to provide.
Some of them have already made the tough choices about which OTC drugs were essential to provide, and they did not report encountering, to any great extent, the types of potential problems that VA expressed concern about in its comments. Limiting VA pharmacies’ provision of certain OTC items presumes that veterans will obtain the items from other local outlets if they share their physicians’ assessment of the products’ medical necessity.

Dispensing More Economical Quantities

VA agreed that OTC products should be provided in more economical quantities to reduce VA’s dispensing costs but only in those instances deemed clinically appropriate. VA stated that the current medication renewal process often serves as a good opportunity for the patient to have personal contact with the health care provider and to be reevaluated for medication compliance. VA also stated that quantity limitation must be based on quality of care considerations and the individual veteran’s ability to comply with his or her medication regimen. Also, consideration must be given to the stability of the drug in question. Our recommendation was intended to reduce the dispensing costs associated with OTC products and touches on prescription refills rather than prescription renewals. For chronic conditions, VA prescriptions are usually written for 6- or 12-month periods with refills. Renewing the prescriptions once or twice a year does provide opportunities for veterans to see VA health care practitioners, but refilling those prescriptions every 90 days in the interim does not. VA pharmacy officials told us that routine refills are generally handled by mail with no interaction between physicians and veterans. Analgesics, such as aspirin and acetaminophen, which VA dispensed almost 3 million times in fiscal year 1995, provide an example of how refill quantities influence costs. VA could save about $3 in dispensing costs each time it provided one 180-day supply instead of two 90-day supplies.
When sold in local outlets, aspirin is commonly packaged in quantities of 100 to 500 tablets, making it possible for veterans and others to readily buy more than 180-day supplies without raising concerns about medical safety or product stability. OTC products are safe when the manufacturers’ labeling directions are followed and, as in the case of aspirin, are stable enough to be stored in users’ homes for 6 months or longer without adverse consequences.

Limiting Mail Service

VA did not concur with our recommendation to reduce VA’s dispensing costs for OTC products by limiting mail service to certain situations. VA stated that implementing this recommendation would undermine the important health care goals of patient satisfaction and customer service. Also, VA stated that mail service helps to reduce daily crowding and congestion in ambulatory care and parking areas of VA treatment facilities. When resources are limited, choices about whether to fund certain OTC products have to be made by local VA pharmacies. Some VA pharmacies reported to us that they continued to provide certain OTC products, such as cases of liquid dietary supplements or diapers, but did not mail them. Veterans needing such OTC products have to pick them up at the pharmacy (exceptions are made when warranted). Again, we are only recommending that VA do, on a systemwide basis, what several of its facilities have done independently.

Collecting Copayments When Products Are Dispensed

VA did not concur with our recommendation to require veterans to make copayments at the time OTC products are dispensed. VA stated that to the fullest extent possible, veterans are encouraged to make copayments at the time OTC products are dispensed. An estimated 35 percent of prescription copayments are collected at the time of dispensing. Because approximately 50 percent of all outpatient prescriptions are mailed, VA stated, it is obvious that copayment collection rates at the time of dispensing are already high.
Collection decisions must be made on an individual basis, according to VA, which stated that a veteran will not be denied a medically necessary product if for some reason copayment cannot be made at the time the product is dispensed. During our examination of the copayment process at the VA facility we visited, however, we found that veterans were not presented a copayment bill or required to make payments at the time OTC products were dispensed at the pharmacy. Instead, the facility primarily mailed copayment bills to veterans, incurring additional administrative costs. Because VA’s records showed only total copayment collections, copayments received by mail or collected by the cashier could not be differentiated. Our work showed that about 25 percent of OTC copayments billed were uncollected. VA incurs additional administrative costs to pursue these uncollected copayments. Collecting the copayments for OTC products at the time of dispensing would eliminate the administrative costs to bill and rebill delinquent payers. Veterans could help conserve VA’s limited resources by making copayments when they pick up the OTC products at the pharmacy or by including their copayments when ordering refills by mail. Given current copayment rates of $2 for a 30-day quantity, our recommendation would not seem to be overly burdensome on veterans. The full text of VA’s comments is in appendix VI. We are sending copies to appropriate congressional committees; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please call me on (202) 512-7101 if you or your staff have any questions concerning this report. Contributors to this report are listed in appendix VII. 
GAO Questionnaire Results

[The printed report includes appendix tables, not reproduced here, listing VA facilities by the total number of unique OTC products dispensed in fiscal year 1995 and by the percentage of pharmacy workload attributable to OTC products in fiscal year 1995. In those tables, HD = facility on Highland Drive, Pittsburgh; UD = facility on University Drive, Pittsburgh.]

100 Commonly Dispensed OTC Products That Accounted for About 70 Percent of the OTC Workload, Fiscal Year 1995

Department of Veterans Affairs Office of General Counsel’s Opinion on the Low Income Exemption From the Pharmacy Copayment

Comments From the Department of Veterans Affairs

GAO Contacts and Staff Acknowledgments

In addition to those named above, the following individuals made important contributions to this report: Mike O’Dell, Mark Trapani, Paul Wright, Deena El-Attar, and Joan Vogel.
GAO reviewed the Department of Veterans Affairs' (VA) provision of over-the-counter (OTC) medications, medical supplies, and dietary supplements to veterans, focusing on: (1) what OTC products VA pharmacies dispense; (2) how VA provision of OTC products compares with that of non-VA health care providers; (3) how much VA spends on OTC products and how much VA recovers through veterans' copayments; and (4) opportunities to reduce federal expenditures for OTC products. GAO found that: (1) all VA pharmacies provide some OTC products that are available through other local outlets; (2) the most frequently dispensed OTC products were medications, dietary supplements, and medical supplies; (3) individual VA pharmacies offer a different assortment of OTC products; (4) some pharmacies restrict which veterans may receive OTC products or in what quantity; (5) other public and private health care plans cover few, if any, OTC products for beneficiaries; (6) in fiscal year (FY) 1995, VA pharmacies dispensed OTC products more than 15 million times at an estimated cost of $165 million, including $48 million in handling costs; (7) VA recovered about $7 million through veterans' copayments; and (8) to reduce the resources devoted to dispensing OTC products, VA could more narrowly define when to provide OTC products, more efficiently dispense OTC products and collect copayments, and further reduce the number of OTC products dispensed on an outpatient basis.
Background

The Immigration Reform and Control Act of 1986 created the Visa Waiver Program as a pilot program. It was initially envisioned as an immigration control and economic promotion program, according to State. Participating countries were selected because their citizens had a demonstrated pattern of compliance with U.S. immigration laws, and the governments of these countries granted reciprocal visa-free travel to U.S. citizens. In recent years, visa waiver travelers have represented about one-half of all nonimmigrant admissions to the United States. In 2002, we reported on the legislative requirements to which countries must adhere before they are eligible for inclusion in the Visa Waiver Program. In general, to qualify for visa waiver status, a country must: maintain a nonimmigrant refusal rate of less than 3 percent for its citizens who apply for business and tourism visas; certify that it issues machine-readable passports to its citizens; and offer visa-free travel for U.S. citizens. Following the 9-11 attacks, Congress passed additional laws to strengthen border security policies and procedures, and DHS and State instituted other policy changes that have affected the qualifications for countries to participate in the Visa Waiver Program. For example, all passports issued after October 26, 2005, must contain a digital photograph printed in the document, and passports issued to visa waiver travelers after October 26, 2006, must be electronic (e-passports). In addition, the May 2002 Enhanced Border Security and Visa Entry Reform Act required that participating countries certify that the theft of their blank passports is reported to the U.S. government in a timely manner.

Visa Waiver Program Has Benefits and Risks

The Visa Waiver Program has many benefits. The program was created to facilitate international travel without jeopardizing the welfare or security of the United States, according to the program’s legislative history.
According to economic and commercial officers at several of the U.S. embassies we visited, visa-free travel to the United States boosts international business travel and tourism, as well as airline revenues, and creates substantial economic benefits to the United States. Moreover, the program allows State to allocate its resources to visa-issuing posts in countries with higher-risk applicant pools. In 2002, we reported that eliminating the program would increase State’s resource requirements, as millions of visa waiver travelers who have benefited from visa-free travel would need to obtain a visa to travel to the United States. Specifically, if the program were eliminated, we estimated that State’s initial costs at that time to process the additional workload would likely range between $739 million and $1.28 billion and that annual recurring costs would likely range between $522 million and $810 million. In addition, visa waiver countries could begin requiring visas for U.S. citizens traveling to the 27 participating countries for temporary business or tourism purposes, which would impose a burden of additional cost and time on U.S. travelers to these countries.

Visa Waiver Program Can Pose Risks to U.S. Security, Law Enforcement, and Immigration Interests

The Visa Waiver Program, however, can also pose risks to U.S. security, law enforcement, and immigration interests because some foreign citizens may exploit the program to enter the United States. First, visa waiver travelers are not subject to the same degree of screening as travelers who must first obtain a visa before arriving in the United States (see fig. 1). Visa waiver travelers are first screened in person by a DHS Customs and Border Protection (CBP) inspector once they arrive at the U.S. port of entry, perhaps after having already boarded an international flight bound for the United States with a fraudulent travel document.
According to the DHS OIG, primary border inspectors are at a disadvantage when screening Visa Waiver Program travelers because they may not know the alien’s language or local fraud trends in the alien’s home country, nor have the time to conduct an extensive interview. In contrast, non-visa-waiver travelers, who must obtain a visa from a U.S. embassy or consulate, receive two levels of screening before entering the country—in addition to the inspection at the U.S. port of entry, these travelers undergo an interview by consular officials overseas, who conduct a rigorous screening process when deciding to approve or deny a visa. As we have previously reported, State has taken a number of actions since 2002 to strengthen the visa issuance process as a border security tool. Moreover, consular officers have more time to interview applicants and examine the authenticity of their passports, and may also speak the visa applicant’s native language, according to consular officials. Therefore, inadmissible travelers who need visas to enter the United States may attempt to acquire a passport from a Visa Waiver Program country to avoid the visa screening process. Another risk inherent in the program is the potential exploitation by terrorists, immigration law violators, and other criminals of a visa waiver country’s lost or stolen passports. DHS intelligence analysts, law enforcement officials, and forensic document experts all acknowledge that misuse of lost and stolen passports is the greatest security problem posed by the Visa Waiver Program. Lost and stolen passports from visa waiver countries are highly prized travel documents, according to the Secretary General of Interpol. Moreover, Visa Waiver Program countries that do not consistently report the losses or thefts of their citizens’ passports, or of blank passports, put the United States at greater risk of allowing inadmissible travelers to enter the country. 
Fraudulent passports from Visa Waiver Program countries have been used illegally by travelers seeking to disguise their true identities or nationalities when attempting to enter the United States. For example, from January through June 2005, DHS reported that it confiscated, at U.S. ports of entry, 298 fraudulent or altered passports issued by Visa Waiver Program countries that travelers were attempting to use for admission into the United States. Although DHS has intercepted some travelers with fraudulent passports at U.S. ports of entry, DHS officials acknowledged that an undetermined number of inadmissible aliens may have entered the United States using a lost or stolen passport from a visa waiver country. According to State, these aliens may have been inadmissible because they were immigration law violators, criminals, or terrorists. For example:

In July 2005, two aliens successfully entered the United States using lost or stolen Austrian passports. DHS was not notified that these passports had been lost or stolen prior to this date; the aliens were admitted, and there is no record of their departure, according to CBP. In October 2005, CBP referred this case to DHS’s Immigration and Customs Enforcement for further action.

In June 2005, CBP inspectors admitted into the United States two aliens using Italian passports that were from a batch of stolen passports. CBP was notified that this batch was stolen; however, the aliens altered the passport numbers to avoid detection by CBP officers. DHS has no record that these individuals departed the United States.

Process for Assessing Program Risks Has Weaknesses

DHS has taken several steps to assess the risks of the Visa Waiver Program. However, we identified problems with the country review process by which DHS assesses these risks, namely a lack of inclusiveness, transparency, and timeliness.
Furthermore, OIE is unable to effectively monitor the immigration, law enforcement, and security risks posed by visa waiver countries on a continuing basis because of insufficient resources.

Initial Steps Taken To Assess Risk of Visa Waiver Program

In April 2004, the DHS OIG identified significant areas where DHS needed to strengthen and improve its management of the Visa Waiver Program. For example, the OIG found that a lack of funding, trained personnel, and other issues left DHS unable to comply with the mandated biennial country assessments. In response to these findings, DHS established OIE’s Visa Waiver Program Oversight Unit in July 2004, and named a director to manage the office. The unit’s mission is to oversee Visa Waiver Program activities and monitor countries’ adherence to the program’s statutory requirements, ensuring that the United States is protected from those who wish to do it harm or violate its laws, including immigration laws. Since the unit’s establishment, DHS, and particularly OIE, has made strides to address concerns raised by the 2004 OIG review. For example, DHS completed comprehensive assessments of 25 of the 27 participating countries and submitted a six-page report to Congress in November 2005 that summarized the findings from the 2004 assessments.

DHS Lacks a Clearly Defined, Consistent, and Timely Process to Assess Risks of Visa Waiver Program

Despite these steps to strengthen and improve the management of the program, we identified several problems with the mandated biennial country assessment process, by which DHS assesses the risks posed by each of the visa waiver countries’ continued participation in the program. For the 2004 assessments, we found the following:

Some key stakeholders were excluded from the process.
After conducting the site visits and contributing to the reports on the site visits, DHS and the interagency working group did not seek input from all site visit team members while drafting and clearing the final country assessments and subsequent report to Congress. For example, DHS’s forensic document analysts, who participated in the site visits in 2004, told us that they did not see, clear, or comment on the draft country assessments, despite repeated attempts to obtain copies of them. Additionally, at the time of our visits, ambassadors or deputy chiefs of mission in each of the six posts told us that they were not fully aware of the extent to which assessments for the country where they were posted discussed law enforcement and security concerns posed by the continued participation of the country in the program. Without this information, key stakeholders could not be effective advocates for U.S. concerns. The reviews lacked clear criteria to make key judgments. We found that DHS did not have clear criteria to determine at what point security concerns uncovered during their review would trigger discussions with foreign governments about these concerns and an attempt to resolve them. State officials agreed that qualitative and/or quantitative criteria would be useful when making these determinations, although DHS stated that the criteria should be flexible. DHS and its interagency partners neither completed the 25 country assessments nor issued the summary report to Congress in a timely manner. The interagency teams conducted site visits as part of the country assessments from May through September 2004, and transmitted the final summary report to Congress more than 1 year later, in November 2005. OIE, State, and Justice officials acknowledged that the assessments took too long to complete. 
The teams collecting information about the visa waiver countries’ risks in 2004 used, in some cases, information that was 2 years old; by the time the summary report was issued in November 2005, some of the data was over 3 years old. As a result of this lengthy process, the final report presented to Congress did not necessarily reflect the current law enforcement and security risks posed by each country, or the positive steps that countries had made to address these risks.

DHS Cannot Effectively Monitor Ongoing Concerns in Visa Waiver Countries

OIE is limited in its ability to achieve its mission because of insufficient staffing. The office has numerous responsibilities, including conducting the mandated biennial country reviews; monitoring law enforcement, security, and immigration concerns in visa waiver countries on an ongoing basis; and working with countries seeking to become members of the Visa Waiver Program. In 2004, the DHS OIG found that OIE’s lack of resources directly undercut its ability to assess a security problem inherent in the program—lost and stolen passports. The office received funding to conduct the country reviews in 2004 and 2005; however, OIE officials indicated that a lack of funding and full-time staff has made it extremely difficult to conduct additional overseas fieldwork, as well as track ongoing issues of concern in the 27 visa waiver countries—a key limitation in DHS’s ability to assess and mitigate the program’s risks. According to OIE officials, the unit developed a strategic plan to monitor the program, but has been unable to implement its plan with its current staffing of two full-time employees, as well as one temporary employee from another DHS component. Without adequate resources, OIE is unable to monitor and assess participating countries’ compliance with the Visa Waiver Program’s statutory requirements.
In addition to resource constraints, DHS has not clearly communicated its mission to stakeholders at overseas posts, nor identified points of contact within U.S. embassies, so it can communicate directly with field officials positioned to monitor countries’ compliance with Visa Waiver Program requirements and report on current events and issues of potential concern. In particular, within DHS’s various components, we found that OIE is largely an unknown entity and, therefore, is unable to leverage the expertise of DHS officials overseas. A senior DHS representative at one post showed us that her organizational directory did not contain contact information for OIE. Additionally, a senior DHS official in Washington, D.C., told us that he may find out about developments—either routine or emergent—in visa waiver countries by “happenstance.” Due to the lack of outreach and clear communication about its mission, OIE is limited in its ability to monitor the day-to-day law enforcement and security concerns posed by the Visa Waiver Program, and the U.S. government is limited in its ability to influence visa waiver countries’ progress in meeting requirements.

DHS Faces Difficulties in Mitigating Program Risks

DHS has taken some actions to mitigate the risks of the Visa Waiver Program. However, though the law has required the timely reporting of blank passport thefts for continued participation in the Visa Waiver Program since 2002, DHS has not established and communicated time frames and operating procedures to participating countries. In addition, DHS has sought to expand this requirement to include the reporting of data, to the United States and Interpol, on lost and stolen issued passports; however, participating countries are resisting these requirements, and DHS has not yet issued guidance on what information must be shared, with whom, and within what time frame. Furthermore, U.S.
border inspectors are unable to automatically access Interpol’s data on reported lost and stolen passports, which makes detection of these documents at U.S. ports of entry more difficult.

DHS Has Taken Some Actions to Mitigate Risks of the Visa Waiver Program

As previously mentioned, during the 2004 assessment process, the working group identified security concerns in several participating countries, and DHS took actions to mitigate some of these risks. Specifically, DHS determined that several thousand blank German temporary passports had been lost or stolen, and that Germany had not reported some of this information to the United States. As a result, as of May 1, 2006, German temporary passport holders are not allowed to travel to the United States under the Visa Waiver Program without a visa. In addition, DHS has enforced an October 26, 2005, deadline requiring travelers under the Visa Waiver Program to have digital photographs in their passports.

DHS Lacks Standard Procedures for Obtaining Stolen Blank Passport Data

A key risk in the Visa Waiver Program is stolen blank passports from visa waiver countries, because detecting these passports at U.S. ports of entry is extremely difficult, according to DHS. Some thefts of blank passports have not been reported to the United States until years after the fact, according to DHS intelligence reports. For example, in 2004, a visa waiver country reported the theft of nearly 300 blank passports to the United States more than 9 years after the theft occurred. The Enhanced Border Security and Visa Entry Reform Act of 2002 provides that the Secretary of Homeland Security must terminate a country from the Visa Waiver Program if he and the Secretary of State jointly determine that the country is not reporting the theft of its blank passports to the United States on a timely basis. However, DHS has not established time frames or operating procedures to enforce this requirement.
While the statute requires visa waiver countries to certify that they report information on the theft of their blank passports to the United States on a timely basis, as of June 2006, DHS has not defined what constitutes “timely” reporting. Moreover, the United States lacks a centralized mechanism for foreign governments to report all stolen passports. In particular, DHS has not defined to whom in the U.S. government participating countries should report this information.

Some Participating Visa Waiver Program Countries Are Resisting Additional Reporting to United States and Interpol

In addition to blank passports, lost or stolen issued passports also pose a risk because they can be altered. In June 2005, DHS issued guidance to participating Visa Waiver Program countries requiring that they certify their intent to report lost and stolen passport data on issued passports by August 2005. However, DHS has not yet issued guidance on what information must be shared, with whom, and within what time frame. Moreover, some visa waiver countries have not yet agreed to provide this information to the United States, due in part to concerns over the privacy of their citizens’ biographical information. In addition, several consular officials expressed confusion about the current and impending requirements about sharing this data, and felt they were unable to adequately explain the requirements to their foreign counterparts. In June 2005, the U.S. government also announced its intention to require visa waiver countries to certify their intent to report information on both lost and stolen blank and issued passports to Interpol. In 2002, Interpol developed a database of lost and stolen travel documents to which its member countries may contribute on a voluntary basis. While most visa waiver countries use and contribute to Interpol’s database, four do not. Moreover, some countries that do contribute do not do so on a regular basis, according to Interpol officials.
Participating countries have expressed concerns about reporting this information, citing privacy issues; however, Interpol’s database on lost and stolen travel documents does not include the passport bearers’ biographical information, such as name and date of birth. According to the Secretary General of Interpol, in light of the high value associated with passports from visa waiver countries, it is a priority for his agency to encourage countries to contribute regularly to the database.

Inefficient Access to Interpol’s Database on Lost and Stolen Passports

Though information from Interpol’s database could potentially stop inadmissible travelers from entering the United States, CBP’s border inspectors do not have automatic access to the database at the primary inspection point at U.S. ports of entry—the first line of defense against those who might exploit the Visa Waiver Program to enter the United States. The inspection process at U.S. ports of entry can include two stages—a primary and secondary inspection. If, during the primary inspection, the inspector suspects that the traveler is inadmissible either because of a fraudulent passport or other reason, the inspector refers the traveler to secondary inspection. At secondary inspection, border inspectors can contact officials at the National Targeting Center, who can query Interpol’s stolen-travel-document database to determine if the traveler’s passport had been previously reported lost or stolen, but is not yet on CBP’s watch list. However, according to DHS, State, and Justice officials, because Interpol’s data on lost and stolen travel documents is not automatically accessible to border inspectors at primary inspection, it is not currently an effective border screening tool. Moreover, according to the Secretary General of Interpol, until DHS can automatically query Interpol’s data, the United States will not have an effective screening tool for checking passports.
According to Interpol officials, the United States is working actively with Interpol on a potential pilot project that would allow for an automatic query of aliens’ passport data against Interpol’s database at primary inspection at U.S. ports of entry. However, DHS has not yet finalized a plan to do so.

Recommendations to Improve Program Oversight and DHS’s Response

In our report, we made a series of recommendations to improve the U.S. government’s process for assessing the risks in the Visa Waiver Program, including recommending that DHS provide additional resources to strengthen OIE’s visa waiver monitoring unit; finalize clear, consistent, and transparent protocols for the biennial country assessments and provide these protocols to stakeholders at relevant agencies at headquarters and overseas; create real-time monitoring arrangements for all 27 participating countries; and establish protocols for direct communication between overseas posts and OIE’s Visa Waiver Program Oversight Unit. In addition, we made recommendations to improve U.S. efforts to mitigate program risks, including requiring that all visa waiver countries provide the United States and Interpol with non-biographical data from lost or stolen issued passports, as well as from blank passports; developing clear standard operating procedures for the reporting of lost and stolen blank and issued passport data; and developing and implementing a plan to make Interpol’s stolen travel document database automatically available during primary inspection at U.S. ports of entry. Given the lengthy time it took for DHS to issue the November 2005 summary report to Congress, and to ensure future reports contain timely information when issued, we also proposed that Congress establish a biennial deadline by which DHS must complete the country assessments and report to Congress. DHS either agreed with, or stated that it is considering, all of our recommendations.
Regarding our matter for congressional consideration, DHS did not support the establishment of a deadline for the biennial report to Congress. Instead, DHS suggested that Congress should require continuous and ongoing evaluation. With continuous review, DHS stated that it would be able to constantly evaluate U.S. interests and report to Congress on the current 2-year reporting cycle on targeted issues of concern, rather than providing a historical evaluation. We agree that continuous and ongoing evaluation is necessary, and that is why we recommended that DHS create real-time monitoring arrangements and provide additional resources to the Visa Waiver Program Oversight Unit to achieve this goal. Regarding the mandated biennial country assessments, we believe that they can serve a useful purpose if they are completed in a timely fashion. In closing, the Visa Waiver Program aims to facilitate international travel for millions of people each year and promote the effective use of government resources. Effective oversight of the program entails balancing these benefits against the program’s potential risks. To find this balance, the U.S. government needs to fully identify the vulnerabilities posed by visa waiver travelers, and be in a position to mitigate them. It is imperative that DHS commit to strengthen its ability to promptly identify and mitigate risks to ensure that the Visa Waiver Program does not jeopardize U.S. security interests. This is particularly important given that many countries are actively seeking to join the program. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or Members of the Subcommittee may have.

Contacts and Staff Acknowledgments

For questions regarding this testimony, please call Jess T. Ford, (202) 512-4128 or fordj@gao.gov. Individuals making key contributions to this statement include John Brummet, Assistant Director, and Kathryn H. Bernet, Joseph Carney, and Jane S. Kim.
Related GAO Products and Ongoing Reviews

Issued Reports

Border Security: More Emphasis on State’s Consular Safeguards Could Mitigate Visa Malfeasance Risks. GAO-06-115. October 6, 2005.
Border Security: Strengthened Visa Process Would Benefit From Improvements in Staffing and Information Sharing. GAO-05-859. September 13, 2005.
Border Security: Actions Needed to Strengthen Management of Department of Homeland Security’s Visa Security Program. GAO-05-801. July 29, 2005.
Border Security: Streamlined Visas Mantis Program Has Lowered Burden on Foreign Science Students and Scholars, but Further Refinements Needed. GAO-05-198. February 18, 2005.
Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. September 9, 2004.
Border Security: Additional Actions Needed to Eliminate Weaknesses in the Visa Revocation Process. GAO-04-795. July 13, 2004.
Border Security: Improvements Needed to Reduce Time Taken to Adjudicate Visas for Science Students and Scholars. GAO-04-371. February 25, 2004.
Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-798. June 18, 2003.
Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. November 22, 2002.
Technology Assessment: Using Biometrics for Border Security. GAO-03-174. November 15, 2002.
Border Security: Visa Process Should Be Strengthened as an Antiterrorism Tool. GAO-03-132NI. October 21, 2002.

Ongoing Reviews

Review of International Aviation Passenger Prescreening. Requested by the Chairman and Ranking Member, Committee on the Judiciary, and the Ranking Member, Committee on Homeland Security, House of Representatives. Report expected in the fall of 2006.

Review of the Department of State’s Measures to Ensure the Integrity of Travel Documents. Requested by the Chairman, Committee on the Judiciary, House of Representatives; Chairman John N. Hostettler and member Darrell E. Issa, Subcommittee on Immigration, Border Security and Claims, Committee on the Judiciary, House of Representatives; and Chairman Lamar S. Smith, Subcommittee on Courts, the Internet, and Intellectual Property, Committee on the Judiciary, House of Representatives. Report expected in the spring of 2007.

Review of the Department of State’s Effort to Address Delays in Visa Issuance. Requested by the Chairman and Ranking Member of the Committee on Government Reform, House of Representatives. Report expected in the spring of 2007.

Review of Immigrant Visa Processing. Requested by the Ranking Member, Committee on Homeland Security, House of Representatives. Report expected in mid-2007.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Visa Waiver Program enables citizens of 27 countries to travel to the United States for tourism or business for 90 days or less without obtaining a visa. In fiscal year 2005, nearly 16 million people entered the country under the program. After the September 11 terrorist attacks, the risk that aliens would exploit the program to enter the United States became more of a concern. This testimony discusses our recent report on the Visa Waiver Program. Specifically, it (1) describes the Visa Waiver Program's benefits and risks, (2) examines the U.S. government's process for assessing potential risks, and (3) assesses the actions taken to mitigate these risks. We met with U.S. embassy officials in six program countries and reviewed relevant procedures and reports on participating countries.

The Visa Waiver Program has many benefits as well as some inherent risks. It facilitates travel for millions of people and eases consular workload, but poses challenges to border inspectors, who, when screening visa waiver travelers, may face language barriers or lack time to conduct in-depth interviews. Furthermore, stolen passports from visa waiver countries are prized travel documents among terrorists, criminals, and immigration law violators, creating an additional risk. While the Department of Homeland Security (DHS) has intercepted many fraudulent documents at U.S. ports of entry, DHS officials acknowledged that an undetermined number of inadmissible aliens may have entered the United States using a stolen or lost passport from a visa waiver country.

DHS's process for assessing the risks of the Visa Waiver Program has weaknesses. In 2002, Congress mandated that, every 2 years, DHS review the effect that each country's continued participation in the program has on U.S. law enforcement and security interests, but did not set a reporting deadline. In 2004, DHS established a unit to oversee the program and conduct these reviews.
We identified several problems with the 2004 review process, as key stakeholders were not consulted during portions of the process, the review process lacked clear criteria and guidance to make key judgments, and the final reports were untimely. Furthermore, the monitoring unit cannot effectively achieve its mission to monitor and report on ongoing law enforcement and security concerns in visa waiver countries due to insufficient resources. DHS has taken some actions to mitigate the program's risks; however, the department has faced difficulties in further mitigating these risks. In particular, the department has not established time frames and operating procedures regarding timely stolen passport reporting—a program requirement since 2002. Furthermore, DHS has sought to require the reporting of lost and stolen passport data to the United States and the International Criminal Police Organization (Interpol), but it has not issued clear reporting guidelines to participating countries. While most visa waiver countries report to Interpol's database, four do not. Further, DHS is not using Interpol's data to its full potential as a border screening tool because U.S. border inspectors do not automatically access the data at primary inspection.
Background The National Mall in Washington, D.C., traces its history in part to plans developed by Pierre Charles L’Enfant and the U.S. Senate’s Park Commission of the District of Columbia—commonly known as the McMillan Commission. The L’Enfant Plan of 1791 envisioned the National Mall as a grand avenue beginning at the U.S. Capitol and extending west to the current site of the Washington Monument. The McMillan Commission Plan of 1901-1902 extended the National Mall further west and south to the future sites of the Lincoln and Jefferson Memorials. Multiple geographic definitions of the National Mall exist. For example, the narrowest definition of the National Mall encompasses the area between 1st and 14th Streets and Constitution and Independence Avenues. Broader definitions of the National Mall extend its boundaries to include the grounds of the Washington Monument and the grounds of the Lincoln and Jefferson Memorials, while other definitions also include the U.S. Capitol, the White House, the Ellipse, and West Potomac Park. For the purposes of our report, we defined the National Mall as the area extending from the foot of the U.S. Capitol grounds west to the Washington Monument and proceeding further west and southeast to include the Lincoln and Jefferson Memorials. It also includes the area between Constitution and Independence Avenues between 1st and 14th Streets (see fig. 1). The open spaces of the National Mall, along with the Washington Monument, the Lincoln and Jefferson Memorials, and other memorials, are (1) administered and maintained by the National Capital Parks unit of the National Park Service (Park Service), which is within the Department of the Interior (Interior), and (2) patrolled by the U.S. Park Police. 
In addition, other federal agencies control and maintain various facilities located on the National Mall, as described below: Smithsonian Institution (Smithsonian): Created as a trust instrumentality of the United States by an act of Congress in 1846, the Smithsonian is considered the world’s largest museum and research complex, featuring 11 facilities on the National Mall—that is, the Smithsonian Castle, Arts and Industries Building, Freer Gallery of Art, Hirshhorn Museum and Sculpture Garden, National Air and Space Museum, National Museum of African Art, National Museum of American History, National Museum of the American Indian, National Museum of Natural History, Arthur M. Sackler Gallery, and S. Dillon Ripley Center. National Gallery of Art (National Gallery): With the gift of Andrew W. Mellon’s collection of paintings and works of sculptures, the National Gallery was created by a joint resolution of Congress in 1937. Located at the northeast corner of the National Mall, the National Gallery today maintains two buildings—the West and East Buildings, opened in 1941 and 1978, respectively—and an outdoor Sculpture Garden, opened to the public in 1999. Department of Agriculture (USDA): The only cabinet-level agency building located on the National Mall is the USDA’s Whitten Building. In 1995, this building was named for former U.S. Representative Jamie L. Whitten. U.S. Botanic Garden (USBG): Tracing its origins as far back as 1816, USBG is managed under the direction of the Joint Committee on the Library, with the Architect of the Capitol responsible for the garden’s operations and maintenance. USBG’s Conservatory and the adjacent outdoor National Garden (currently under construction) are situated on the southeast corner of the National Mall. Security for USBG is provided by the U.S. Capitol Police. 
Along with the federal agencies that manage facilities on the National Mall, several governmental and other entities have an oversight, advisory, or advocacy role related to the construction, renovation, or modification of facilities, including the implementation of security enhancements, on the National Mall and throughout Washington, D.C. These entities include the following: National Capital Planning Commission (NCPC): NCPC, which is the federal government’s central planning agency for the National Capital Region, provides planning guidance for the development of federal land and buildings in the city. NCPC and federal agencies must comply with both the National Environmental Policy Act (NEPA) and the National Historic Preservation Act (NHPA). These laws require that federal agencies consider the effects of their undertakings on environmental quality and historic properties, respectively, and allow for public participation and comment. NCPC’s policies and procedures are meant to ensure compliance with these laws during its review process. NCPC also reviews the design of federal construction projects, oversees long- range planning for development, and monitors capital investment by federal agencies. Commission of Fine Arts (CFA): CFA provides advice to federal and D.C. government agencies on matters of art and architecture that affect the appearance of the capital city. D.C. State Historic Preservation Officer (SHPO) and Advisory Council on Historic Preservation (ACHP): Federal agencies that undertake the construction or renovation of properties in Washington, D.C., are required by law to assess whether there may be effects to designated historic properties, engage in consultation with the SHPO on effects to historic properties, and provide ACHP with an opportunity to comment. 
ACHP promotes the preservation, enhancement, and productive use of the nation’s historic resources and reviews federal programs and policies to promote effectiveness, coordination, and consistency with national preservation policies. National Coalition to Save Our Mall: Founded in 2000, the coalition comprises professional and civic organizations and concerned artists, historians, and citizens, providing a national constituency dedicated to the protection and preservation of the National Mall in Washington, D.C. The coalition’s mission is to “defend our national gathering place and symbol of Constitutional principles against threats posed by recent and ongoing proposals—for new memorials, security barriers, service buildings and roads—that would encroach on the Mall’s historical and cultural integrity, its open spaces and sweeping vistas, and its significance in American public life.” The physical security of federal facilities, including those on the National Mall, has been a more urgent governmentwide concern since the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma. The vulnerability of our nation’s infrastructure was further highlighted after the terrorist attacks of September 11. Since the September 11 attacks, actions have been taken to better protect our critical infrastructure and key assets from future terrorist attacks. In 2002, the Administration’s Office of Homeland Security issued The National Strategy for Homeland Security, which recognized the potential for attacks on national monuments and icons and identified Interior as the lead federal agency with jurisdiction over these key assets. The Administration outlined actions that Interior should take to protect national icons and monuments in The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets in 2003.
Furthermore, the Administration issued Homeland Security Presidential Directive 7 in December 2003, establishing a national policy for federal agencies to identify and prioritize U.S. critical infrastructure and key resources and to protect them from terrorist attacks. In response to the effects of what were viewed as makeshift security measures that affected the historic design and streetscape of Washington, D.C., NCPC’s Interagency Task Force issued a report in October 2001— Designing for Security in the Nation’s Capital—identifying design strategies to improve mobility and aesthetic conditions throughout Washington, D.C. The following year, NCPC released a design framework and implementation strategy for Washington’s “monumental core” and downtown area, National Capital Urban Design and Security Plan, which provided a summary of building perimeter security considerations; streetscape design concepts that incorporate security components; and an implementation strategy for the design, construction, funding, maintenance, and operations of security installations in Washington, D.C. (See the bibliography for additional reports related to this topic.) Likewise, improving the physical security of federal facilities has been the subject of several GAO reports, including our November 2004 report. In that report, we assessed the actions of the federal government’s Interagency Security Committee in coordinating federal facility protection efforts and delineated a set of six key practices emerging from the collective practices of federal agencies to provide a framework for guiding agencies’ facility protection efforts (see fig. 2). As previously mentioned, these key practices are allocating resources using risk management, leveraging technology, information-sharing and coordination, performance measurement and testing, aligning assets to mission, and strategic management of human capital. 
Federal Agencies Have Obligated about $132 Million for Physical Security Enhancements on the National Mall since September 11, and Additional Measures Are Planned Since the terrorist attacks of September 11, about $132 million has been obligated for physical security enhancements by federal agencies for facilities on the National Mall. Overall, the Park Service and the Smithsonian have incurred higher levels of obligations for physical security enhancements than other agencies because they manage most of the facilities on the National Mall (see table 1). Federal agencies obligated funds for physical security enhancements from funds made available through annual and supplemental appropriations. The implementation of physical security enhancements on the National Mall is shaped, in part, by the availability of funds and the costs of enhancements. Federal agencies often adjust their security plans on the basis of available funding. The remaining text in this section describes the physical security enhancements for which these agencies told us they have obligated funds, as well as some of the costs associated with implementing these enhancements. Additional planned physical security enhancements for each of the agencies are also discussed. National Park Service and U.S. Park Police The Park Service and the Park Police told us they obligated over $57 million for physical security enhancements, including security personnel, on the National Mall during fiscal years 2002 through 2004, primarily at the Washington Monument and the Lincoln and Jefferson Memorials. For each of these monuments and memorials, the Park Service incurred such obligations to conduct site surveys; develop security proposals; comply with environmental, historical, and design guidelines; hire construction managers; and replace temporary security measures with permanent security enhancements. 
Perimeter security construction was under way at both the Washington Monument and the Lincoln Memorial during our review, while designs for perimeter security at the Jefferson Memorial have not been finalized. The following text provides some examples of perimeter security enhancements implemented and planned at each of these national icons. The Washington Monument: After September 11, the Park Service installed closed-circuit television cameras, in addition to temporary security measures, such as a ring of jersey barriers and a visitor screening facility at the Washington Monument. During our review, the Monument was closed to the public because of construction to replace these temporary security features with permanent security enhancements. The Monument reopened in April 2005, and the grounds are expected to reopen in early summer. The grounds will be regraded, and 30-inch retaining walls, serving as both vehicle barriers and visitor seating, will surround the Monument. In addition, pedestrian pathways, upgraded lighting, and seating benches are expected to be installed on the Monument grounds. The total cost of constructing these permanent physical security enhancements is estimated at $12.2 million. The Park Service also told us it is considering the installation of a remote visitor screening facility; however, implementation of this security enhancement had not been approved or scheduled. The Lincoln Memorial: After September 11, concrete jersey barriers and planters were installed around the Lincoln Memorial ring and the circular drive east of the memorial was closed to all traffic. Construction is expected to be completed in spring 2006, at which time a 35-inch retaining wall will serve as a perimeter vehicle barrier around the north, west, and south sides of the memorial. In addition, bollards (short posts) will be installed on the east side of the circle to complete the vehicle barrier system. 
Construction costs for the vehicle barrier system are estimated at $5.1 million. The Lincoln Memorial is currently surrounded by temporary security enhancements that were installed shortly after September 11, 2001. These enhancements include the placement of jersey barriers along the circumference of the circular roadway surrounding the memorial. The chain-link fence shown in the above photograph has since been removed. The Park Service currently proposes to connect the approved and under construction retaining wall that will protect the north, west, and south sides of the Lincoln Memorial with a line of bollards on the inner curb of the east side of the circular roadway. These bollards will be at the foot of the steps leading from the memorial to the circular roadway. The choice of materials, metal or stone, as well as the design, has not been finalized. The Jefferson Memorial: Since the September 11 terrorist attacks, temporary concrete jersey barriers have been in place around the Jefferson Memorial, and the U-shaped drive on the south side of the memorial has been closed to traffic. In addition, the parking lot adjacent to the memorial has been closed to the public. The construction of a permanent vehicle barrier system, still in the design stage, is expected to begin in the winter of 2005 and to be completed in the winter of 2006 at an estimated cost of $4.1 million. In addition to funds specifically obligated at these national icons, the Park Service obligated funds in fiscal year 2002 for closed-circuit television cameras at various memorials located within the National Mall. Furthermore, the Park Police obligated funds during this time for security personnel and equipment support, such as X-ray machines, body armor, and vehicles. 
The Park Service told us the completion of permanent vehicle barriers and the installation of equipment and technology upgrades, such as permanent security cameras at each monument and memorial, were the only additional physical security enhancements planned on the National Mall at the time of our review. The proposed security barrier around the Jefferson Memorial consists of a combination of freestanding walls, reinforced decorative fencing, and bollards. Where possible, the barrier system will run along the park road to the south of the memorial. Thirty percent of the parking lost with the closure of the existing parking lot will be added just outside the barrier system, and an additional 200 parking spaces are available within 600 yards of the memorial. Smithsonian Institution In fiscal years 2002 through 2004, the Smithsonian obligated approximately $42 million for numerous physical security enhancements, such as additional security personnel, periodic risk assessments, perimeter vehicle barriers, blast mitigation film, closed-circuit television cameras, emergency voice systems, and electronic screening of the public and mail at its National Mall facilities. Some of these security enhancements were already completed at the time of our review. In other cases, enhancements already existed in a facility or are planned to be implemented during future renovations. Smithsonian officials noted that they have established priorities for the implementation of physical security enhancements, identifying as their top priorities the installation of perimeter security barriers and of blast protection film on their facilities’ windows. The Smithsonian plans to obligate an additional $72 million to implement these and other security enhancements between fiscal years 2006 and 2012. Perimeter vehicle barriers: Permanent barriers around the exterior of each of the Smithsonian’s National Mall facilities will replace existing temporary barriers to provide protection from vehicle bombs.
According to the Smithsonian, this security measure, which is to be implemented in three phases, is one of its highest priorities. The first phase, the construction of a perimeter barrier around the National Air and Space Museum, has already begun and is expected to be completed in February 2006. The second phase, the construction of perimeter barriers around the Smithsonian’s National Museum of American History and National Museum of Natural History, is expected to begin in July 2006 and to be completed in June 2008. The final phase, addressing perimeter security for the remaining Smithsonian facilities on the National Mall, will be implemented between April 2008 and April 2010. Smithsonian officials told us that $11 million was obligated for this project in fiscal years 2002 and 2003, and that an additional $24.7 million is planned for obligation through fiscal year 2008. Blast-resistant window system enhancement: For this enhancement, which is designed to prevent or reduce the number of deaths or injuries from flying glass, the Smithsonian obligated a total of $1.8 million in fiscal years 2003 and 2004 and plans to obligate an additional $44.9 million through fiscal year 2012. Perimeter closed-circuit television cameras: Providing surveillance of the grounds adjacent to the Smithsonian’s National Mall facilities to detect suspicious activities, this enhancement has been implemented by the Smithsonian at three of its facilities on the National Mall, resulting in obligations totaling $660,000 in fiscal year 2002. The Smithsonian canceled the implementation of this security enhancement at some of its other National Mall facilities but plans to implement the measure during future security upgrades or capital renovation projects.
Emergency voice systems: This enhancement, intended to enable emergency response staff to broadcast disaster- or emergency-related information to affected Smithsonian staff and visitors, was in place at three museums on the National Mall prior to September 11. To implement this enhancement at the remainder of its facilities, the Smithsonian obligated $2.9 million in fiscal year 2002. Electronic screening of the public and mail: According to the Smithsonian, this enhancement is designed to prevent a terrorist from carrying an explosive device or firearm into a Smithsonian facility, or to mitigate the effects of such a weapon’s use. The enhancement also is designed to detect explosives or biological agents delivered through the mail system. Although lack of space for screening equipment will limit the use of this security enhancement at its National Mall facilities, the Smithsonian does plan to implement this measure at some of its facilities. However, in some cases, renovations are required to install an adequate number of screening stations. The Smithsonian has deferred renovations to fully implement this measure until it can address higher priority security enhancements. In the meantime, several facilities have received full magnetometer screening and bag searches to limit the potential for explosive devices or firearms to enter a Smithsonian facility. The Smithsonian obligated $2.2 million in fiscal year 2002 for this enhancement. Besides funding the enhancements previously identified, the Smithsonian obligated about $20 million for additional security personnel and $1 million for risk assessments for its facilities during fiscal years 2002 through 2004. Furthermore, the Smithsonian has requested $700,000 for electronic access control measures and $2 million to deter, detect, or prevent the introduction of chemical, biological, or radiological agents into air intakes at its National Mall facilities. 
National Gallery of Art Officials from the National Gallery told us the gallery has obligated over $7 million to implement physical security enhancements at its East and West Buildings and Sculpture Garden since September 11. Funds have been obligated at both the East and West Buildings and for equipment and technology, such as magnetometers, X-ray machines, closed-circuit television cameras, and body armor. In addition, the National Gallery installed streetscape and landscape barriers, such as trees and boulders, along the exterior of the East Building; constructed a security guardhouse and modified the service entrance at the West Building; and deployed temporary barricades to be used during heightened security alerts. Finally, the National Gallery has obligated funds for an Integrated Security Management System, the review of its disaster management plan, and the review of vulnerability assessments for security against explosive devices. Although implementation of future security enhancements is subject to available funding, the following text describes some examples of security enhancements planned by the National Gallery: The National Gallery plans to conduct additional studies to evaluate its camera system and the need for an Emergency Operations Center (EOC). By determining the number and location of cameras currently in use throughout the National Gallery, the camera study will provide the gallery with the most comprehensive surveillance system possible. The EOC study will determine the National Gallery’s need for an off-site space to conduct security operations in the event of a large-scale emergency affecting the National Mall. The estimated cost of the studies is $350,000. The National Gallery plans to upgrade perimeter security through additional protections against explosions and hazardous agents.
These measures include erecting bollards and retractable steel plates around the perimeter of the East and West Buildings and Sculpture Garden to protect against unauthorized vehicles, adding window film to windows in the entire East Building and part of the West Building, and installing air intake protection sensors in the West Building to protect against biological agents or other materials. The estimated cost of implementing these enhancements is $1.4 million. The National Gallery plans to install additional equipment and technology, such as improved access controls and biometrics, perimeter cameras, and screening devices. For example, new employee identification badges (smart cards) will be authenticated and electronically tracked through the National Gallery’s Integrated Security Management System to protect against fraud. In addition, the National Gallery intends to improve security and access controls through the use of biometric systems. Additional external cameras will improve surveillance of the East and West Buildings and Sculpture Garden. Finally, X-ray machines and magnetometers that are already in use at some public entrances will be added at closed entrances at the West Building to improve visitor access during heightened security. The estimated cost of implementing these enhancements is $580,000. Department of Agriculture USDA has obligated about $25 million for physical security enhancements for its facilities on or adjacent to the National Mall since September 11. USDA conducted blast assessment studies, hired additional security personnel, and began installing window protection measures and a public address system at each of its Washington, D.C., facilities, in addition to developing a perimeter streetscape security master plan for the four-building headquarters complex. USDA also obligated funds for a situation room and a heating, ventilating, and air-conditioning (HVAC) air intake study at the Whitten Building located on the National Mall.
USDA plans to continue installing blast resistant windows for the South Building under its overall modernization project and safety drapes in additional locations in the four-building headquarters complex; it also plans to undertake major HVAC improvements against bioterrorism. However, the implementation of these measures is dependent on available funding and the priority given to these measures by USDA. In some cases, the security enhancements will be coordinated with major renovations of its facilities. Beginning in fiscal year 2006, USDA also plans to improve security around its facilities by implementing perimeter security barriers that it developed for the Whitten Building and adjacent facilities. USDA plans to implement this project in four phases based on funding availability and USDA’s assessment of each building’s location, vulnerability, and other factors (see fig. 3). Each phase can be subdivided and adjusted according to funding availability. The proposed security elements include a combination of bollards, fences, planters, tree well enclosures, and retaining and freestanding walls located primarily at the buildings’ roadways, curbs, and driveways. Specifically, at the Whitten Building facing the National Mall, USDA plans to install a combination of bollards and planters to create a 50-foot stand-off distance from the facility. The overall estimated cost of implementing these perimeter security enhancements is between $13 million and $14 million. U.S. Botanic Garden The U.S. Capitol Police is responsible for security at USBG. The physical security enhancements implemented at USBG include a visitor screening facility at the entrance of the Conservatory to detect weapons and explosives, security cameras, card readers throughout the Conservatory, an alarm system, and the addition of four security officers when the Conservatory is open to the public. The U.S. Capitol Police obligated $600,000 in fiscal year 2003 to implement these enhancements. U.S. 
Capitol Police officials told us they do not anticipate a need for additional funding for security enhancements at USBG.

Security Enhancements Have Incorporated Considerations of Public Access and Aesthetics and Have Been Generally Accepted by Visitors

Public access and aesthetics are vital to the design and approval of physical security enhancements to sites on the National Mall. Agencies are required to coordinate with reviewing organizations and consider aesthetics, historic preservation, urban design, urban planning, and environmental impacts when implementing physical security enhancements. Reports from federal agencies, along with responses to our own survey of National Mall visitors, indicate that visitors have found the current level of public access and the aesthetics of temporary and permanent physical security enhancements acceptable. The majority of survey respondents also indicated that aesthetics and public access should be given high priorities when adding security enhancements to the National Mall.

Access and Aesthetics Are Critical to the Design and Approval of Physical Security Enhancements on the National Mall

Agency officials told us that they consider public access and aesthetics in developing and designing physical security enhancements for their facilities on the National Mall. These officials noted that maintaining the cultural and historic character of their facilities is important, and that providing visitors with access to their facilities is fundamental to their educational and commemorative missions. For example, officials of the Smithsonian and National Gallery stressed the importance of ensuring the public’s access to their collections and exhibits when implementing security enhancements. Park Service officials noted that they want visitors to be able to access the monuments and memorials as they did before security enhancements were implemented.
Similarly, in terms of aesthetics, officials of the Smithsonian and National Gallery told us that in designing smaller security projects, they use exhibit and design specialists to ensure that the security projects are implemented according to consistent standards throughout their facilities. For larger security projects, they also work with security consultants, design specialists, and architecture and engineering firms to ensure that aesthetics are incorporated into their security designs. USBG works with the U.S. Capitol Police to incorporate aesthetics into security enhancements. For example, additional surveillance cameras were installed in less visible locations, while maintaining their overall security function. In the case of a facility that is under construction, such as the Smithsonian’s National Museum of the American Indian, security features can be integrated directly into the design of the structure without the need for the subsequent installation of potentially more conspicuous and obtrusive features (see fig. 4). After September 11, the Smithsonian altered the landscaping plan for the National Museum of the American Indian to integrate additional security enhancements into the design of the facility. Specifically, four substantial “grandfather rocks” were repositioned to locations where they could serve as a vehicle barrier, while maintaining the cultural and aesthetic significance of these objects. In most cases, however, agencies have had to develop and design physical security enhancements for facilities already in place on the National Mall. Still, officials of these agencies told us that public access and aesthetics are critical elements in the design of security enhancements.
For example, officials of the Smithsonian noted that the perimeter vehicle barriers that will be constructed around each of its museums on the National Mall have been designed with an eye toward integrating the architectural design and characteristics of the museums into the barriers. In addition, they noted that the height of the barriers will be adjusted in certain locations to achieve a better appearance and scale, improve pedestrian movement and accessibility, and provide space for visitors to sit on the barriers themselves. Similarly, the physical security enhancements to the Washington Monument that were under construction during our review were designed to ensure consistency in the historical landscaping of the grounds and in the spaces for visitors’ recreation. Although the Park Service developed alternative design proposals, including the one depicted in the figure below (right), the selected design includes a regrading of the Monument grounds and the construction of retaining walls that are intended to disappear into the landscape (see fig. 5).

Multiple Organizations Work with National Mall Agencies to Design and Review Security Enhancements

Several organizations work with the agencies that have facilities on the National Mall to ensure that security enhancements reflect access and aesthetic concerns. Specifically, the SHPO and ACHP, as well as NCPC and CFA, coordinate with the agencies that have facilities on the National Mall. Such coordination is designed to ensure that architecture, urban design, urban planning, aesthetics, historic preservation, and environmental impacts are considered when implementing physical security enhancements. For example, federal agencies must prepare an environmental assessment to determine the effects of proposed security enhancements on the human environment as part of the NEPA process.
In addition, because security enhancements may affect the historic character of properties on the National Mall, federal agencies are required to follow the NHPA’s Section 106 review process. Under this process, federal agencies must consider the effects of their actions on historic properties and address “adverse effects” that could diminish the integrity of those properties. Federal agencies are responsible for initiating the review process and for consulting with the SHPO on measures to deal with any adverse effects. In addition, ACHP is given a reasonable opportunity to comment as part of the NHPA process. Federal agencies are also required to solicit public input as part of both the NEPA and NHPA review processes. Finally, agencies must submit those designs that fall under the NCPC and CFA statutory authorities to these review organizations before security enhancements can be implemented. NCPC officials told us that they examine security projects comprehensively from a broad design and urban planning perspective to ensure the project’s consistency with the commission’s comprehensive urban design and planning documents, such as the Comprehensive Plan for the National Capital and the Urban Design and Security Plan. NCPC must give approval before a security enhancement project can be implemented. CFA officials told us they focus on visual appearance and on how security enhancements can be physically integrated into the urban environment. Although agencies must submit security designs to CFA, the commission plays an advisory role in reviewing security projects and cannot compel agencies to implement its recommendations. Projects are generally submitted to NCPC and CFA after the completion of most, if not all, of the NEPA and NHPA processes. These processes must be completed before NCPC approves the final design.
National Mall Agencies and Review Organizations Identified Challenges in Designing and Approving Security Enhancements

Although aesthetic and public access considerations are seen as critical elements in the design and approval of physical security enhancements to facilities on the National Mall, agency officials also told us that the process applicable to all construction and renovation projects in Washington, D.C.—requiring consultation with, and review and approval by, multiple review organizations—adds to project costs and can be both time-consuming and inefficient. Of particular concern to officials of these agencies was the seeming overlap in the consultations and reviews of projects required among the review organizations. For example, Park Service officials told us that in submitting a security proposal, one review organization might request a particular change to the design, and another organization might request an entirely different change. Sometimes, consensus on the design of a security project had been reached at the staff level within a review organization, but the commissioners within that organization then had different ideas about the project’s design. For example, designs for security enhancements for the eastern portion of the Lincoln Memorial have gone before the CFA’s commissioners several times for their review. Furthermore, some agency officials noted that the commissioners from CFA and NCPC might disagree on a particular security design. According to officials from the Park Service, there is currently no guidance available to assist agencies in moving forward on proposals that receive contradictory direction. These officials suggested that in such cases, commissioners, rather than staff, from both review organizations should consult with one another to resolve their differences and provide guidance to the agency on moving forward.
While CFA officials acknowledged that there is no formal process for resolving disagreements between commissions, they noted several options for reconciling such differences. For example, in some cases, agencies may be able to circulate revised drawings to the commissions in between formal meetings, or the commissions might delegate approval authority to the staff level, pending modifications. Finally, the public can comment on security proposals affecting the National Mall. As a result of competing stakeholder interests, it can take months or even years to go through the review process. The perimeter security designs for the Washington Monument illustrate the effects multiple stakeholders can have on a proposed security project’s design and schedule. Officials from the Park Service told us that a preliminary design for the Washington Monument was selected in December 2001. The design consisted primarily of landscape barriers that would provide perimeter security and an underground visitor screening facility. The Park Service submitted its design to CFA at this time, and, according to both parties, CFA approved the vehicular barrier portion of the design with only minor changes. In addition, Park Service officials told us that they submitted the security design to NCPC in January 2002 and received final approval for the perimeter security portion of the design in June 2003. Park Service officials noted the approval process for the Washington Monument design was relatively quick. However, the design for the underground screening facility did not receive final approval from CFA and received only preliminary approval from NCPC before the underground screening facility project was canceled. According to CFA officials, the screening facility as planned would have drastically changed how visitors accessed the Monument, and it was not an effective security proposal. 
CFA officials told us they proposed a number of alternatives for this portion of the project, but the Park Service rejected them. According to CFA officials, they have not recently discussed this project with the Park Service. Park Service officials told us that the concept for the underground screening facility was abandoned because of significant resistance from a number of stakeholders and because Congress never approved funding for the measure. Park Service officials told us the temporary screening facility that was in place before construction began at the Washington Monument will be put back in place until a permanent screening facility is designed. Review organizations also identified challenges in the review process for implementing security enhancements on the National Mall. Review organizations said they have concerns about their budgets and staff resources. Officials from these organizations told us that the number of security projects submitted for their review has greatly increased since the September 11 terrorist attacks. However, officials noted that they have not received additional funding or staff to respond to the increase in proposals. In addition, officials from CFA and NCPC noted that some agencies do not always justify the need for a particular security enhancement or identify the threat that the agency is trying to protect against. Officials from CFA noted that this type of information is helpful in developing a design that meets the needs of both the agency and the review organization. Furthermore, officials from CFA also noted that when applicants come to them after a project already has been designed, the applicant is often reluctant to make any changes or consider alternative approaches because of the time and money already invested.
Finally, both federal agencies and the review organizations noted that the limited number of security designs available to secure facilities in an urban environment presents a challenge in implementing security enhancements. Park Service officials noted that the technology available for perimeter security consists primarily of vehicle barrier systems (e.g., bollards, walls, and strengthened street furniture). However, these officials noted that the review organizations often do not approve security designs that exclusively consist of bollards.

National Mall Agencies and Review Organizations Identified Steps That Can Make the Review Process More Efficient

Several agency officials, along with the review organizations, stated that early and frequent consultation helps to ensure a smoother, more efficient review process. Both the agencies and the review organizations noted that informal consultations between all parties should continue throughout the design of the security project. Informal consultations can begin before “putting pen to paper” and should occur during the project’s preliminary design phase. According to these officials, security proposals, in particular, benefit from these early consultations because of their importance and sensitivity. Both the review organizations and the federal agencies identified the following additional actions that could lead to a more efficient review process:

Consult early and frequently with all relevant stakeholders: Consulting with all of the review organizations that play a role in the design and approval of security enhancements at the same time not only facilitates a more efficient review process, but doing so can also improve relations between agencies and review organizations over time. In addition, consulting with all stakeholders allows for the expression of everyone’s views and concerns up front.
Moreover, consultation with the staff and, in some cases with the commissioners of the review organizations, allows them to react informally to a proposed design, thereby giving agencies the opportunity to incorporate their opinions into the proposal. Officials from NCPC told us that their commissioners and CFA’s commissioners might disagree on a design proposal because they are providing a first reaction to a design that was not previously discussed during informal consultations. In such cases, agencies may have to go back through the review process to meet everyone’s needs, which can take several additional months or even years, in addition to costing the agency financial and staff resources. However, officials from the review organizations noted that disagreements between the two commissions occur infrequently, perhaps once a year. According to the Park Service, disagreements between the two commissions seem to occur more often with security projects that include some of our nation’s memorials. For example, Park Service officials noted that they have received different direction from the two commissions on the Washington Monument, Lincoln Memorial, and Jefferson Memorial security projects. In considering a design for its perimeter security projects, the Smithsonian consulted with all of the review organizations before developing a concept design. The parties discussed different design options, and the Smithsonian was able to incorporate the review organizations’ comments and suggestions into its proposal. According to CFA, the Smithsonian also selected a designer that considered the needs of the agency and the balance between security and access and urban design. Smithsonian officials believe that the success of their efforts hinged on bringing to the table experts from their offices of Protection Services; Historic Preservation; and Engineering, Design, and Construction who were willing to engage in dialogue and answer questions from the review organizations. 
As a result, the Smithsonian received favorable reviews of its preliminary design for security enhancements from all of the stakeholders. According to Smithsonian officials, the Smithsonian continues to consult with the SHPO, NCPC, and CFA during the ongoing development of its final perimeter security designs.

Be flexible and open to the review process and possible changes: Officials from some of the agencies and the review organizations discussed the importance of being open and flexible to alternatives throughout the design process for security enhancements. In particular, some officials stressed the importance of taking time to develop a security solution built on the opinions and consensus of all stakeholders. According to these officials, this approach will ultimately result in stronger working relationships and a design solution that takes both security and urban design issues into consideration. Officials from CFA told us that the Departments of Energy and Education developed successful security designs because they consulted early and were open to considering alternative proposals. For example, according to CFA, Energy’s ideas for security designs at one of its Washington, D.C., facilities were not appropriate for an urban environment. However, through consultations with the review organizations, Energy was able to design a better security project that will be less costly than the one it originally designed. Similarly, Education developed a proposal for renovating its plaza but did not incorporate any security enhancements into the design. However, because Education consulted with the review organizations before going too far in the design process, it was able to incorporate security features into the design. As a result, Education avoided later costly revisions to the project.
Consult urban planning documents such as NCPC’s submission guidelines and Urban Design and Security Plan: Agencies submitting project proposals to NCPC for review and approval are required to follow NCPC’s submission guidelines. The guidelines include NCPC’s requirements for various phases of project proposals as well as NCPC’s environmental and historic preservation procedures. The submission guidelines also outline suggestions for coordinating stages of the review process. For example, agencies can initiate the NEPA and NHPA review processes simultaneously and plan their public participation, analysis, and review so as to meet the purposes and requirements of both statutes in a timely and efficient manner. The Security Plan provides a framework for planning, designing, and implementing security enhancements and focuses exclusively on incorporating perimeter security measures into existing streetscape or landscape features. The Security Plan also identifies security design solutions that are appropriate to the character of areas within the Monumental Core, including the National Mall and the Washington Monument and Lincoln and Jefferson Memorials. Several of the agencies on the National Mall told us they actively participated in the development of the Security Plan, and they are using the plan to help them balance perimeter security issues with considerations of aesthetics and access to the National Mall. For example, Park Service officials told us they used the plan to develop concept designs for the Washington Monument as well as the Lincoln and Jefferson Memorials. Similarly, the Smithsonian developed plans to replace planter pots, industrial-looking vehicle barriers, and other temporary security measures with custom-designed elements, including benches, light poles, urns, and bollards, that complement the historic surroundings of the National Mall (see fig. 6). 
Smithsonian officials noted that the Security Plan provides constructive ideas for what NCPC does and does not look for in designs for security enhancements. As a result, NCPC has praised the Smithsonian on its efforts to balance necessary security enhancements with public access and aesthetics. Furthermore, according to USDA, its proposed security project was designed to address both minimum USDA perimeter security requirements and the goals of the NCPC plan. Proposed security enhancements for the Whitten Building include landscape bollards that sit well within the generous “front lawn” of the building, and that are designed to respect the significant and historic open character of the National Mall.

Effects of Enhancements on Access and Appearance Are Generally Acceptable to Visitors

Visitors value access to and the appearance of the National Mall and generally find security enhancements acceptable. A number of agencies on the National Mall told us that they have received very few complaints about difficulty in accessing sites on the National Mall. Officials from the Smithsonian further told us that a survey they conducted of visitors to their museums in fiscal year 2002 suggests that visitors do not consider the time standing in line to pass security checkpoints at museum entrances problematic, provided the wait is less than 15 minutes. Moreover, some agencies we interviewed also reported very few complaints about the appearance of sites that are being or have been modified to accommodate physical security enhancements. Our survey of about 300 visitors to the National Mall found that these visitors did not view the security enhancements on the National Mall, which included both temporary and permanent enhancements, as having unacceptable effects on access or appearance. Seventy-eight percent of respondents indicated that security enhancements had no effect on public access to sites on the National Mall, or made access easier.
In addition, 64 percent of those surveyed said the security enhancements had no effect or a positive effect on the appearance of the National Mall (see fig. 7). The majority of survey respondents also said the security enhancements they encountered would have no effect on whether they will return for a visit. However, results differed between residents of the Washington, D.C., metropolitan area and those who reside in other areas. Washington, D.C., metropolitan-area residents were almost twice as likely as U.S. residents from outside the Washington, D.C., metropolitan area to report that security measures have had a negative effect on access to and appearance of sites on the National Mall. Furthermore, although visitors reported that current levels of public access and appearance are satisfactory, the survey results also suggest that visitors regard access and aesthetics as important priorities when adding security measures to the National Mall. The majority of respondents (85 percent) said both access and aesthetics should be considered a medium to high priority when implementing additional security enhancements. Overall, these results suggest that in terms of public access and aesthetics, visitors to the National Mall find the existing temporary and permanent security enhancements acceptable.

Federal Agencies Report Using Most Key Practices, but Balancing Mission Priorities with the Need for Physical Security Enhancements Poses Common Challenge

Agencies Report Using Most Key Practices to Implement Physical Security Enhancements

In our November 2004 report, we identified six key practices that have emerged from the increased attention to facilities protection given by federal agencies in recent years. We noted that, collectively, these key practices could provide a framework for guiding federal agencies’ ongoing facility protection efforts.
These practices are allocating resources using risk management; leveraging security technology; sharing information and coordinating protection efforts with other stakeholders; measuring program performance and testing security initiatives; implementing strategic human capital management to ensure that agencies are well-equipped to recruit and retain high-performing security professionals; and aligning assets to mission, thereby reducing vulnerabilities. Throughout our review, agencies with facilities on the National Mall reported using all but one of these key practices when implementing security enhancements. For example, the Smithsonian told us it leverages technology by using closed-circuit television cameras to extend the capabilities of its security staff. Closed-circuit television cameras enable security staff to quickly identify and respond to a security incident for investigative purposes. In addition, the Smithsonian told us it conducts periodic risk assessments of all its properties to determine how to allocate resources to mitigate the greatest risks first. The Park Service told us that it is including performance measures in its draft strategic plan, and that it conducts regular security inspections of national icons. The Park Service also told us that it is providing new training programs for security personnel, including in-service training for officers of the Park Police. To attract a more qualified pool of applicants for security positions, the National Gallery reported strengthening its recruitment process and reported a new emphasis on antiterrorism training for its security personnel. The National Gallery also told us it has implemented, or plans to implement, a number of advanced security technologies to provide a more comprehensive security assessment of its facilities. Finally, federal agencies also reported meeting periodically to discuss upcoming events, intelligence information, and criminal activities.
However, none of the federal agencies reported using one key practice—aligning assets to mission—to implement physical security enhancements because they do not believe that they have excess or underutilized facilities on the National Mall or elsewhere or consider the practice applicable to properties under their jurisdiction.

Allocating Resources Using Risk Management

Allocating resources using risk management entails the systematic and analytical process of considering the likelihood that a threat will endanger an asset—that is, a structure, individual, or function—and identifying actions that can reduce the risk and mitigate the consequences. As part of its Disaster Management Program, the Smithsonian performs risk assessments of all its properties every 3 to 5 years to determine the need for security enhancements. Smithsonian officials told us that their last risk assessment was performed in fiscal year 2002, but another multihazard risk assessment—addressing both man-made and natural disasters—was occurring during our review. According to Smithsonian officials, the current effort will update the last risk assessment and provide a ranked listing of risks, with proposed mitigation actions and costs, across the entire portfolio of the Smithsonian’s facilities. In accordance with the intent of this key practice, Smithsonian officials said the updated risk assessment will allow the institution to use resources more efficiently to mitigate the greatest risks first. Park Service officials also told us that risk management is a key practice used to determine the need for physical security enhancements to their facilities on the National Mall. They noted that risk assessments were completed in the late 1990s by three outside entities, and internal reviews were performed by Park Police and Park Service officials. After September 11, the Park Service worked with a private security firm to assess the risk of terrorist attacks at monuments on the National Mall.
This assessment examined potential threats—including the distance from which explosives could potentially destroy any of the National Mall’s structures—and alternative methods of both prevention and protection. Additionally, the Park Service identified specific protection criteria and designated key areas with the highest vulnerability as priorities, including areas of the National Mall. The Park Service told us it has used the security firm’s report findings to determine where to allocate appropriated funds and implement security upgrades for high-risk structures. Park Service officials also told us that they rely on risk assessments as well as intelligence assessments, reviews of the latest terror trends, visitor needs, and reviews of criminal and service incidents to allocate resources to respond to identified risks. Since June 2004, Interior has applied its National Monuments and Icons Assessment Methodology (NM&I Methodology) to assets that fall under the purview of the Park Service. The NM&I Methodology provides a uniform risk assessment and ranking methodology and was developed in response to the Homeland Security Presidential Directive 7’s requirement that Interior formulate a plan for identifying, assessing, prioritizing, and developing protective programs for critical assets within the national icons and monuments sector. According to information from Interior, the NM&I Methodology is specifically designed to quantify risk, identify needed security enhancements, and measure risk-reduction benefits at icon and monument assets. National Gallery officials told us that the Gallery assesses potential risks to the physical security of its facilities through the use of technical consultants with specialized experience in security areas, such as blast analysis. The National Gallery uses the results of such studies to form a basis for developing specific projects or operational policies to mitigate the identified risks.
For example, National Gallery officials told us that targeted risk assessments, such as the blast analysis on the exterior wall of the East Building, identified the need for window security film and various types of physical barriers.

Leveraging Security Technology

By efficiently using technology to supplement and reinforce other security measures, agencies can more effectively apply the appropriate countermeasures to vulnerabilities identified through the risk management process. Our previous work reported that prior to a significant investment in a project, a detailed analysis should be conducted to determine whether the benefits of a technology outweigh its costs. In addition, we reported that agencies should decide how a technology will be used and whether to use a technology at all to address vulnerabilities before implementation. The implementation costs of technologies in facilities protection can be high, particularly if infrastructure modifications are necessary. Therefore, in some cases, a lesser technological solution may be more effective and less costly than more advanced technologies. Several of the agency officials we spoke with identified steps they have taken to make efficient use of technology to supplement and reinforce other security enhancements. For example, the Smithsonian uses closed-circuit television cameras in several of its museums on the National Mall. These cameras are low-cost security technologies that extend the capabilities of the Smithsonian’s security staff by providing an immediate assessment of information for investigative purposes. The Smithsonian also identified the need for electronic screening facilities at some of its facilities on the National Mall. However, because the museums would need to undergo costly renovations to make enough space for the screening equipment, these museums are using magnetometer screening and bag searches until other, higher priority security enhancements have been implemented.
The National Gallery has also implemented, and plans to implement, a number of security technologies at its facilities on the National Mall. Currently, the National Gallery uses magnetometers, X-ray machines, and closed-circuit television cameras to improve its perimeter protection. The National Gallery plans to undertake a risk analysis of its security camera configuration to determine whether the number of cameras currently in use provides the most comprehensive surveillance system possible. In addition, the National Gallery plans to improve its access control through new employee identification badges that can be rapidly authenticated and tracked electronically through an Integrated Security Management System. According to the National Gallery, comprehensively integrating a number of new technologies provides more complete security for its facilities and improves its operating efficiencies. Finally, Park Service officials stated that closed-circuit television cameras are in extensive use at the national icons on the National Mall and are a critical component of the security of the area. Park Service officials also noted that they are constantly reviewing developing security technologies to determine the most cost-effective methods for upgrades.

Information-Sharing and Coordination

All agencies said they obtain and share information on potential threats to facilities to better understand risks and more effectively determine preventive measures. Among the agencies with facilities on the National Mall, meetings are held quarterly to discuss upcoming events, intelligence information, and criminal activities. Numerous other forums of information-sharing and coordination also occur: Park Service officials told us that Park Police officers are assigned to the Federal Bureau of Investigation's (FBI) Joint Task Force and participate in meetings with the U.S. Attorneys, the D.C. Metropolitan Police Department, and their own intelligence unit.
In addition, we were told that the Park Service relies on information gathered from officers and rangers assigned to the National Mall area, who relay such information to other entities as appropriate, and that coordination routinely occurs between the Park Police and the Department of Homeland Security (DHS). Smithsonian officials said that they meet with the Park Police twice per month to discuss security issues, monthly to receive crime and terrorism intelligence, and daily to coordinate police activities on the National Mall. In addition, Smithsonian security officials meet and coordinate with the FBI and receive daily general information on terrorist and other disaster-related activity from DHS. According to officials of the National Gallery, they attend meetings and briefings with the FBI, the Mayor's Special Events Task Group, and the U.S. Park Police. Further, National Gallery officials said they coordinate regularly with these entities, as well as the Federal Emergency Management Agency (FEMA), D.C. Metropolitan Police Department, DHS, U.S. Attorneys Office, U.S. Secret Service, Smithsonian, Library of Congress, National Archives, Federal Trade Commission, Federal Protective Service, and the John F. Kennedy Center for the Performing Arts. USDA officials noted they share information and coordinate with the Smithsonian, their immediate neighbor on the National Mall. USDA officials also told us they coordinate with the Federal Protective Service and the Park Police for general physical security and law enforcement activities. In addition, USDA officials noted they coordinate matters pertaining to national security, threats, and emergency response directly with DHS, FEMA, the FBI, and the U.S. Secret Service, as applicable. Dignitary protection and the security of high-risk personnel are coordinated with the U.S. Secret Service and the Department of State.
Finally, USDA officials told us they participate on the Southeast Area Security Chiefs Council and other forums to exchange and develop information pertaining to security and law enforcement. As previously noted, another source of coordination on physical security enhancements occurred through the NCPC Interagency Security Task Force. The task force, made up of representatives of 75 stakeholder agencies, produced two reports that have guided agencies throughout the city in devising and implementing physical security enhancements. Both the Smithsonian's and USDA's perimeter security projects relied heavily on the task force's National Capital Urban Design and Security Plan.

Performance Measurement and Testing

This key practice encompasses two components to ensure the effectiveness of physical security enhancements implemented by agencies: linking security goals to broader agency mission goals, and inspecting and assessing physical security enhancements. Park Service officials indicated that they use both parts of this key practice because they (1) include performance measures in the U.S. Park Police's draft strategic plan and (2) conduct regular and frequent inspections of the national icons by the Park Police and routinely update and discuss security issues with Park Police officials. Smithsonian officials also told us they use both parts of this key practice by performing risk assessments of their facilities; implementing risk assessment recommendations for facility upgrades, adding staff, adding equipment, and using operational procedures as performance metrics; and including physical security measures in the Smithsonian's broader performance measurements.
USDA also said it uses both parts of this key practice by linking security goals to the broader agency goal of providing a safe and functional workplace to support staff in carrying out their public service missions, and through an established program to inspect and periodically reassess the physical security stature of all USDA properties, including the properties near the National Mall, and to effect corrective actions as appropriate.

Strategic Human Capital Management

Strategic management of human capital involves implementing strategies to help individuals maximize their full potential, having the capability to recruit and retain high-performing security and law enforcement professionals, and ensuring that personnel are well exercised and exhibit good judgment in following security procedures. We found that most of the agencies on the National Mall are implementing this key practice primarily by offering new training programs for security personnel. Specifically, Park Service officials told us that they have sponsored training for employees of all affected parks as well as in-service training for officers of the Park Police. Similarly, the Smithsonian has instituted training courses on terrorism awareness, emergency procedures, and shelter-in-place procedures, among others, for its security staff. The National Gallery has also focused its efforts on training, with particular emphasis on antiterrorism training, such as shelter-in-place and evacuation drills. In addition, to attract a more qualified pool of applicants for security positions, the National Gallery reported strengthening its recruitment process. USDA constructed an emergency operations center, which is staffed 24 hours a day, 7 days a week, to monitor and respond to emergencies.
Aligning Assets to Mission

Aligning assets to mission involves the reduction of underutilized or excess property at federal agencies in order to better reflect agencies' missions and reduce vulnerabilities by decreasing the number of assets that need to be protected. Our previous work reported that, to the extent that agencies are expending resources to maintain and protect facilities that are not needed, funds available to protect critical assets may be lessened. In addition, we noted that funds no longer spent securing and maintaining excess property could be put to other uses, such as enhancing protection at critical assets that are tied to agencies' missions. For example, we reported in January 2003 that the Department of Defense estimates it is spending $3 billion to $4 billion each year maintaining facilities that are not needed. In another example, costs associated with excess Energy facilities, primarily for security and maintenance, were estimated by Energy's Office of the Inspector General in April 2002 to exceed $70 million annually. One building that illustrates this problem is the former Chicago main post office. In October 2003, we testified that this building, a massive 2.5 million square foot structure located near the Sears Tower, is vacant and costing USPS $2 million annually in holding costs. It is likely that agencies that continue to hold excess or underutilized property are also incurring significant holding costs for services, including security and maintenance. Finally, we recently recommended that the Chair of the Interagency Security Committee consider our work as a starting point for establishing a framework of key practices that could guide agencies' efforts in the facility protection area.
None of the federal agencies reported using this key practice to implement physical security enhancements on the National Mall because they do not believe that they have excess or underutilized facilities or do not consider this practice applicable to property under their jurisdiction. For example, Smithsonian officials told us that they do not have any excess property on the National Mall or elsewhere. Officials stated that all of the Smithsonian's facilities, including its gardens, are needed for research, education, and exhibition purposes to execute its mission of increasing and diffusing knowledge. The Smithsonian believes that any closures of its facilities would therefore be inconsistent with its mission. Similarly, according to the Park Service, land reserved or dedicated for national park purposes, including land under its jurisdiction, by law is not considered excess or underutilized property.

Balancing Mission Priorities with the Need for Physical Security Enhancements Poses Common Challenge

Although we found that agencies on the National Mall are using most of the key practices we identified for the protection of facilities, officials from most of these agencies identified a common challenge in using these practices and, in fact, in implementing all types of physical security enhancements. That common challenge is balancing their ongoing mission priorities with the emergent need to implement physical security enhancements. Some officials described the challenge as inadequate funding for security enhancements, or as competition for limited resources between any new requirements for security enhancements and more traditional functions and operations. Other officials described the challenge as a more subtle need to ensure that physical security enhancements are not inconsistent with the agencies' missions.
For example, one official told us that planning for security enhancements necessitates the involvement of key facilities personnel to ensure that part of the agency's mission—public access—is maintained. Another official we spoke with noted that careful planning and coordination in implementing physical security enhancements are essential to avoid compromising both programs and public access. Similarly, some officials suggested that the multiple levels of consultation and review required for projects that involve construction or renovation on federal property could be an obstacle to the use of key practices. Finally, officials from one agency noted that a lack of reliable, quantitative risk assessment data and little consistency in interpreting information and intelligence obtained from various sources create a challenge in using key practices to implement security measures.

Concluding Observations

The security of our nation's critical infrastructure remains a heightened concern in the aftermath of the September 11 terrorist attacks. On the National Mall, federal agencies are in the early stages of designing and implementing permanent perimeter security barriers to protect their facilities and the visiting public. In doing so, agencies have coordinated with a number of review organizations that consider the impact of proposed security designs on the urban environment and the symbolic nature of the National Mall, its icons, and its museums. Multiple stakeholder viewpoints on the design of security enhancements present a challenge for an efficient review process. In some cases, agencies involved stakeholders after investing time and resources in a particular security design. As a result, these agencies sometimes had to go through multiple iterations of the review process, which can strain the already limited financial and staff resources of all stakeholders.
As agencies continue developing security proposals for their facilities on the National Mall, several steps, such as early and frequent consultation with all stakeholders, can result in a more efficient review process. Specifically, consultation in the preliminary design phase allows for the consideration of multiple viewpoints and alternative design solutions, thereby mitigating the potential for later costly and time-consuming revisions. Such early consultation could also expedite the implementation of security enhancements to protect facilities and visitors on the National Mall. Key practices, such as allocating resources using risk management, coordinating protection efforts with other stakeholders, and aligning assets to mission, have clear implications for the facility protection area. As we have recently recommended, it is important that agencies give attention to these practices and consider them collectively as a framework for guiding their ongoing efforts in implementing security measures on the National Mall and in their overall facility protection areas.

Agency Comments

We provided draft copies of this report to the Smithsonian, Interior, USDA, and the National Gallery for their review and comment. USDA officials generally agreed with the report's findings and concluding observations and provided clarifying comments. Officials from the other agencies also provided clarifying and technical comments, which we incorporated into this report where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees; the Secretaries of Agriculture, the Interior, and the Smithsonian; and the Director of the National Gallery. We will also make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me on (202) 512-2834 or at goldsteinm@gao.gov or Susan Fleming, Assistant Director, on (202) 512-4431 or at flemings@gao.gov.

Objectives, Scope, and Methodology

Our objectives were to assess (1) physical security enhancements that have been implemented on the National Mall since September 11, 2001, the additional enhancements planned, and the costs of these enhancements; (2) the considerations given to incorporating access and aesthetics in designing and approving physical security enhancements on the National Mall, and how issues of access and aesthetics are perceived by visitors in relation to these enhancements; and (3) examples of how federal agencies are using key practices to implement physical security enhancements on the National Mall, and any challenges the agencies are experiencing in using these key practices. For all of these objectives, we researched historical plans for the design, expansion, and maintenance of the National Mall; appropriations acts and accompanying legislative material; statutory and regulatory provisions related to security enhancements of the National Mall grounds; and proposals for implementing physical security enhancements on the National Mall. We also interviewed officials of the National Park Service (Park Service), U.S. Park Police, Smithsonian Institution (Smithsonian), National Gallery of Art (National Gallery), Department of Agriculture (USDA), U.S. Botanic Garden (USBG), U.S. Capitol Police, National Capital Planning Commission, U.S. Commission of Fine Arts, Advisory Council for Historic Preservation, District of Columbia's Historic Preservation Office, Department of Homeland Security, and National Coalition to Save Our Mall.
While multiple geographic definitions of the National Mall exist, we defined the area of the National Mall, for purposes of our report, as extending from the foot of the U.S. Capitol grounds west to the Washington Monument and proceeding farther west and southeast to include the Lincoln and Jefferson Memorials. It also includes the area between Constitution and Independence Avenues between 1st and 14th Streets. We did not include the White House or the U.S. Capitol Building because security enhancements for these buildings fall under the jurisdiction of the U.S. Secret Service and the U.S. Capitol Police, respectively. In addition, for our first objective, we reviewed federal appropriations law and accompanying legislative materials, budget reports, and federal agencies’ and entities’ budget submissions related to physical security enhancements on the National Mall; we also received information about obligations and costs associated with physical security enhancements on the National Mall since the terrorist attacks of September 11. Agencies on the National Mall provided us with obligation data only for their facilities located on the National Mall, where possible. In some cases, obligations incurred for facilities on the National Mall could not be separated from obligations incurred for an agency’s facilities located adjacent to the National Mall. To assess the reliability of the obligation and cost data received by these agencies, we developed a template for agencies on the National Mall to obtain consistency in the data provided by each of the agencies; interviewed knowledgeable agency officials to clarify any questions; provided the agencies with a spreadsheet we developed that organized obligations for security enhancements by fiscal year to make sure that we accurately used the data provided and asked agencies to identify the source of the obligations incurred; and further clarified any discrepancies in these data. 
From this assessment, we determined that these data are sufficiently reliable for purposes of this report. For our second objective, we also reviewed the law, planning and review criteria, reports, and documentation related to specific proposals for physical security enhancements on the National Mall. In addition, we conducted a 3-minute intercept survey of visitors to the National Mall to determine (1) the extent to which visitors to the National Mall feel that security measures on the National Mall affect access to sites on the National Mall and the appearance of the National Mall; (2) the extent to which visitors to the National Mall feel that additional security measures are needed; (3) the priority that National Mall visitors would assign access to the National Mall and the appearance of the National Mall, in the event that additional security measures are added; and (4) whether security measures affect the likelihood that National Mall visitors will return. To develop the questions for the 3-minute survey, we identified the key information necessary to gain a general understanding of (1) how visitors to the National Mall assess the effects of security measures on access to and the appearance of the National Mall and (2) the priority that visitors assign to the National Mall’s accessibility and appearance. After initially developing, reviewing, and modifying the survey questions, we conducted a total of nine pretests—four cognitive pretests with GAO employees who were not associated with this review and five with visitors to the National Mall. We provided GAO employee pretest participants (internal participants) with an overview of the engagement and the intercept survey methodology to be utilized. Subsequently, we showed internal participants the map of the National Mall and then asked them to respond to the survey questions. 
Upon completion of the survey, we asked for specific comments on each question and encouraged participants to share their thoughts and ideas regarding the structure of the survey and the extent to which the questions seemed clear and easy to answer. The five external pretests were conducted by GAO team members on the National Mall, near the Smithsonian Metro Station. Following the intercept survey protocol, our interviewers approached respondents asking if they would like to answer a short survey on physical security measures on the National Mall area. Five out of 15 potential respondents approached participated in the survey. Nonrespondents consisted of those unwilling to participate, those who had not yet seen anything on the National Mall because they had just arrived, and those unable to speak the English language. Respondents were first shown the map of the National Mall and then were asked to respond to the survey questions. Interviewers noted questions, comments, and any lack of clarity to the questions on the part of external pretest respondents. The final changes to the survey were made on the basis of the combined observations from the pretests with GAO employees and pretests with visitors to the National Mall. The population for the survey was National Mall visitors. We chose survey sites to cover the geographic range of the National Mall and conducted interviews between 1:30 p.m. and 4:00 p.m. on Monday, October 18; Monday, October 25; Tuesday, October 26; Friday, November 5; and Sunday, November 7, 2004. We chose to interview National Mall visitors during these hours for two reasons: (1) to make it more likely that visitors stopped for the survey had been on the National Mall long enough to visit one or more sites on the National Mall and (2) to reduce the chances of surveying government employees on the National Mall during their lunch break. 
We identified 300 as the target size for our sample, on the basis of balancing the advantages and costs associated with a larger sample size, considering that a sample of this size allows for some analysis of subgroups but is small enough to limit survey costs. We stratified the sample by choosing survey sites to cover the geographic range of the National Mall. To avoid any bias by gender, ethnicity, or other individual differences, we systematically approached the fifth person who passed by a particular landmark (e.g., a park bench, tree, or light pole); first, from the time interviewing commenced and, thereafter, immediately following the completion of an interview. In counting potential respondents, we excluded several types of individuals as out of scope. Specifically, we excluded persons who did not speak English, who appeared to be younger than 18 years old, who were exercising on the National Mall, who were talking on a cell phone, who were leading a group of people on the National Mall, or who had just arrived on the National Mall and had not yet visited any sites. Of 667 National Mall visitors approached and asked to complete the survey, 537 were found to be in scope. Of these 537 visitors, 229 declined to complete the survey, yielding a 57 percent response rate. Although we took measures to avoid sample bias, our survey sample is a nonprobability sample. Results from nonprobability samples cannot be used to make inferences about a population because in a nonprobability sample, some elements of the population being studied have no chance or an unknown chance of being selected as part of the sample. GAO employees conducted the interviews. A GAO employee showed respondents a map of the National Mall, asked the survey questions, and marked responses on the survey. The survey first asked respondents to specify which sites and what types of security measures they had seen in their visit to the National Mall. 
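The every-fifth-person selection rule and the response-rate figure reported above can be illustrated with a short sketch. This is purely illustrative, using only the counts given in this section; the function and variable names are ours, not GAO's.

```python
# Illustrative sketch of the intercept-survey arithmetic described above.
# All counts come from the report; the helper names are hypothetical.

def every_fifth(passersby, interval=5):
    """Systematically select every fifth person passing the landmark."""
    return passersby[interval - 1::interval]

# Example: of 12 passersby, the 5th and 10th would be approached.
selected = every_fifth(list(range(1, 13)))
print(selected)  # [5, 10]

approached = 667   # visitors approached
in_scope = 537     # visitors found to be in scope
declined = 229     # in-scope visitors who declined

completed = in_scope - declined
print(completed)                          # 308 completed surveys
print(round(completed / in_scope * 100))  # 57 (percent response rate)
```

Note that the response rate is computed over in-scope visitors (537), not everyone approached (667), which is why 308 completions yield the 57 percent figure stated in the text.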
To help with site identification, the map that the respondents received clearly labeled the museums and monuments. The survey then posed a series of questions about the effects of the security measures on access to National Mall sites and the appearance of the National Mall, the extent to which additional security is needed on the National Mall, and the priority respondents would assign to the accessibility and appearance of National Mall sites, in the event that further security measures are added. The survey concluded by asking whether the security measures affect respondents' likelihood of returning to visit the National Mall. For our third objective, we also reviewed and analyzed GAO and other governmental reports on the protection of federal facilities and homeland security. We also developed a structured interview guide with questions about the key practices for implementing security enhancements and sent the guide to the Smithsonian, Park Service, USDA, and National Gallery. We then incorporated their responses into the report without independent verification. We conducted our review from August 2004 through May 2005 in accordance with generally accepted government auditing standards. Federal agency officials provided much of the data and other information used in this report. Overall, we found no discrepancies with these data and, therefore, determined that the data were sufficiently reliable for the purpose of this report. We requested official comments on this report from the Smithsonian, the Department of the Interior, USDA, and the National Gallery.

Results of National Mall Visitor Survey

I'm ______ from the GAO. Would you have a few minutes today to answer a short survey for Congress about security measures at the National Mall? (IF NEEDED: This survey asks for your thoughts about whether security measures put in place since 9/11 have affected the mall's appearance or your ability to access buildings, monuments, memorials, and public places.)

1.
Looking at this map of the National Mall, which monuments, museums, or other parts of the Mall have you visited today or recently?
(73) (44) (119) (18) (48) (16) (48) (16) (85) (8) (88) (44) (163) (35) (66) (16) (78) (26) Other (SPECIFY) (75) (23) (11) (88)

2. I'm going to read through a list of security measures that you may or may not have encountered or seen today or recently. For each measure, please answer yes or no, as to whether or not you encountered these measures.
A) Fences that limited access (212) (94) (2)
(213) (89) (6)
(259) (48) (1)
(189) (117) (2)
(34) (273) (1)
(183) (125) (0)

2a. Did you see any other types of security measures, anything that's not on our list?
Other (SPECIFY) (26) (282) (0)

3. Did these security measures make it easy or difficult to access sites on the Mall, or did they have no effect at all? Would that be very difficult or somewhat easy? or somewhat difficult?
4% (10) Very difficult 6% (18) 3% (8) 16% (46) 71% (205)

4. Did these security measures have a positive or negative effect on the overall appearance of the National Mall, or did they have no effect at all? positive? negative?
Very positive 10% (29) Very negative 9% (27) 9% (25) 27% (77) 45% (130)

5. Overall, do you think additional security measures on the National Mall . . . are definitely needed, may be needed, may not be needed, or are definitely not needed?
11% (29) 37% (101) 31% (84) 22% (60)
(Note to interviewer: if respondent asks "compared to what," say "compared to what you've

6. If additional security measures are added to the National Mall, what priority - low, medium, or high - would you give the following:
A) Overall public access to the National Mall: 16% (47) 27% (82) 58% (175)
B) Overall appearance of the National Mall: 15% (45) 31% (93) 54% (162)

7. Do you currently live in the Washington metropolitan area, another state, or another country?
Washington D.C. area: 21% (65)
Another state (please list): 72% (221)
Another country (please list): 7% (22)

8.
Would you say that the security measures you encountered today make it less likely that you will return for a visit, more likely that you will return for a visit, or do they have no effect at all?
Less likely to return for a visit: 5% (15)
More likely to return for a visit: 12% (38)
No effect at all: 83% (255)

END: Thank you so much for participating.

23% (70) 20% (61) 17% (53) 14% (42) 27% (82)
Survey Location: Museum of Natural History 14% (44) 23% (70) 23% (72) 23% (72) 11% (33) 6% (17)

GAO Contacts and Staff Acknowledgments

In addition to those named above, Dennis J. Amari, Virginia Chanley, Sandra J. DePaulis, Robert V. Dolson, Colin Fallon, Denise M. Fantone, H. Brandon Haller, Anne Izod, Jason Kelly, Nancy J. Lueke, David Sausville, and Susan Michal-Smith made key contributions to this report.

Bibliography

Advisory Council on Historic Preservation. Protecting Historic Properties: A Citizen's Guide to Section 106 Review. Washington, D.C.: 2002.
Department of the Interior, Office of the Inspector General. Homeland Security: Protection of Critical Infrastructure Systems – Assessment 2: Critical Infrastructure Systems (2002-I-0053). Washington, D.C.: September 2002.
Department of the Interior, Office of the Inspector General. Homeland Security: Protection of Critical Infrastructure Facilities and National Icons – Assessment 1: Supplemental Funding – Plans and Progress (2002-I-0039). Washington, D.C.: June 2002.
Department of the Interior, Office of the Inspector General. Progress Report: Secretary's Directives for Implementing Law Enforcement Reform in Department of the Interior (2003-I-0062). Washington, D.C.: August 28, 2003.
Department of the Interior, Office of the Inspector General. Review of National Icon Park Security (2003-I-0063). Washington, D.C.: August 2003.
National Capital Planning Commission.
Comprehensive Plan for the National Capital: Federal Elements. Washington, D.C.: August 2004.
Interagency Task Force of the National Capital Planning Commission. Designing for Security in the Nation's Capital. Washington, D.C.: October 2001.
National Capital Planning Commission. National Capital Urban Design and Security Plan. Washington, D.C.: July 2002.
National Capital Planning Commission. Memorials and Museums Master Plan. Washington, D.C.: September 2001.
National Coalition to Save Our Mall. First Annual State of the Mall Report: The Current Condition of the National Mall. Rockville, MD: October 2002.
The National Mall in Washington, D.C., encompasses some of our country's most treasured icons and serves as a public gathering place for millions of visitors each year. The National Air and Space Museum, for example, was the most visited museum worldwide in 2003, hosting 9.4 million visitors. Federal agencies with facilities on the National Mall have begun implementing physical security enhancements to protect their facilities and the visiting public. This report responds to Congressional interest in the efforts and expenditures pertaining to these security enhancements and discusses (1) the physical security enhancements that have been implemented on the National Mall since September 11, 2001, the additional enhancements planned, and the costs of these enhancements; (2) the considerations given to incorporating access and aesthetics into the design and approval of these security enhancements, and how issues of access and aesthetics are perceived by visitors in relation to these enhancements; and (3) examples of how federal agencies are using key practices to implement the enhancements, and any challenges the agencies are experiencing in using these key practices. In commenting on a draft of this report, the Smithsonian Institution, Department of the Interior, Department of Agriculture, and National Gallery of Art provided clarifying and technical comments, which were incorporated into this report where appropriate. Since September 11, 2001, federal agencies on the National Mall have obligated about $132 million for physical security enhancements, with the National Park Service and the Smithsonian accounting for about 75 percent of the total obligations. Security enhancements include additional security personnel, facility upgrades, and equipment and technology. Planned enhancements include the installation of permanent security barriers to protect against vehicle bombs. 
Public access and aesthetic considerations are integral to the design and approval of security enhancements on the National Mall. Federal agencies must coordinate with reviewing organizations, such as the National Capital Planning Commission, and consider aesthetics, historic preservation, urban design, urban planning, and environmental effects when implementing security enhancements. Although federal agencies reported that the review process can be time-consuming, review organizations noted that early and frequent consultation with them helps to ensure a smoother, more efficient, and expeditious review process. GAO's survey of about 300 visitors to the National Mall, and reports from federal agencies, indicate that visitors value access to and the appearance of the National Mall and generally find the current level of security enhancements acceptable. GAO's survey results also suggest that visitors regard access and aesthetics as important priorities when adding security enhancements to the National Mall. Federal agencies on the National Mall reported using five of the six key practices identified by GAO--allocating resources using risk management, leveraging technology, information-sharing and coordination, performance management and testing, and strategic management of human capital--in implementing physical security enhancements. However, none of the federal agencies on the National Mall reported using the key practice of aligning assets to mission in implementing security measures because they believe they do not have excess or underutilized facilities or consider the practice applicable to property under their jurisdiction. Agencies identified balancing ongoing mission priorities with the need for security as a common challenge in using key practices to implement physical security enhancements.
Background

DOE manages the disposal of cleanup wastes that come from remediation, decontamination, and demolition at sites where operations have been discontinued. Cleanup wastes are primarily subject to three laws: the Atomic Energy Act of 1954, as amended; the Resource Conservation and Recovery Act of 1976, as amended (RCRA); and the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended (CERCLA). DOE is responsible for the management of its own radioactive wastes under the Atomic Energy Act. Under RCRA, the Environmental Protection Agency (EPA) or states with programs authorized by EPA regulate the hazardous components of mixed wastes. The Congress enacted CERCLA to clean up the nation’s most severely contaminated hazardous waste sites. Under CERCLA, which is administered by EPA, the parties responsible for the contamination must conduct or pay for the cleanup. The statute makes federal facilities subject to the same cleanup requirements as private industry. For CERCLA projects, EPA has established a decision process designed to involve the public and EPA in identifying, evaluating, and choosing cleanup approaches. This process requires the parties responsible for the cleanup (in this case, DOE) to consider a range of cleanup alternatives. EPA uses nine specific criteria, including the estimated costs, feasibility, and risks of each alternative, to evaluate, compare, and balance tradeoffs among these alternatives (see fig. 1). Under these criteria, any selected cleanup alternative must adequately reduce long-term risks to human health and the environment. The chosen alternative—including the plan for disposing of the waste—is documented in a Record of Decision (ROD), which EPA must approve.

Sites Made Decisions Using Preliminary Information About Disposal Needs

Officials at the three sites we reviewed considered detailed estimates of the costs and risks associated with on-site and off-site waste disposal.
Among other things, these estimates were based on preliminary determinations of the extent and type of contamination present at the site. In accordance with the CERCLA decision process, site officials also assessed how well each cleanup alternative addressed the nine CERCLA criteria. After balancing the tradeoffs among the criteria for each alternative, site officials selected an on-site disposal alternative based, at least in part, on their estimation that on-site disposal would cost less than off-site disposal (see table 1). To meet the CERCLA requirement that human health and the environment be adequately protected, DOE sites adopted accepted strategies, such as limiting the level of contamination allowed in the disposal facility, to mitigate long-term risks. DOE, EPA, and other stakeholders agreed that the benefits of on-site disposal, including cost savings, outweighed the remaining long-term risks.

Sites Identified the Extent of Cleanup Needed and Developed Alternative Cleanup Actions

In accordance with the first steps in EPA’s CERCLA decision process, all three sites conducted remedial investigations to confirm and quantify the nature and extent of contamination. They examined site background and historical data, and used limited sampling to project the volumes and types of wastes that could be generated by cleanup activities. Based on this limited information, officials at each site developed a preliminary model describing the sources of contamination (such as soil or groundwater), possible ways the contaminants could be released, and whether human exposure would be likely. Using this model, they assessed the cancer and non-cancer risks to humans. Officials at each site also prepared a feasibility study that established cleanup goals, identified possible cleanup technologies and actions, and analyzed alternative cleanup approaches.
For the contaminants of concern, officials set cleanup goals at contamination levels that posed acceptable risks according to their exposure model. The waste excavation and disposal approach, either on-site or off-site, was only one of many approaches available to officials to meet these cleanup goals. For example, each site considered leaving at least some waste in place and limiting human exposure to it by either capping the waste with clean cover materials or restricting access to the waste areas. Site officials developed their alternative cleanup approaches using the results of their remedial investigations and working closely with EPA and state reviewing officials. Officials in Oak Ridge and Idaho determined that feasible cleanup approaches were likely to generate more waste than the existing disposal facilities at those sites could accommodate, and the Fernald site had no existing disposal facility. Therefore, when conducting their feasibility studies, officials at each of the three sites considered whether to dispose of their respective wastes in a new on-site facility or to ship them to an off-site disposal facility. Specifically, each site used the Envirocare facility in Utah as its representative off-site disposal facility. DOE and commercial generators of radioactive waste use this facility, which is located 80 miles west of Salt Lake City, to dispose of mildly contaminated soils and debris. In addition to the Envirocare facility, DOE sites that do not have existing on-site disposal facilities are now authorized to dispose of their low-level and mixed low-level wastes at DOE’s disposal facilities for these types of wastes at its Hanford Reservation in southeastern Washington and its Nevada Test Site in southern Nevada.

Sites Used Nine CERCLA Criteria to Evaluate Cleanup Alternatives

Site officials assessed each proposed cleanup alternative against the cleanup goals, as well as nine decision criteria specified in EPA’s CERCLA regulations and guidance.
Following this guidance step by step, officials first considered the two threshold criteria, then evaluated qualifying alternatives against the five balancing criteria, and then applied the two modifying criteria. CERCLA threshold criteria require a cleanup approach to (1) achieve overall protection of human health and the environment and (2) meet all legal requirements, referred to as “applicable or relevant and appropriate requirements.” Site officials discarded some alternatives, such as capping some contaminated areas in place, because they did not meet these threshold criteria. In some instances, waivers were needed to develop on-site disposal facilities. For example, the Fernald site obtained a waiver from the state of Ohio’s prohibition on developing a disposal facility over a drinking water aquifer. Similarly, the Oak Ridge site obtained a waiver from EPA’s minimum required distance from the bottom of a landfill that contained toxic chemicals to the underlying groundwater. Without these waivers, the sites would not have been able to develop on-site disposal facilities. (See appendix I.) In both instances, the host states and EPA agreed with site officials that the proposed facilities could be designed to meet equivalent safety standards. After screening cleanup alternatives against the two threshold criteria, site officials developed more detailed feasibility studies to demonstrate how well the various surviving alternatives met each of the five CERCLA balancing criteria. The sites used measures that employed varying degrees of data and subjectivity to evaluate how effectively an alternative met each criterion (see table 2). All three sites then used these evaluations to balance the criteria for on-site and off-site disposal alternatives. Generally, each alternative approach had strengths and weaknesses in some of the criteria, and the sites had to make tradeoffs according to their unique conditions and priorities.
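The sequential screening described above, pass/fail threshold criteria first, then tradeoffs among balancing criteria, can be illustrated with a small sketch. The alternatives, flags, and scores below are hypothetical placeholders for illustration, not values from the sites' feasibility studies:

```python
# Illustrative sketch of the CERCLA screening sequence: (1) drop any
# alternative that fails either threshold criterion, then (2) compare
# the survivors on the five balancing criteria. All names and scores
# here are hypothetical examples, not data from the three sites.

alternatives = [
    {"name": "cap in place", "protective": False, "meets_ARARs": True,
     "balancing": {}},
    {"name": "on-site disposal", "protective": True, "meets_ARARs": True,
     "balancing": {"long_term_effectiveness": 3, "toxicity_reduction": 3,
                   "short_term_effectiveness": 4, "implementability": 4,
                   "cost": 5}},
    {"name": "off-site disposal", "protective": True, "meets_ARARs": True,
     "balancing": {"long_term_effectiveness": 5, "toxicity_reduction": 3,
                   "short_term_effectiveness": 3, "implementability": 4,
                   "cost": 2}},
]

# Threshold criteria are pass/fail: an alternative must both protect human
# health and the environment and meet applicable or relevant and
# appropriate requirements (ARARs).
survivors = [a for a in alternatives if a["protective"] and a["meets_ARARs"]]

# The balancing criteria involve tradeoffs; a plain sum is one crude way
# to compare, though the sites weighed them qualitatively.
ranked = sorted(survivors, key=lambda a: sum(a["balancing"].values()),
                reverse=True)
for a in ranked:
    print(a["name"], sum(a["balancing"].values()))
```

The modifying criteria, state and community acceptance, would then be applied to the preferred survivor, as the sites did with their on-site alternatives.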
Table 3 lists the key tradeoffs each site cited in its comparative analysis between on-site and off-site alternatives of similar cleanup scope. (See appendix I for a description of each site’s comparative analysis.) Officials at each site applied the two CERCLA modifying criteria—state and community acceptance—to its preferred alternative of on-site disposal for most of its waste. Each site involved state and community stakeholders early in the decision process. State environmental agencies participated in preliminary reviews and informal discussions from the start of the Remedial Investigation through the final cleanup decision. Generally, by the time site officials issued their Final Record of Decision, they had addressed, or had a plan to address, environmental concerns raised by the states. For example, the Ohio Environmental Protection Agency supported the development of an on-site facility at Fernald contingent upon specific restrictions on the source of and radioactivity in any waste accepted for disposal. The Record of Decision incorporated an approach to meet these restrictions. Site officials also involved and informed community stakeholders, and received support for their decisions from groups such as the Fernald Citizens Task Force, the Oak Ridge Reservation Environmental Management Site Specific Advisory Board, and the INEEL Citizens Advisory Board. The Fernald Citizens Task Force, which is composed of individuals with diverse interests in the future of the site, convened in 1993 to provide focused input on central cleanup issues at Fernald. A Task Force report issued in 1995 included recommendations on the site’s future use, waste disposal options, and cleanup objectives and priorities. DOE’s selected alternative mirrored these recommendations. All three sites held public hearings on their Proposed Plan and ROD, and accepted comments for the time periods required under CERCLA.
The resulting comments and DOE responses were incorporated in the decision documents. In each case, host state environmental agencies concurred with the proposed decisions. Each DOE site and its respective EPA region then signed a Final ROD that documented the decision for an on-site disposal facility. This final decision allowed DOE sites to move forward with planning for site excavation and construction.

DOE Has Not Used Updated Information to Reassess Disposal Decisions Before Making Major Investments in On-site Facilities

After deciding to build new on-site disposal facilities, site officials continued to refine disposal needs and develop specific plans for these facilities for one or more years. During this time, significant changes occurred in site assumptions regarding the types and volume of wastes needing disposal, detailed design of on-site facilities, duration of the cleanup, and cost of off-site transportation and disposal. Under such circumstances, good business practice suggests that earlier cost estimates should be confirmed before construction begins. Likewise, 1997 guidance issued by the Office of Management and Budget (OMB) states that agencies should validate their earlier planning decisions with updated information before finalizing capital investments. However, the three sites conducted little further evaluation of off-site disposal options, despite changed circumstances that could narrow the cost difference between on-site and off-site disposal. At Oak Ridge, for example, a simple update of the projected waste volumes, transportation rates, and costs for off-site disposal of some types of waste effectively reduced the difference between on-site and off-site cost estimates by 51 percent. Such changes in relative costs could also affect the balancing of costs and other factors considered while making cleanup decisions. In particular, uncertainties about long-term stewardship needs become more significant as cost differences narrow.
The elapsed time between the preparation of the initial cost estimates that were used to support the disposal decision and the commencement of construction of on-site disposal facilities argues for validating the initial cost comparisons before committing funds to construction of new facilities. DOE has not taken advantage of this time to update its cost comparisons at the three sites.

Assumptions Changed as Sites Refined Cleanup Plans

A year or more can elapse between the time the costs are estimated and the commencement of actual cleanup activities. During this period, officials at the three sites we reviewed continued to determine the extent and nature of contamination needing cleanup, and often changed their assumptions about waste volumes, waste types, cleanup duration, and the type of disposal facility needed. Although such changes can have major implications for cost estimates for both on-site and off-site disposal, officials at the sites applied the CERCLA process in a manner that discouraged re-examination of costs for alternatives other than their previously selected approaches.

Waste Volume and Types Have Changed

At all three sites, the waste volumes used to compare on-site and off-site disposal costs were significantly less than the waste volume currently projected for on-site disposal. At two of the sites, site-wide cleanup plans and waste projections were not well defined when the cost estimates were prepared. Officials at those sites now expect to dispose of much more waste. Officials at the Fernald site noted that, although the site’s cost estimate was based on 1.4 million cubic meters of waste from one operable unit, the overall decision-making process was based on the site-wide estimate of 1.9 million cubic meters. (See table 4.) As the volume of waste grows, the potential need to construct additional disposal capacity to accommodate the waste also grows.
At the time of our review, Oak Ridge officials stated that they would need to obtain further geologic surveys and regulatory approval before expanding the disposal facility to accommodate the larger waste volume now projected. Because the cost comparisons were largely limited to an earlier set of assumptions about waste volumes, without preparing updated cost estimates DOE is not in a position to assess whether these changes will have a substantial effect on the comparative costs of on-site and off-site disposal. Further investigation of the contaminated areas at the sites also changed assumptions about the types of waste that will be generated. This is especially important because the disposal requirements—and, therefore, the cleanup costs—vary by waste type. For example, mixed waste—waste that is radioactive and also contains hazardous substances—must be disposed of in facilities that meet more stringent RCRA standards. Because meeting RCRA standards increases disposal costs, the proportion of mixed waste in cleanup waste will affect overall cost estimates. Disposal fees at the Envirocare facility, for example, are much higher for mixed waste than for low-level waste. Also, cost estimates can be affected by how much of the waste is building debris, such as concrete or metal, and how much is soil. Building debris can cost more to dispose of because of its awkward sizes and shapes. Sites may also need to obtain additional fill material to properly dispose of debris, or they may need to adjust their disposal schedules to ensure a proper mix of the two types of waste. On-site facilities that need to increase their disposal capacity, purchase additional fill, or adjust disposal schedules will probably face higher costs than originally estimated.

Cleanup Schedules Remain in Flux

Since developing their cost comparisons, the three sites have continued to change their assumptions about the length of the cleanup.
After finalizing their cleanup decisions and selecting on-site disposal, site officials revised their on-site cost estimates to provide justification for their annual budget proposals over the next few years. These revisions often resulted in changed assumptions about the time needed for cleanup operations. The revised on-site disposal estimates reflected project life cycles that accelerated cleanup schedules according to DOE’s 1998 plan to complete cleanup at most of its sites by 2006. The abbreviated schedules assumed that facilities would operate for fewer years, tending to reduce the original on-site estimates. For example, since preparing their first cost estimates, Oak Ridge officials have shortened their projected schedule for on-site disposal from about 30 years to about 10 years, and officials at Fernald decreased their operating schedule from about 20 years to 13 years. Officials at these sites did not update comparable estimates for off-site disposal because they no longer considered off-site disposal to be a viable option. The sites’ cleanup schedules remain in flux. The current operating schedules and related disposal cost estimates appear optimistic. Fernald officials, for example, state that funding constraints are already forcing a slowdown. In fiscal year 2001, Fernald plans to dispose of 60,000 cubic yards (after compaction), or 36 percent, of the 168,000 cubic yards called for in the project’s baseline. Schedules at Oak Ridge and INEEL could face similar pressures. For example, the INEEL site estimated the operating costs for on-site disposal of site-wide cleanup wastes for approximately 10 years, even though site cleanup could take much longer, because cleanup schedules had not been finalized for all waste areas around the site. If current schedules prove unworkable, then the costs for on-site disposal will change. However, there will be no comparable analysis for off-site disposal.
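As a quick check of the Fernald figure cited above, the planned fiscal year 2001 volume divided by the project baseline gives roughly 36 percent:

```python
# Check of the Fernald fiscal year 2001 slowdown figure: planned disposal
# volume as a share of the project baseline (both in cubic yards).
planned = 60_000    # planned fiscal year 2001 disposal, after compaction
baseline = 168_000  # volume called for in the project's baseline

share = planned / baseline
print(f"{share:.1%}")  # about 36 percent of the baseline volume
```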
Facility Designs Are Still Being Developed

When the on-site and off-site cost comparisons were originally made, plans for on-site facilities were purely conceptual: design details, engineering drawings, and even the exact locations of the facilities were still being determined. As information on the projected waste volumes and types improved following their on-site disposal decisions, officials at the three sites also developed and refined engineering designs for their respective planned facilities. These refinements reflected changes in assumptions about such things as geologic features at the proposed facility location and the exact nature and level of contamination the disposal facility could safely accept. For example, additional geological surveys were needed at INEEL to determine how deep the cell could be built without hitting bedrock. Ultimately, the cell depth will affect the area of land covered by the facility and thus the amount of material needed for the final cap. Another facility design feature that continues to evolve is the proper soil-to-debris ratio discussed above. DOE officials’ opinions on the optimal ratio have varied from 1:1 to 8:1, and the final ratio will depend upon the physical condition of the debris. As disposal facility plans become better defined, the resulting decisions are likely to have cost implications. For example, when INEEL developed its cost estimate, the tentative plans did not include a facility for sizing, sorting, or treating the wastes. INEEL officials have since added plans to construct an on-site treatment facility, which they currently estimate will cost $15 million. Similarly, since Fernald developed its on-site estimate, the site has added considerable costs to implement waste acceptance oversight activities in response to stakeholder concerns.
These increases in on-site disposal costs cannot be compared to any rigorous analysis of off-site disposal costs, however, because the sites dismissed off-site disposal alternatives several years ago.

Off-Site Disposal Costs Could Decrease

Since the three sites made their cost comparisons, some off-site disposal fees have decreased, and volume discounts might be available for the higher waste volumes now projected. The three sites relied upon the best available—though preliminary—information and assumptions in preparing their original off-site cost estimates. For off-site disposal fees, the sites relied on historical rates, such as those in DOE’s existing contract with Envirocare. Their estimates for off-site disposal ranged from $242 to $312 per cubic meter of waste disposed. Such fees change over time, and the sites’ estimates now appear unrealistically high when compared with current fees for off-site disposal at Envirocare. That company now prices disposal of bulk rail shipments of soils classified as low-level wastes for as low as $180 per cubic meter. In addition, DOE’s year 2000 contract with Envirocare provides for significant discounts—a price drop from $519 to $176 per cubic meter—for disposal of specified shipments of debris. Envirocare officials told us that, because the historical DOE contract rates for disposal of soils and debris had been negotiated for relatively small waste volumes, additional volume discounts might be available for the larger volumes of soil and debris now projected by the sites. For their off-site cost estimates, site officials also used rail transportation rates that appeared high in some cases, but they have not revisited transportation options. DOE had little historical data on rail costs for low-level radioactive waste shipments, and each site used a different approach to estimate these costs.
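The effect of these fee changes on an off-site estimate can be sketched with simple arithmetic. The per-cubic-meter fees are those cited above; the waste volume is a hypothetical round number for illustration, not a site projection:

```python
# How much the quoted fee changes could shift an off-site disposal
# estimate. The waste volume is a hypothetical illustration; the
# per-cubic-meter fees are the figures cited in the text.
volume_m3 = 1_000_000  # hypothetical volume of soil-like low-level waste

old_fee_low, old_fee_high = 242, 312  # sites' original estimates ($/m3)
current_bulk_fee = 180                # current bulk rail soil rate ($/m3)

old_low = volume_m3 * old_fee_low
old_high = volume_m3 * old_fee_high
current = volume_m3 * current_bulk_fee
print(f"original estimate: ${old_low/1e6:.0f}M to ${old_high/1e6:.0f}M")
print(f"at current bulk rate: ${current/1e6:.0f}M")

# Debris discount in DOE's year 2000 Envirocare contract:
# $519 -> $176 per cubic meter for specified shipments.
debris_cut = 1 - 176 / 519
print(f"debris fee reduction: {debris_cut:.0%}")
```

Even before any volume discounts, the lower bulk rate alone cuts tens of millions of dollars from a large off-site soil estimate, which is consistent with the narrowing cost gap discussed below.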
Because of the preliminary nature of the cost estimates, site officials made simplified assumptions about shipping configurations and rates. However, once they had better information regarding the amounts and timetables for waste disposal, officials did not fully reconsider alternative configurations or schedules to determine whether rail costs could be reduced. For example, they did not attempt to adjust rail costs for possible use of “dedicated” trains. At Fernald, dedicated trains now carry waste that is not qualified for on-site disposal directly to the Envirocare facility. These trains make fewer stops and complete the trip in much less time. If DOE rents rail cars by the day, the overall cost for a train dedicated to low-level cleanup waste could be considerably less. Envirocare officials suggested that further savings were possible if DOE would consider proposals that bundle the rail transportation and disposal services into one package agreement. These officials stated that they have negotiated similar agreements with other customers.

Good Business Practice and Federal Guidance Suggest Reevaluation of Disposal Options

Good business practice suggests that early cost comparisons that are susceptible to uncertainties should be updated before major capital investments are made. This concept is embedded in recent OMB guidance that advocates such revalidation of planning estimates for capital investment decisions. OMB seeks to improve how agencies plan, budget for, and acquire capital assets through guidance issued in Circular A-11, Part 3. This guidance states that agencies should make effective use of competition and consider alternative solutions. In this instance, the competition is between disposal options as well as potential contractors. For these sites, competition between on-site and off-site disposal options could provide several incentives. First, it provides an incentive to keep on-site disposal costs as low as possible.
If off-site disposal is eliminated completely as an option, sites have less incentive to ensure that on-site disposal plans are as economical as possible. Second, it provides incentives for off-site disposal facility contractors to reduce rates and create more competition with on-site disposal. OMB’s 1997 supplement to Part 3 of Circular A-11, the Capital Programming Guide, provides even more definitive guidance. It states that once a capital project has been funded, an agency’s first action is to validate that the planning phase decision is still appropriate. It further states that, because a year or more can elapse between the planning decision and commitment, agencies should review their needs and the capabilities of the market. DOE’s own order implementing this guidance, issued in October 2000, calls for independent review of cost estimates and verification of mission need prior to final approval for construction funding. However, the order does not require the sites to re-validate, using independent reviews, the cost comparisons between on-site and off-site disposal alternatives. Once site officials have refined their disposal project scope to the point where they can request contract proposals for construction, it appears reasonable for them to consider ways that the off-site disposal services market could compete with the on-site proposals. The CERCLA process allows for selection of acceptable alternatives when the business environment changes, as long as these alternatives satisfy the regulatory standards for the cleanup. Moreover, the three sites left open the possibility for changes in their selected remedies. For example, the INEEL ROD calls for further evaluation of cost effectiveness of on-site or off-site disposal prior to excavation of contaminated areas, but does not specify that this should occur prior to major construction phases. 
EPA’s CERCLA guidelines specifically address how agencies need to document changes they make from the alternative selected in the ROD. In some of EPA’s examples, the guidelines suggest that large increases in the waste volumes or disposal costs, or a change in disposal location from on-site to off-site, should be documented in an Explanation of Significant Difference. EPA’s guidelines state that more fundamental changes, such as the discovery that additional costly waste treatment will be needed prior to disposal, may require an amendment to the ROD that must reconsider the nine criteria and invite public comments. Both examples show that the built-in flexibility of the CERCLA process accommodates more cost-effective business decisions as well as improved cleanup technologies.

Changes in Cost Could Greatly Affect Earlier Balance of Costs and Risks

Changes in both on-site and off-site cost assumptions mean that the balance of costs and risks at each site may now be much different than when the comparisons were made. As a result, updated comparisons may show that, on a cost basis alone, off-site disposal is now a much more competitive alternative. However, because cost is only one factor that is considered when making disposal decisions, off-site disposal costs do not necessarily need to drop below on-site disposal costs for off-site disposal to emerge as the better alternative. To determine the relative advantages of the two alternatives, officials must also assess their respective long-term risks, the stewardship activities that will address these risks, and the estimated costs of these activities. These long-term stewardship risks are highly uncertain. As the gap between on-site and off-site disposal costs narrows, this uncertainty becomes relatively more significant to the balancing among CERCLA criteria.
The elapsed time from the ROD until bidding and construction of an on-site disposal facility argues for DOE sites to use current information and ensure that the balance of cost and long-term risk remains favorable.

Comparison Updates Substantially Narrow Cost Gap

Changes in cost assumptions for off-site disposal indicate considerable potential for narrowing the cost gap between these disposal alternatives. Of the three sites, only Oak Ridge has updated its off-site cost analysis to reflect more recent circumstances or volume discounts, and even this estimate has been superseded by additional developments. Table 5 shows how much the gap between on-site and off-site disposal closed when off-site estimates were adjusted to reflect changes in commercial prices for some off-site disposal fees and transportation costs, and in one case, changes in waste type. When on-site and off-site disposal costs become more comparable, other factors begin to assume increased significance. Among these factors is the issue of retaining the waste on site, where it will pose a potential threat to human health and the environment, for all practical purposes, forever. The sites have attempted to incorporate the costs of long-term stewardship into their on-site estimates, but these cost estimates are based on extremely limited information. Expected long-term stewardship costs are uncertain for several reasons. First, the sites develop these estimates before specific plans are drawn up for protecting the waste. Second, there is little historical information on which to base the preliminary estimates, because DOE has closed very few sites. Finally, the preliminary estimates at the three sites did not appear to provide any contingency amounts for non-routine problems that might arise, and some long-term issues are open-ended.
For example, the post-closure plan for the Fernald site, issued in May 1997, states that the post-closure leachate collection and monitoring must continue until leachate is no longer detected or ceases to pose a threat, with no mention of how long that might be. These limitations are likely to persist. In its October 2000 report on long-term stewardship, DOE states: “Given the limitations of available data, considerable uncertainty will be associated with any long-term stewardship cost estimates.” In another recent study, the National Research Council noted that long-term stewardship cost estimates have significant uncertainties due to controversies over such matters as discount rates and hidden costs. DOE is in the process of developing standardized guidance for estimating long-term stewardship costs, and anticipates that sites will include such estimates in their fiscal year 2003 budget process. DOE is also examining alternative financing approaches for long-term stewardship. However, these approaches may not adequately cover the potentially high costs associated with any disposal facility failure and the consequent release of contamination into the environment. Furthermore, alternative financing may not be sufficient to cover all of the estimated post-closure costs. For example, according to site officials, the Oak Ridge site and the Tennessee Department of Environment and Conservation entered into an administrative agreement (Consent Order) to establish the Tennessee Perpetual Care Investment Fund. The Consent Order requires DOE to deposit $1 million into the fund annually for 14 years. The state will use fund income to cover costs of annual post-closure surveillance and maintenance of the disposal facility. Site officials had previously estimated these annual costs would range from about $684,000 to about $922,000 in year 2000 dollars.
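Together with the fund principal of about $11.3 million discussed below, these cost figures imply the required annual return by simple division. The sketch uses only numbers cited in the text and compares the requirement with income at the 3.6 percent average real treasury rate:

```python
# Check of the Tennessee Perpetual Care Investment Fund arithmetic:
# the annual return the principal must earn to cover estimated
# post-closure costs, vs. income at the historical real treasury rate.
principal = 11_300_000                  # approximate principal, year 2000 dollars
cost_low, cost_high = 684_000, 922_000  # estimated annual post-closure costs

required_low = cost_low / principal
required_high = cost_high / principal
print(f"required return: {required_low:.1%} to {required_high:.1%}")

treasury_rate = 0.036  # average real treasury rate over the past decade
income = principal * treasury_rate
shortfall = cost_low - income
print(f"income at {treasury_rate:.1%}: ${income:,.0f} "
      f"(at least ${shortfall:,.0f} short of the low cost estimate)")
```

The division reproduces the roughly 6 to 8 percent range, and at 3.6 percent the fund's income falls well short of even the low end of the estimated costs.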
To generate income in this range, the fund principal—which is equivalent to about $11.3 million in year 2000 dollars—will need to earn an average return of roughly 6 to 8 percent annually. Considering that the average real treasury rate over the past decade was about 3.6 percent, the fund may not generate enough income to cover estimated post-closure costs. Site officials pointed out that uncertainties surrounding long-term stewardship costs also affect the Envirocare facility. Envirocare maintains a trust fund, as required by Utah state rules implementing Nuclear Regulatory Commission requirements, to cover future closure and long-term stewardship costs in case the firm goes out of business. Under CERCLA, according to site officials, the federal government, which disposes of large quantities of waste at the Envirocare facility, would probably be liable in the event that these funds were insufficient. In our view, however, this point does not diminish the importance of evaluating the risk for on-site disposal. For several reasons, potential increases in stewardship costs to DOE at the Envirocare facility are less likely than at the planned on-site disposal facilities, especially those in wetter climates. First, the Envirocare facility is located in a dry climate, which would restrict movement of contaminants from the facility to the underlying groundwater. Second, the groundwater beneath the site is not suitable for human consumption or even for watering livestock because of its high mineral content. Finally, the facility is in a location that is remote from population centers. DOE Should Use Current Information to Validate Planning Decisions The CERCLA decision process, culminating with the ROD, represents planning and agreement for remediation activities at the three sites.
After the ROD is signed, project assumptions and timeframes are subject to change for an extended period, allowing DOE sites time to confirm their earlier conclusions that on-site disposal remains advantageous despite long-term cost and risk uncertainties. DOE sites could validate the early cost comparisons by re-estimating the off-site disposal costs using current disposal and transportation prices combined with baseline assumptions (about waste volumes and characteristics, for example) for the proposed on-site disposal facility. Another approach would be to solicit proposals for off-site disposal along with proposals requested for construction of an on-site facility. Generally, DOE sites plan to award several contracts over the life of the disposal project, each covering a specific construction phase. For example, Fernald site officials expect the final disposal facility to consist of 6 to 8 sub-units called cells. As of November 2000, the site had awarded three separate construction contracts covering various phases of three cells. At Oak Ridge, the baseline budget for the on-site facility calls for two construction phases, with the second phase proceeding in six expansion steps. INEEL officials have stated that their planned on-site disposal facility may be expanded in a second phase to accommodate the large quantity of waste generated after its chemical plant—located adjacent to the on-site facility—is dismantled after 2035. Site officials stated that they will re-evaluate cost effectiveness at that time in accordance with ROD requirements. When sufficient time elapses between such contract phases, DOE could benefit from reevaluating the market for off-site disposal at each phase. Such competition could provide incentives for both on-site and off-site proposals to be as economical as possible.
Once the DOE sites have these “real world” estimates in hand, they would be in a better position to evaluate the extent to which cost savings for on-site disposal continue to balance the long-term uncertainties. Conclusion Unless DOE revisits its disposal needs and its current options for disposing of wastes off-site, it could miss opportunities to reduce cleanup costs at the three sites and at other sites, such as Paducah, that might propose the development of new on-site facilities. Building in a decision checkpoint before major investment decisions are finalized could identify instances when the use of off-site disposal would be less expensive, or when the cost difference no longer outweighs the long-term risks associated with on-site disposal. Such validation of the cost comparison is especially important in instances where DOE is aware that the scope or timeframe of the cleanup effort has changed dramatically. Remaining open to new proposals for off-site disposal would also inject an element of competition into this process. Thus, even if the validation did nothing more than confirm the original decision to dispose of the wastes on-site, it has the potential to ensure that costs are kept to a minimum. Recommendation We recommend that, before constructing new or expanding existing facilities for disposal of cleanup waste at the Fernald, INEEL, and Oak Ridge sites, the Secretary of Energy revisit the cost comparisons for on-site and off-site disposal to determine if the cost estimates used to support the ROD remain valid. If cost advantages for on-site disposal have decreased, the Secretary should reassess whether expected cost savings from on-site disposal facilities outweigh the long-term risks associated with these proposed disposal facilities. We also recommend that DOE validate cost comparisons at any other sites that may decide to develop an on-site disposal facility. Agency Comments We provided a draft of this report to DOE for review and comment.
DOE generally agreed with the report’s conclusion and recommendation that assumptions used to select on-site disposal need to be re-validated before constructing or expanding on-site disposal facilities. DOE pointed out that reassessments are already planned for the disposal cell at the INEEL site in Idaho, which is currently in an early design phase. The Department also stated that it will consider whether to revisit plans to proceed with expansion of existing or construction of new disposal facilities as part of a comprehensive assessment of its Environmental Management program. Appendix III presents DOE’s comments on the report. DOE also suggested several technical clarifications which we have incorporated into the report as appropriate. DOE’s technical comments included the observation that another factor to be considered when evaluating off-site disposal is the receiving facility’s capacity to accommodate incoming waste volumes. GAO agrees that the coordination of multiple waste shipments to an off-site facility would be a challenge that would need to be addressed during any contract negotiations. Scope and Methodology We performed our review at DOE’s Fernald, INEEL, and Oak Ridge sites. We interviewed DOE and contractor officials at each site who are familiar with the sites’ decisions to develop on-site disposal facilities. To understand how site officials evaluated disposal alternatives, we reviewed each site’s Record of Decision, Feasibility Study and other supporting documentation. To determine the extent of EPA and state participation in the decision process, we interviewed officials from regional EPA offices and state environmental agencies that reviewed and concurred with DOE’s decision at each site.
We also reviewed pertinent legislation and implementing regulations and guidance on disposal of radioactive and hazardous wastes, including planning for capital investments in new disposal facilities, and discussed waste disposal issues with officials at DOE headquarters and at the Defense Nuclear Facility Safety Board. To evaluate off-site disposal alternatives used for comparison at each site, we obtained and reviewed information on DOE’s use of the Envirocare commercial disposal facility, and interviewed officials of that company to assess the availability of commercial facilities that dispose of low-level radioactive wastes. We also determined the extent to which DOE’s cost comparisons depended upon the rates assumed for off-site transportation and commercial disposal fees. (See app. II for a further discussion of our scope and methodology.) We conducted our review from May 2000 through May 2001 in accordance with generally accepted government auditing standards. This report contains a recommendation to you. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and to the House Committee on Government Reform not later than 60 days from the date of this letter, and to the House and Senate Committees on Appropriations with the Agency’s first request for appropriations made more than 60 days after the date of this letter. Appendix I: DOE Sites’ Analyses of the Primary Tradeoffs Between On-Site and Off-Site Disposal Alternatives To select a cleanup alternative, officials at the Fernald, Oak Ridge, and INEEL sites weighed the various cleanup approaches and made tradeoffs according to each site’s unique conditions and priorities. Using CERCLA’s five balancing criteria (see Table 6), site officials compared the advantages and disadvantages of their on-site and off-site disposal alternatives.
Their analyses relied on site-specific information developed in their feasibility studies, and varied in depth according to the availability of data and the importance of each criterion at the site. Each site issued a Proposed Plan that summarized the comparative analysis and designated on-site disposal as the preferred alternative for the cleanup approach. After considering public comments on the Proposed Plan, each of the three sites issued a Record of Decision selecting an on-site disposal approach. The following brief summaries describe each site’s analysis of the primary tradeoffs that were considered between its on-site and off-site alternatives of similar cleanup scope. Fernald According to the Fernald site’s 1995 Proposed Plan, officials preferred the on-site disposal alternative after determining that this approach: 1) was reliable over the long term, 2) offered the lowest overall short-term risks, 3) was less costly in comparison to other alternatives, and 4) employed technologies that could be implemented. Although officials concluded that on-site disposal was reliable over the long term, their comparative analysis showed that the off-site alternative held an advantage for long-term effectiveness. This analysis pointed out that off-site disposal left the least amount of contamination at the site and did not require engineering and institutional controls to be reliable over the long term. In contrast, any on-site disposal facility at Fernald would need a design that ensured protection of the Great Miami Aquifer for thousands of years. Furthermore, Ohio’s solid waste disposal restrictions prohibit building such a landfill over the aquifer, which was designated as a sole source aquifer under the Safe Drinking Water Act. EPA and the Ohio EPA agreed to waive this restriction if the proposed on-site facility could be designed to meet equivalent safety standards.
To apply CERCLA criteria to the Fernald site, officials weighed the long-term advantage of disposing of all waste off-site against disadvantages of this approach, some of which were of significant concern to various stakeholders. These disadvantages appeared under three CERCLA criteria: Site officials judged short-term risks for the off-site disposal option to be higher overall based on increased risks associated with shipping large quantities of waste by rail across country. Officials quantified the increased transportation risks for the comparable off-site alternative in their site’s feasibility study as approximately 10 injuries and 3 fatalities (for approximately 20,000 rail cars traveling to Utah and back). The site’s comparison of life cycle costs showed that cleanup approaches depending mainly on off-site disposal were more expensive than approaches with an on-site disposal facility. Its detailed comparison of alternatives showed that, for disposal of similar waste volumes, the estimated cost for off-site disposal was 34 percent more than the on-site estimate. In their proposed plan, site officials noted that the accuracy of the cost estimates typically varied from –30 to +50 percent because of underlying uncertainties in the available information used to develop them. Site officials stated that other criteria, particularly the plan’s implementability and community concerns about off-site rail transportation, played a more significant role in the site’s final decision. Site officials questioned whether the off-site alternative could be successfully implemented if off-site disposal facilities became unavailable over the projected 22-year duration of the cleanup. Furthermore, they feared that opposition to shipping large volumes of radioactive waste to western states could hinder Fernald’s access to off-site disposal for its more concentrated wastes, which cannot safely remain at the Fernald location.
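The significance of the –30 to +50 percent estimating accuracy can be illustrated with a short calculation. The sketch below (in Python, purely for illustration; the $100 million on-site base is a hypothetical round number, while the 34 percent gap and the accuracy band come from the Fernald plan) shows that the ranges implied by that accuracy band overlap, so the nominal cost gap fell well within the stated estimating uncertainty.

```python
# Illustration: a -30%/+50% estimating accuracy band can swallow a 34%
# cost gap. The 34% gap and the accuracy band are from the Fernald
# proposed plan; the $100 million base estimate is hypothetical.

def estimate_range(point, low=-0.30, high=0.50):
    """Bounds implied by the stated estimating accuracy."""
    return point * (1 + low), point * (1 + high)

on_site = 100.0            # hypothetical on-site estimate, $ millions
off_site = on_site * 1.34  # off-site was 34 percent higher

on_lo, on_hi = estimate_range(on_site)
off_lo, off_hi = estimate_range(off_site)

# The ranges overlap if the off-site low end is below the on-site high end.
print(f"on-site:  ${on_lo:.1f}M to ${on_hi:.1f}M")
print(f"off-site: ${off_lo:.1f}M to ${off_hi:.1f}M")
print("ranges overlap:", off_lo < on_hi)
```

With these figures, the off-site estimate could plausibly be as low as about $94 million while the on-site estimate could be as high as $150 million, so the 34 percent difference alone could not settle the comparison.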
Oak Ridge The Oak Ridge site’s proposed plan (January 1999) stated a preference for the on-site disposal alternative after showing that on-site disposal offered comparable protection at lower cost and less transportation risk than its off-site alternative. The plan noted that the cost advantage was only significant for estimates that used the high end of the projected range of the anticipated waste volumes requiring disposal. Similar to the Fernald plan, the Oak Ridge plan also notes that concerns by states receiving the wastes for off-site disposal could hinder access to off-site disposal for large volumes of waste from the Oak Ridge Reservation. Site officials concluded that an on-site disposal facility would provide adequate long-term protection if engineering barriers were designed to contain waste indefinitely. To ensure the long-term integrity of the facility, they adopted the following three strategies: 1) design the disposal facility to meet or exceed long-term safety requirements, 2) limit the level of contamination allowed in the facility so that any leaks would pose no unacceptable risks, and 3) provide for long-term monitoring and facility maintenance. The facility’s design also addresses the need to provide groundwater protection equivalent to that required for landfills under the Toxic Substances Control Act of 1976. That act, as implemented by federal regulations, requires the bottom of a landfill liner to be 50 feet above the historical high groundwater table. Based on the protection afforded by the facility’s location and design (predominantly aboveground), EPA agreed to waive this technical requirement. Unlike the Fernald site, where the cleanup is expected to render most of the site accessible to the public, the Oak Ridge Reservation expects to restrict public access to many areas indefinitely and leave significant contamination on the site, including areas near the proposed on-site facility location.
For various technical and safety reasons, DOE does not plan to excavate these areas. As a result, some contaminated areas around the Oak Ridge site will pose long-term risks regardless of whether an on-site disposal facility is constructed. Site officials performed a site-wide (composite) analysis of health risks, and estimated that the radiation from the proposed on-site facility would amount to approximately 1.1 millirem per year (after 1,000 years). This amount represents roughly one-quarter of the estimated radiation dose from all sources within Bear Creek Valley after remediation, and according to site officials, is well within the established values for protection of human health and the environment. Along with their conclusion that on-site disposal provided comparable protection to the off-site alternative, site officials found that two other CERCLA criteria gave the advantage to the on-site alternative: The comparison of estimated costs for on-site and off-site disposal showed that on-site disposal cost significantly less only under the high volume scenario. This high volume scenario envisioned more extensive site-wide cleanup at the Oak Ridge Reservation than DOE’s baseline assumptions. By the time the ROD was issued in November 1999, site officials considered the high volume scenario to be the most realistic and selected the on-site disposal alternative based, in part, on cost comparisons estimated for the higher waste volumes. Based on calculations in their feasibility study, site officials concluded that the on-site disposal alternative had significantly less transportation risk than the off-site disposal alternative. The feasibility study reported that the risk of transportation accident-related injuries or fatalities was highest for off-site scenarios that used trucks (111 injuries and 10 fatalities).
For rail transport of the high-end waste volume to the off-site facility in Utah, the risks were 8.2 injuries and 0.07 fatalities, compared to 0.41 injuries and 0.003 fatalities for the small number of rail shipments required for the on-site alternative. According to the study, the risks from radiological exposure during transportation were very small for either alternative. INEEL The INEEL proposed plan (October 1998) proposed on-site disposal as the preferred alternative, stating that the on-site approach ensures long-term protection of human health and the environment, complies with applicable legal requirements, and is a permanent and cost-effective solution. According to the summary comparative analysis, three criteria differentiated between on-site and off-site disposal alternatives: short-term effectiveness, implementability, and cost. The proposed plan does not differentiate between the long-term effectiveness for on-site and off-site disposal. It concludes that, when compared to alternatives that capped waste in place, the two cleanup approaches provided equivalent long-term protection because each excavated contaminated soils and disposed of them in an engineered disposal facility—regardless of the facility’s location. The plan, and the subsequent ROD issued one year later, further noted that the on-site disposal facility would be designed to protect groundwater quality in the subterranean Snake River Plain Aquifer, as well as to prevent external exposure to radiation. Similar to the analysis by Oak Ridge officials, INEEL officials relied upon the adequacy of the facility’s design, as well as other strategies intended to maintain protectiveness over the long term, to reach their conclusion that on-site disposal is as protective as off-site disposal in the long term.
When site officials evaluated three other CERCLA criteria, they found that the off-site disposal alternative had the following disadvantages when compared with the on-site alternative: In the short term, officials found that both on-site and off-site disposal alternatives posed minor risks to workers or the environment, and that the off-site alternative posed an additional minor risk to communities. The site’s feasibility study stated more specifically that the off-site alternative would pose some increased risk to communities from transport and potential railroad accidents. However, the study further noted that the rail lines passed through very rural communities, and stated that potential risk should be minimal. In the proposed plan, site officials concluded that the off-site disposal alternative would be the most difficult to implement because it would require the transport of “large volumes of contaminated soils great distances and depends upon the availability of off-site disposal capability.” The feasibility study did not provide support for this concern, and stated that “off-site disposal…has been previously performed; therefore this alternative should be administratively feasible.” In their proposed plan, INEEL officials concluded that the off-site disposal alternative was the most expensive. They compared the estimated costs for excavation and disposal of 63,000 cubic meters of waste projected for the cleanup area under the Record of Decision. The off-site estimate was $221 million, 160 percent more costly than the on-site estimate of $85 million. In the proposed plan, officials also noted that the on-site disposal facility would be constructed to accept contaminated cleanup materials from sites located throughout the INEEL site. They estimated that off-site disposal for the projected 356,000 cubic meters of site-wide waste would cost 224 percent more than an on-site alternative ($605 million versus $187 million).
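The “percent more costly” figures in this comparison are straightforward percent differences between the estimates. A minimal check (in Python, for illustration only; the dollar figures are those reported in the proposed plan):

```python
# Percent by which the off-site estimate exceeds the on-site estimate.
# Dollar figures (in $ millions) are from the INEEL proposed plan.

def percent_more(off_site, on_site):
    return round((off_site - on_site) / on_site * 100)

# ROD cleanup area (63,000 cubic meters): $221M off-site vs. $85M on-site
print(percent_more(221, 85))   # prints 160

# Site-wide projection (356,000 cubic meters): $605M vs. $187M
print(percent_more(605, 187))  # prints 224
```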
Site officials stated that they developed their site-wide cost estimates by modifying the original estimates for 63,000 cubic meters. Appendix II: Scope and Methodology In February 2000, DOE adopted a new policy allowing all DOE sites to dispose of low-level and mixed radioactive wastes at its facilities located at the Nevada Test Site and the Hanford Reservation. Sites can also use commercial off-site disposal facilities under certain circumstances. DOE’s policy was aimed at containing low-level and mixed wastes generated from its past or ongoing operations. However, the Department expects to generate significantly larger quantities of low-level and mixed wastes from its cleanup operations. In 1996, the Hanford site opened a facility for disposal of its on-site cleanup wastes under the CERCLA program. Since 1996, three other DOE sites have made decisions to develop new on-site disposal facilities for their low-level cleanup wastes governed by CERCLA, and are in various stages of planning, constructing, and filling these facilities. These sites are the Fernald Environmental Management Project (Ohio); the Oak Ridge Reservation (Tennessee); and the Idaho National Engineering and Environmental Laboratory (INEEL) (Idaho). Plans for the new facilities at these sites entail permanent on-site disposal of significant quantities of wastes that would otherwise qualify for disposal off-site under DOE’s policies. We reviewed the sites’ decisions to determine (1) the extent that site officials considered the comparative costs and risks of off-site disposal options and (2) the extent that site officials revisited these cost and risk assessments as circumstances warranted. In addition, at least one other site, the Paducah Gaseous Diffusion Plant (Kentucky), is currently considering proposals to develop a new on-site facility. Our review covered the decisions already made at Fernald, Oak Ridge, and INEEL.
We did not review the decision at Hanford because DOE’s recent policy designates Hanford as one of two preferred sites for acceptance of DOE-wide low-level wastes. We visited the three sites to observe the locations of the new disposal facilities and to determine what alternatives, if any, each site considered for disposal of their cleanup wastes. We interviewed site officials and reviewed decision documents to determine the factors that each site considered, including risks and costs of various disposal alternatives. We also interviewed officials from the state and Environmental Protection Agency offices that reviewed and concurred with DOE’s decision at each site. To understand Departmental and legal influences for the sites’ waste disposal decisions, we consulted legislative and executive guidance on radioactive waste disposal and capital investment planning. We also interviewed federal officials at DOE headquarters as well as the Defense Nuclear Facility Safety Board. To determine current off-site disposal prices for low-level radioactive wastes, we reviewed information on recent uses of commercial disposal by various DOE sites. We also reviewed DOE’s disposal contracts with Envirocare and interviewed company officials. We conducted a limited analysis to determine the extent that each site’s cost comparison depended upon the rates used for off-site transportation and commercial disposal fees. To illustrate how much the gap between on-site and off-site disposal estimates can close when off-site rates are adjusted to reflect changes in commercial prices (and in one case, changes in projected waste type), we adjusted off-site costs as follows: For Fernald and INEEL, we substituted the latest contract prices for disposing of low-level bulk soil waste off-site in place of the rates used by the sites for low-level waste in their original estimates for cost comparison.
(Neither Fernald nor INEEL had an updated version of the off-site estimate that we could have used to compare to current on-site estimates.) For INEEL, we also substituted transportation rates that were more in line with current prices. This exercise decreased the difference between on-site and off-site disposal costs by 36 percent at Fernald and 22 percent at INEEL. For Oak Ridge, we used the site’s most recent cost comparison analysis, and substituted updated estimates of the type of wastes, as well as current prices for low-level waste disposal and commercial estimates of transportation rates. When Oak Ridge officials prepared their most recent off-site estimate in 1999, they assumed that 44 percent of the waste would be classified as hazardous for off-site disposal. They have since revised the figure to less than 1 percent. The combined effect of reducing the proportion of hazardous waste and applying the lower contract and transportation prices decreased the gap between on-site and off-site disposal cost estimates by 51 percent. We conducted our review from May 2000 through May 2001 in accordance with generally accepted government auditing standards. Appendix III: Comments From the Department of Energy Appendix IV: GAO Contacts and Staff Acknowledgements In addition to those named above, John Cass, Linda Chu, Christine Colburn, Daniel Feehan, Hova Risen-Robertson, and Stan Stenerson made key contributions to this report. Related GAO Products Low-Level Radioactive Wastes: Department of Energy Has Opportunities to Reduce Disposal Costs (GAO/RCED-00-64, Apr. 12, 2000). Low-Level Radioactive Wastes: States Are Not Developing Disposal Facilities (GAO/RCED-99-238, Sept. 17, 1999). Nuclear Waste: DOE’s Accelerated Cleanup Strategy Has Benefits but Faces Uncertainties (GAO/RCED-99-129, Apr. 30, 1999). Nuclear Waste: Corps of Engineers’ Progress in Cleaning Up 22 Nuclear Sites (GAO/RCED-99-48, Feb. 26, 1999).
Department of Energy: Alternative Financing and Contracting Strategies for Cleanup Projects (GAO/RCED-98-169, May 29, 1998). Radioactive Waste: Interior’s Continuing Review of the Proposed Transfer of the Ward Valley Waste Site (GAO/RCED-97-184, July 15, 1997). Department of Energy: Management and Oversight of Cleanup Activities at Fernald (GAO/RCED-97-63, Mar. 14, 1997). Radioactive Waste: Status of Commercial Low-Level Waste Facilities (GAO/RCED-95-67, May 5, 1995).
Background Disability Claims Process VA pays monthly disability compensation to veterans with service-connected disabilities (i.e., injuries or diseases incurred or aggravated while on active military duty) according to the severity of the disability. VA also pays additional compensation for certain dependent spouses, children, and parents of veterans. VA’s disability compensation claims process starts when a veteran submits a claim to VBA (see fig. 2). A claim folder is created at 1 of VA’s 57 regional offices, and a Veterans Service Representative (VSR) then reviews the claim and helps the veteran gather the relevant evidence needed to evaluate the claim. Such evidence includes the veteran’s military service records, medical examinations, and treatment records from Veterans Health Administration (VHA) medical facilities and private medical service providers. Also, if necessary to provide support to substantiate the claim, VA will provide a medical examination for the veteran. Once VBA has gathered the supporting evidence, a Rating Veterans Service Representative (RVSR)—who typically has more experience at VBA than a VSR—evaluates the claim and determines whether the veteran is eligible for benefits. If so, the RVSR assigns a percentage rating. Later, the veteran can reopen a claim to request an increase in disability compensation from VA if, for example, a service-connected disability has worsened or a new disability arises. If the veteran disagrees with VA’s decision regarding a claim, he or she can submit a written Notice of Disagreement to the regional office handling the claim. In response to such a notice, VBA reviews the case and provides the veteran with a written explanation of the decision if VBA does not grant all appealed issues. Appendix II contains more information regarding VBA’s notifications to veterans throughout the disability compensation claims and appeals processes.
If additional evidence is provided, VBA reviews the case again and if this new evidence does not result in a grant of all appealed issues, VBA produces another written explanation of the decision. If the veteran further disagrees with the decision, he or she may appeal to the Board of Veterans’ Appeals (the Board). Before transferring the appeal to the Board, VBA reviews the case again and then certifies that the appeal is ready for review by the Board. After the appeal has been certified, the Board conducts a hearing if the veteran requests one, then grants benefits, denies the appeal, or returns the case to VBA to obtain additional evidence necessary to decide the claim. If the veteran is dissatisfied with the Board’s decision, he or she may appeal, in succession, to the U.S. Court of Appeals for Veterans Claims, to the Court of Appeals for the Federal Circuit, and finally to the Supreme Court of the United States. VA’s Duty to Assist Requirements Congress clarified VA’s duties with regard to assisting in the development of claims in the Veterans Claims Assistance Act of 2000 (VCAA). VCAA eliminated the requirement that a veteran submit a “well-grounded” claim before VA could assist in developing the claim and instead obligated the agency to assist a claimant in obtaining evidence that is necessary to establish eligibility for the benefit being sought. Specifically, VA must: (1) notify claimants of the information necessary to complete the application; (2) indicate what information not previously provided is needed to substantiate the claim; (3) make reasonable efforts to assist claimants in obtaining evidence to substantiate claimants’ eligibility for benefits, including relevant records; and (4) notify claimants when VA is unable to obtain relevant records. 
According to VA regulations, VA efforts to obtain federal records should continue until the records are obtained or until VA has deemed it reasonably certain that such records do not exist or that further efforts to obtain those records would be futile. Timeliness of Claims and Appeals Processing Timeliness of VA compensation rating claims and appeals processing has worsened in recent years. As a key indicator of VBA’s performance in claims and appeals processing, timeliness is measured in various ways. To measure overall claims processing timeliness, VBA uses two measures: (1) the number of days the average pending claim has been awaiting a decision (Average Days Pending) and (2) the average number of days that VBA took to complete a claim where a decision has been reached (Average Days to Complete). Both measures of claims processing timeliness have worsened substantially over the last several years (see fig. 3). VBA also collects data on the timeliness of the different phases of the claims process, which is used to identify trends and bottlenecks throughout the process. In fiscal year 2011, each phase took longer on average than its stated agency timeliness target (see fig. 4). The evidence gathering phase is the most time-intensive phase, taking over 5 months (157 days) on average in fiscal year 2011 and continuing to grow throughout fiscal year 2012. The timeliness of appeals processing at VA regional offices has worsened as well. The average timeframes for VBA’s responses to Notices of Disagreement and for certification of appeals to the Board have increased since fiscal year 2009 (see fig. 5). Rising Workloads, along with Program Rules and Inefficient Processes, Contribute to Lengthy Processing Time Frames Rise in Claims Submitted Is Outpacing Claims Production In recent years, VA’s claims processing production has not kept pace with the increase in incoming claims.
In fiscal year 2011, VA completed over 1 million compensation rating claims, a 6 percent increase from 2009. However, the number of VA compensation rating claims received has grown 29 percent—from 1,013,712 in fiscal year 2009 to 1,311,091 in fiscal year 2011 (see fig. 6). As a result, the number of backlogged claims—defined as those claims awaiting a decision for more than 125 days—has increased substantially since 2009. As of August 2012, VA had 856,092 pending compensation rating claims, of which 568,043 (66 percent) were considered backlogged. Similar to claims processing, VA regional office appeals processing has not kept pace with incoming appeals received. The number of Notices of Disagreement—the first step in the appeals process, in which the veteran provides a written communication to VBA that he or she wants to contest the claims decision—received by VBA fluctuated over the last 4 years, yet those awaiting a decision grew 76 percent over that time period (see fig. 7). Moreover, the number of Statements of the Case—an explanation of VBA's decision on the appellant's case—that were mailed by VBA decreased 24 percent over the last 4 years—from 100,291 in 2009 to 76,685 in 2012. In addition, the time it took to mail a Statement of the Case increased 57 percent over that time period—from 293 days to 460 days on average. A number of factors have contributed to the substantial increase in claims received. One factor was the commencement in October 2010 of VBA's adjudication of 260,000 previously denied and new claims when a presumptive service connection was established for three additional Agent Orange-related diseases. VBA gave these claims a high priority and assigned experienced claims staff to process and track them. VBA officials said that 37 percent of its claims processing resources nationally were devoted to adjudicating Agent Orange claims from October 2010 to March 2012.

VBA officials in one regional office we spoke to said that all claims processing staff were assigned solely to developing and rating Agent Orange claims for 4 months in 2011, and that no other new and pending claims in the regional office's inventory were processed during that time. Also during this time period, special VBA teams—known as brokering centers—which previously accepted claims and appeals from regional offices experiencing processing delays, were devoted exclusively to processing Agent Orange claims. According to VBA, other factors that contributed to the growing number of claims include an increase in the number of veterans from the military downsizing after 10 years of conflict in Iraq and Afghanistan, improved outreach activities and transition services to servicemembers and veterans, and difficult financial conditions for veterans during the economic downturn. In conjunction with an increase in claims received, VBA officials said that claims today are more complex than in the past. As we reported in 2010, VBA said it is receiving more claims for complex disabilities related to combat and deployments overseas, including those based on environmental and infectious disease risks and traumatic brain injuries. Claims with many conditions can take longer to complete because each condition must be evaluated separately and then combined into a single percentage rating. According to VA, in 2011, the number of medical conditions claimed by veterans who served in Iraq and Afghanistan averaged 8.5, an increase from 3-4 conditions per claim for Vietnam veterans. As we reported in 2010, VBA's goal is for newly hired VSRs to be proficient within 18 months and new RVSRs to be proficient within 2 years (see GAO-10-213). However, becoming proficient often takes longer—about 3 to 5 years for RVSRs.
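The combined percentage rating mentioned above is not a simple sum: each successive condition applies only to the veteran's remaining, non-disabled capacity, and the final figure is rounded to the nearest 10 percent. A minimal sketch of that whole-person arithmetic follows (an illustration only, not VA's official procedure, which uses the published combined ratings table in 38 CFR 4.25 with integer intermediate values):

```python
def combined_rating(ratings):
    """Combine individual disability percentages using the whole-person
    method: each added condition reduces only the remaining capacity.
    Simplified sketch; VA's official table rounds intermediate values."""
    combined = 0.0
    for r in sorted(ratings, reverse=True):
        # Apply this condition's percentage to the remaining capacity.
        combined += (100 - combined) * r / 100
    # The final combined value is rounded to the nearest 10 percent.
    remainder = combined % 10
    return int(combined - remainder + (10 if remainder >= 5 else 0))
```

For example, individual ratings of 50 and 30 percent combine to 65 (50 plus 30 percent of the remaining 50), which rounds to a 70 percent combined rating rather than 80. This is one reason multi-condition claims take longer to rate: each condition must be evaluated on its own before the combination is computed.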
While VBA hired additional temporary staff using American Recovery and Reinvestment Act of 2009 funds, these staff were given limited training and less complex claims processing tasks. According to VBA officials, in 2011, VA received authority to convert temporary employees into permanent staff, which required additional training and mentoring. In addition, officials at one regional office said the number of claims processing staff assigned to outreach activities has increased. Specifically, at the time of our review, 37 out of 302 claims processing staff were conducting outreach activities to servicemembers and veterans, such as giving briefings and distributing materials at military bases about pre-discharge and transition assistance programs. According to VBA officials, a primary reason that appeals timeliness at VA regional offices has worsened is a lack of staff focused on processing these appeals. VBA officials at each of the five regional offices we met with stated that over the last several years appeals staff have also had to train and mentor new staff, conduct quality reviews, and develop and rate disability claims to varying degrees. For example, at one regional office, all staff on the appeals team focused exclusively on rating disability claims for a 9-month period in 2010 instead of processing appeals. Officials at another regional office stated that until 2012, their appeals staff spent up to 2 weeks per month on non-appeals tasks. In addition, we reported in 2011 that regional office managers estimated that Decision Review Officers (DRO) spent on average 36 percent of their time on non-appeals processing tasks. A 2012 VA OIG report noted that VA regional office managers did not assign enough staff to process appeals, diverted staff from processing appeals, and did not ensure that appeals staff acted on appeals promptly because, in part, appeals staff were assigned responsibilities to process initial claims, which were given higher priority.
The VA OIG recommended that VBA identify the staffing resources needed to meet its appeals processing goals, conduct DRO reviews on all appeals, and revise productivity standards and procedures to emphasize processing appeals in a timely manner, such as implementing criteria requiring appeals staff to initiate review or development of Notices of Disagreement and certified appeals within 60 days of receipt. VBA agreed with the VA OIG's findings and is conducting a pilot to assess the feasibility of addressing these recommendations.

Program Requirements Contribute to Long Processing Times

According to VA officials, federal laws and court decisions over the past decade have expanded veterans' entitlement to benefits but have also added requirements that can negatively affect claims processing times. For example, the VCAA requires VA to assist a veteran who files a claim in obtaining evidence to substantiate the claim before making a decision. This requirement includes helping veterans obtain all relevant federal and non-federal records. VA is required to continue trying to obtain federal records, such as VA medical records, military service records, and Social Security records, until they are either obtained or the associated federal entity indicates the records do not exist. VA may continue to process the claim and provide partial benefits to the veteran, but the claim cannot be completed until all relevant federal evidence is obtained. While VA must consider all evidence submitted throughout the claims and appeals processes, if a veteran submits additional evidence or adds a condition to a claim late in the claims process, it can require rework and may subsequently delay a decision, according to VBA central office officials. VBA officials at regional offices we spoke to said that submitting additional evidence may add months to the claims process. New evidence must first be reviewed to determine what additional action, if any, is required.
Next, another notification letter must be sent to the veteran detailing the new evidence necessary to redevelop the claim and the additional steps VA will take in light of the new evidence. Then, VA may have to obtain additional records or order another medical examination before the claim can be rated and a decision can be made. Furthermore, while VA may continue to process the claim and provide partial benefits to the veteran, a claim is not considered "complete" until a decision is made on all conditions submitted by the veteran. Moreover, a veteran has up to 1 year from the notification of VA's decision to submit additional evidence in support of the claim before the decision is considered final. In addition, a veteran may submit additional evidence in support of an appeal at any time during the process. If the veteran submits additional evidence after VA completes a Statement of the Case, VA must review the new evidence, reconsider the appeal, and provide another written explanation of its decision—known as a Supplemental Statement of the Case. Congress recently passed a law allowing VA to waive review of additional evidence submitted after the veteran has filed a substantive appeal and instead have the new evidence reviewed by the Board, to expedite VA's process of certifying appeals to the Board. While federal law requires veterans to use an application form prescribed by VA when submitting a claim for original disability compensation benefits, VBA central office officials said they accept reopened claims or claims requesting an increase in disability compensation benefits in any format, which can contribute to lengthy processing times. VBA will accept an original disability claim informally if it is submitted in a non-standard format, but within 1 year the veteran must submit a VA Form 21-526, Veteran's Application for Compensation and/or Pension.
VBA does not track the number of claims submitted in non-standard formats; however, officials at three regional offices we met with said they receive claims submitted in various formats, including hand-written letters. Officials at these three regional offices said that when such claims are submitted, there is a risk that claims staff may not be able to identify all the conditions the veteran would like to claim during initial development. For example, officials at one regional office stated that if these conditions are discovered later in the process, then VA must redevelop the claim—which could include sending another letter to the veteran, obtaining additional records, and conducting another medical exam—before the claim can be rated and a benefit amount determined and disbursed. VBA officials said they expect the number of non-standard applications for disability claims to decrease as more veterans file claims electronically through the Veterans On Line Application (VONAPP), which is available at VA's eBenefits website. Similar to processing for reopened claims, VA's procedures allowing veterans to submit appeals in any format can negatively affect appeals processing times, according to VBA officials. For example, a veteran's intention to appeal a prior decision may be overlooked initially by staff because there is no standard appeals submission form and a veteran's statement to appeal a prior decision may be included along with other written correspondence for other purposes, such as submitting a new claim, according to VBA officials. When appeals are overlooked and later found, it can delay recording Notices of Disagreement in appeals data systems and result in longer processing times, according to VBA officials.
Gathering Records from Federal Agencies and Others Can Take Months

According to VBA officials, delays in obtaining military service and medical treatment records, particularly for National Guard and Reserve members, are a significant factor lengthening the evidence gathering phase. According to VBA officials, 43 percent of Global War on Terror veterans are National Guard and Reserve members. According to a VA official, Department of Defense (DOD) Instruction 6040.45 requires military staff to respond to VA requests for National Guard and Reserve records in support of VA disability compensation claims. However, VBA area directors and officials at all five regional offices we met with acknowledged that delays in obtaining these records are a system-wide challenge. Military records of National Guard or Reserve members can be especially difficult to obtain because these servicemembers typically have multiple, non-consecutive deployments with different units, and their records may not always be held with their reserve units and may exist in multiple places. Moreover, according to VBA officials, National Guard and Reserve members may be treated by private providers between tours of active duty, and VA may have to contact multiple military personnel and private medical providers to obtain all relevant records, potentially causing delays in the evidence gathering process. Difficulties in obtaining timely and complete medical information, especially from private medical providers, can also contribute to a lengthy evidence gathering phase. For example, officials at one regional office said the process may be delayed if veterans are slow to return the consent forms that allow VA to pursue private medical records. Also, according to VBA officials, private medical providers may not respond to VA records requests in a timely fashion.
In addition, officials at one regional office we met with mentioned that time frames can also be affected if veterans fail to show up for scheduled examinations. Officials at two regional offices we met with said that even when medical records are obtained, medical exams and opinions may include erroneous information or be missing necessary evidence, which then requires VA officials to follow-up with medical providers to clarify information. In some cases, another examination must be ordered before a decision can be made on the claim, which can add months to the process. VBA area directors acknowledged that obtaining complete and sufficient medical information is a system-wide challenge. Difficulties obtaining Social Security Administration (SSA) medical records, as one specific example, can also lengthen the evidence gathering phase. Currently, an interagency agreement exists that establishes the terms and conditions under which SSA discloses information to VA for use in determining eligibility for disability benefits, according to VBA officials. Although VBA regional office staff have direct access to SSA benefits payment histories, they do not have direct access to medical records held by SSA. If a veteran submits a disability claim and reports receiving SSA disability benefits, VA is required to help the veteran obtain relevant federal records, including certain SSA medical records, to process the claim. VBA’s policy manual instructs claims staff to fax a request for medical information to SSA and if no reply is received, to wait 60 working days before sending a follow-up fax request. If a response to the follow-up request is not received after 30 days, the manual instructs claims staff to send an email request to an SSA liaison. VBA officials at four of the five regional offices we reviewed told us that when following this protocol, they have had difficulty obtaining SSA medical records in a timely fashion. 
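The escalation schedule described above (initial fax, a follow-up fax after 60 working days with no reply, then an email to the SSA liaison 30 days after that) can be expressed as a small date calculation. The sketch below is a hypothetical illustration, not VBA code; it counts working days as Monday through Friday and ignores federal holidays:

```python
from datetime import date, timedelta

def add_working_days(start, n):
    """Advance n working days (Mon-Fri), ignoring holidays for simplicity."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

def ssa_followup_dates(initial_fax):
    """Per the protocol described in the manual: follow-up fax after 60
    working days with no reply, then email the SSA liaison 30 days later."""
    followup_fax = add_working_days(initial_fax, 60)
    email_liaison = followup_fax + timedelta(days=30)
    return followup_fax, email_liaison
```

Under this schedule, a request faxed on a Monday would not be escalated to the follow-up fax until roughly 12 weeks later, which illustrates how a single unanswered records request can add months to the evidence gathering phase.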
Moreover, they reported having no contact information for SSA, beyond the fax number, to help process their requests. In complying with VA's duty to assist requirement, VBA staff told us they continue trying to retrieve SSA records by sending follow-up fax requests until they receive the records or receive a response that the records do not exist. VBA area directors said some regional offices have established relationships with local SSA offices and have better results, but obtaining necessary SSA information has been an ongoing issue nationally. For example, officials at one regional office said a response from SSA regarding a medical records request can sometimes take more than a year to receive.

Some Work Processes Are Inefficient

VBA's work processes, stemming mainly from its reliance on a paper-based claims system, can lead to misplaced or lost documents, which can contribute to lengthy processing times. VBA officials at three of the five regional offices we met with mentioned that errors and delays in handling, reviewing, and routing incoming mail to the correct claim folder can delay the processing of a claim or cause rework. For example, VBA officials at one regional office said that a claim may be stalled in the evidence gathering phase if a piece of mail that contains outstanding evidence is misplaced or lost. In addition, claims staff may rate a claim without knowledge of the additional evidence submitted and then, once the mail is routed to the claim folder, have to rerate the claim in light of the new evidence received. Furthermore, VBA officials at one regional office we met with said that processing can also be delayed if mail staff are slow to record new claims or appeals into IT systems. As of August 2012, VBA took 43 days on average to record Notices of Disagreement in the appeals system—36 days longer than VBA's national target.
In May 2011, the VA OIG reported that VA regional office mailroom operations needed strengthening to ensure that staff process mail in an accurate and timely manner. Specifically, the VA OIG found that staff did not always record incoming mail into IT systems within 7 days of receipt and that they did not properly process and route mail to existing claims folders in a timely fashion in 10 of the 16 VA regional offices they reviewed. VBA area directors said that mail processing timeliness varies by regional office and that the more efficient offices in general do a better job of associating mail with the correct claims folder. In addition, VBA area directors said that standardizing the mail handling and sorting process in an integrated mail processing center—a component of the Claims Organizational Model implemented in 18 regional offices in fiscal year 2012—is intended to improve mail processing by involving more senior staff in the process. VBA officials also said that moving claims folders among regional offices and medical providers contributes to lengthy processing times. According to a 2011 VA OIG report, processing delays occurred following medical examinations because staff could not match claims-related mail with the appropriate claim folders until the folders were returned from the VA Medical Center. In addition, processing halts while a claim folder is sent to another regional office or brokering center. Lastly, according to VBA officials, the lack of an integrated IT system that provides all necessary information and functionality to track and process claims and appeals can decrease the productivity of claims processing staff. For example, according to staff at one VA regional office we spoke with, currently, they must use different systems to track claims folders, order medical exams, record claim processing actions taken by VBA staff and evidence received on a claim, rate claims, process awards, and record the status of appeals to the Board. 
The lack of an integrated system requires staff to enter claim information multiple times, search through multiple systems for claim information, and maintain processing notes on the status of the claim or appeal in multiple systems. For example, officials at two regional offices we met with said RVSRs must enter information into the Rating Board Automation system that was already entered in the Modern Award Processing-Development (MAP-D) system. In addition, appeals staff must maintain claim processing notes and information on the status of appeals in two different systems—one maintained by the Board (Veterans Appeals Control and Locator System) and one maintained by VBA (MAP-D). According to regional office staff, the redundant data entry takes extra time that could have been spent working on other cases. Moreover, staff at one regional office said they did not always keep their claim processing notes up-to-date in both systems.

VBA Is Taking Steps to Improve Claims and Appeals Processing, but Future Impact Is Uncertain

VBA is currently taking steps to improve the timeliness of claims and appeals processing. Based on a review of VA documents and interviews with VBA officials, we identified 15 efforts with a stated goal of improving claims and appeals timeliness. We selected 9 for further review—primarily based on interviews with VBA officials and a review of recent VA testimonies. VBA's improvement efforts include using existing VBA staff and contractors to manage workload, modifying and streamlining procedures, improving records acquisition, and redesigning the claims and appeals processes (see fig. 8). Although VBA is monitoring these efforts, the planning documents provided to us lack key aspects of sound planning, such as performance measures for each effort. VBA has several ongoing efforts to leverage internal and external resources to help manage its workload (see fig. 8).
One ongoing effort that began in 2001 is the use of brokering centers—13 special teams that process claims transferred from regional offices experiencing a large backlog of claims. As we reported in 2010, these teams are staffed separately from other regional office teams. According to VA officials, brokering centers gather evidence for the claim, make a decision, process award payments, and work on appeals. Brokering center teams processed nearly 171,000 claims in fiscal year 2009, according to the VA OIG. VA central office officials told us that in fiscal years 2010 and 2011, all brokering centers focused exclusively on the re-adjudication of Agent Orange claims. Through the first 11 months of fiscal year 2012, brokering centers processed approximately 24,000 claims. VBA officials at several regional offices told us that brokering, over the past year, has helped to manage their overall claims workload. VBA also began the Veterans Benefits Management Assistance Program (VBMAP) in late fiscal year 2011 to obtain contractor support for evidence gathering for approximately 279,000 disability claims. Under VBMAP, regional offices send cases to a contractor to gather evidence. After evidence has been gathered for an individual claim, the contractor sends the file back to the originating regional office, which reviews the claim for completeness and quality and then assigns a rating. Contractor staff are required to complete their work within 135 days of receiving the file. As of June 2012, VBA regional offices we spoke with were awaiting the first batch of claims that were to be sent to the contractors, so it remains to be seen whether VBMAP reduces processing times. Contractors are required to provide VBA with status reports that include several measures of timeliness, including the time it took to receive medical evidence from providers and the time it took to return a claim to VBA for rating.
VBA Is Changing Procedures and Modifying Requirements to Expedite Claims and Appeals Processing

With the intent of speeding up the claims and appeals processes, VBA has several efforts that modify program requirements or relieve VA of certain duties (see fig. 8). One effort is the Fully Developed Claims (FDC) program, which began as a pilot in December 2008 and was implemented nationwide in June 2010. The FDC program was implemented in response to a congressional mandate that required VBA to conduct a pilot program to expedite processing of fully developed claims in 90 days or less. Normally, once a veteran submits a claim, VBA will review the claim and then send the veteran a letter detailing additional evidence required to support the claim. The FDC program eliminates this step and saves time because the required notification is provided to the veteran directly on the FDC form. The program also attempts to reduce the time VBA would normally spend gathering evidence for the veteran. In exchange for expedited processing, veterans participating in the FDC program send VBA any relevant private medical evidence with the claim and certify that they have no additional evidence to provide. While VBA officials and VSOs expect the program to reduce processing delays for veterans, claims submitted without the required evidence are considered incomplete. Furthermore, claims submitted under the FDC program with incomplete evidence sometimes lose their priority status and are processed with VBA's non-expedited workload, which can result in additional processing time. According to VBA officials, in the first 2 years of the program, VBA processed 33,001 FDC claims, taking an average of about 98 days to complete—8 days longer than the 90-day goal for these claims. VBA officials attribute not meeting FDC processing time goals to the increased workload resulting from processing Agent Orange claims.
As of July 2012, veteran participation in the FDC program has been low—only 4 percent of all compensation rating claims submitted in 2012. A VBA official told us that, in response to VSO input, VBA has made the FDC form easier to use. Moreover, the VBA official we spoke with expects more FDC claims once veterans are able to file claims electronically. While FDC claims are currently submitted on paper, the proposed electronic system will guide veterans through the steps to gather the necessary evidence in support of their claim and draw the information needed for the form from VBA electronic databases. VBA also began the Appeals Design Pilot—implemented at a single regional office—in spring 2012 to expedite appeals processing. The pilot modifies several program procedures with the goal of decreasing appeals processing times, according to management at the regional office conducting the pilot. For example, veterans participating in the pilot do not file appeals in non-traditional formats. Instead, they use a standardized Notice of Disagreement form. The pilot also forgoes the election of a traditional versus a DRO review of an appeal—providing DRO reviews for all appeals from veterans participating in the pilot. This change eliminates the need for VBA to wait up to 60 days for a veteran to make an election on the type of regional office review in an appeal. In addition, veterans submitting new evidence during the appeal can opt to have their case expedited directly to the Board without having the regional office review the additional evidence submitted. The Appeals Design Pilot also includes several other elements. For example, expedited processing is provided to appeals that are filed with only one or two disabling conditions.
Under the pilot, some VSOs are also waiving the right to a local review of the appeal while preserving the current practice of permitting VSOs to review the appeal once it goes before the Board. From March through June 2012, 2,300 veterans participated in the pilot. According to VBA, pilot changes have, based on early results, significantly improved processing times.

Efforts to Improve Records Acquisition Have Produced Mixed Results

VBA has established efforts to standardize and expedite the process for acquiring veterans' medical records (see fig. 8). According to a VBA official, in September 2010, VBA began the Vendors for Private Medical Records initiative in seven regional offices; the initiative uses a contractor to obtain veterans' medical records from private physicians. According to VBA, as of July 2012, the contractor had obtained 39,662 treatment records from private medical providers. VBA officials at one site told us that the contractor is frequently able to communicate with doctors more quickly because, unlike claims staff who are tasked with multiple duties, the contractor focuses solely on obtaining medical records. VBA has another effort intended to reduce the amount of time spent processing medical documentation. Specifically, physicians are asked to complete Disability Benefits Questionnaires (DBQ)—standardized medical forms downloaded from VA's website—that are designed to speed up the evidence gathering process by using check boxes and standardized language intended to more accurately capture the information needed from providers. The DBQ forms have been available since March 2012, and VBA claims staff at the sites we visited reported mixed results.
For instance, the forms have helped to standardize the medical evidence gathering process, but claims staff in four of the regional offices we met with said that some DBQ forms are quite lengthy, requiring them to scan through multiple pages to find certain information, which can be time-consuming. Claims staff also reported that some of the medical terminology used in the forms is not current, which may make it difficult for providers to complete them. VBA officials said that improvements will be made to the forms when the agency converts to a paperless claims system, which might make it easier for claims staff to locate the information contained in them. VBA has begun to track, through its performance reporting system, the number of DBQs completed and the completeness of those submitted by physicians, but it is not measuring the initiative's impact on timeliness.

Efforts to Redesign Key Aspects of the Process Are Under Way without a Comprehensive Plan

In March 2012, VBA implemented a nationwide initiative that requires staff to use the Simplified Notification Letter (SNL), a process to communicate rating decisions to veterans. According to VBA officials, the goal of the SNL is to reduce the time it takes claims staff to provide veterans with claims decisions that are more consistent and easier to understand. The SNL aims to reduce the time that VA staff spend composing rating decisions for claims by providing staff with codes that are associated with template language for rating decisions, instead of the previous practice of composing a free-form narrative for each claims decision. According to claims staff at each of the regional offices we visited, SNL has decreased the time it takes to rate claims, but claims staff in three regional offices told us it created additional steps in preparing the decision letter sent to the veteran, adding time to the processing awards phase.
Claims staff we interviewed in one regional office estimated that the time needed to authorize a claim had increased from 3 minutes to 15 minutes. VBA officials said they have provided additional guidance to staff experiencing challenges with the SNL. In spite of these challenges, VBA reports an increase in production in two regional offices that piloted the SNL initiative. The Claims Organizational Model initiative is aimed at streamlining the overall claims process (see fig. 8). For this initiative, VBA created specialized teams that process claims based on their complexity. Specifically, an “express team” processes claims with a limited number of conditions or issues; a “special operations” team processes highly complex claims, such as former prisoners of war or traumatic brain injury cases; and a core team works all other claims. Each of these teams is staffed with both development and ratings staff, which VBA believes will lead to better coordination and knowledge-sharing. As of August 2012, VBA had implemented the initiative at 18 regional offices. Under this model, VBA also redesigned the procedures that mailrooms use to sort and process incoming claims. According to VBA central office staff, these changes entail incorporating more experienced claims staff to improve the process of routing incoming mail to the appropriate team and claims folder. This change aims to reduce the time it takes for claims-related mail to be entered into the claims processing systems. VBA tracks the impact of the claims process model using existing timeliness metrics and regional office performance measures. In 2010, VBA began to develop the Veterans Benefits Management System (VBMS), a paperless claims processing system that is intended to help streamline the claims process and reduce processing times. 
According to VBA officials, VBMS is intended to convert existing paper-based claims folders into electronic claims folders that will allow VBA employees electronic access to claims and evidence. Once completed, VBMS will allow veterans, physicians, and other external parties to submit claims and supporting evidence electronically. VBMS is currently being piloted in four VA regional offices. Although the most recent VBMS operating plan calls for national deployment of VBMS to start in 2012, VBA officials told us that VBMS is not yet ready for national deployment, citing delays in scanning claims folders into VBMS as well as other software performance issues. According to VBA officials, the Claims Organizational Model and VBMS will work together to reduce processing times and help VA process veterans’ claims within 125 days by 2015. Although VBMS began its pilot in 2010, VBA has not yet reported on how VBMS has affected processing times. These ongoing efforts should be driven by a robust, comprehensive plan; however, when we reviewed VBA’s backlog reduction plan, we found that it fell short of established criteria for sound planning. Specifically, VBA provided us with several documents, including a PowerPoint presentation and a matrix that provided a high-level overview of over 40 initiatives, but could not provide us with a robust plan that tied together the group of initiatives, their interrelationships, and subsequent impact on claims and appeals processing times. Although there is no established set of requirements for all plans, components of sound planning are important because they define what organizations seek to accomplish, identify specific activities to obtain desired results, and provide tools to help ensure accountability and mitigate risks. 
Some of VBA’s planning documents identify problems, summarize the overall purpose and goals of the redesign effort, and include some general estimates of project completion dates for some of the initiatives, as well as identify resources for managing the overall implementation efforts. However, the planning documents lack key elements of results-oriented planning. For example, they do not identify implementation risks or strategies to address them. In addition, the planning documents do not include performance goals, measures to assess the effectiveness of each initiative, or their impact on claims and appeals processing timeliness. VBA officials pointed out to us the challenges in isolating the impact of any one initiative on processing times. Nonetheless, sound practices require assessing the effectiveness of each initiative. Conclusions VA provides a critical benefit to veterans who have incurred disabilities as a result of their military service. For years, VA’s disability claims and appeals processes have received considerable attention as VA has struggled to process disability compensation claims in a timely fashion. Despite this attention, VA continues to wrestle with several ongoing challenges—some of which VA has little or no control over—that contribute to lengthy processing timeframes. For instance, the number and complexity of VA claims received has increased. And that number is projected to continue to increase as 1 million servicemembers become veterans over the next 5 years due to the drawdown of troops from a decade of conflict in Afghanistan and Iraq. Moreover, the evidence gathering phase, which took over 5 months (157 days) on average in fiscal year 2011, continues to worsen in 2012, partly due to difficulties in obtaining military service records for National Guard and Reserve members and medical records from SSA, according to VBA officials. 
While recent process and technology improvements hold some promise, without improved evidence gathering, VBA may struggle to meet its goal of processing all compensation claims within 125 days by 2015. Although VBA is attempting to address processing challenges through various improvement initiatives, without a comprehensive plan to strategically manage resources and evaluate the effectiveness of these efforts, the agency risks spending limited resources on initiatives that may not speed up disability claims and appeals processes. This, in turn, may force veterans to continue to wait months and even years to receive compensation for injuries incurred during their service to the country. Recommendations for Executive Action We recommend the Secretary of Veterans Affairs direct the Veterans Benefits Administration to: 1. Develop improvements for partnering with relevant federal and state military officials to reduce the time it takes to gather military service records from National Guard and Reserve sources. 2. Develop improvements for partnering with Social Security Administration officials to reduce the time it takes to gather medical records. 3. Ensure the development of a robust backlog reduction plan for VBA’s initiatives that, among other best practice elements, identifies implementation risks and strategies to address them and performance goals that incorporate the impact of individual initiatives on processing timeliness. Agency Comments and Our Evaluation VA provided us with comments on a draft of this report, which are reprinted in appendix IV. In its comments, VA stated it generally agreed with our conclusions and concurred with our recommendations, and summarized efforts that are planned or underway to address the recommendations. 
Specifically, VA agreed with our recommendation to partner with relevant federal and state military officials to develop improvements to reduce the time it takes to gather military service records for National Guard and Reserve members. VA stated it has recently initiated several interagency efforts to improve receipt of military service records. According to VA, on December 3, 2012, the joint VBA and DOD Disability Claims Reduction Task Force met to begin to evaluate the process to request records, among other issues, with the aim of improving the timeliness of record exchanges between the two agencies. In addition, VA stated that the joint VA-DOD Virtual Lifetime Electronic Record initiative is focused on developing a complete electronic health record for each servicemember that will be transmitted to VA upon the servicemember’s military discharge, including National Guard and Reserve members. VA identified a targeted completion date of November 2013. We believe these initiatives are heading in the right direction toward improving the timeliness of responses to VA requests for National Guard and Reserve records. VA agreed with our recommendation to work with SSA officials to develop improvements to reduce the time it takes to gather SSA medical records. VA stated that it is working with SSA to pilot a web-based tool to provide VA staff a secure, direct communication with SSA staff and to automate VA’s requests for SSA medical records. VA officials did not mention this pilot during the course of our data collection and it was not included on the agency’s list of efforts to improve claims and appeals processing initiatives provided to us. VA identified a targeted completion date of November 2013. VA agreed with our recommendation to develop a robust backlog plan for VBA’s initiatives that, among other elements, identifies implementation risks and strategies as well as performance goals that incorporate the impact of individual initiatives on processing timeliness. 
VA describes a number of approaches it has taken to address our recommendation. Most relevant are the Transformation Plan, which was provided to us during the data collection phase and which we determined fell short of established criteria for sound planning, and the Operating Plan, which was not mentioned during the course of our data collection. According to VA, the operating plan, currently under development, will focus on: (1) integration of people, process, and technology initiatives, (2) identification of new ways to improve efficiency and reengineer the claims process, (3) efforts to automate the current paper-based claims process, and (4) the measurement process. However, it is unclear at this time how the key elements of the operating plan will better position VA to address our recommendation. Moreover, without further information on how the operating plan will focus on the measurement process, it is difficult for us to determine at this time if VA will sufficiently address our recommendation to include performance goals that incorporate measuring the impact of individual initiatives on processing timeliness. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 28 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be made available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
Appendix I: Objectives, Scope, and Methodology This report examines the (1) factors that contribute to lengthy processing times for disability claims and appeals at the Department of Veterans Affairs (VA) and (2) status of the Veterans Benefits Administration’s (VBA) recent efforts to improve disability claims and appeals processing timeliness. To examine factors that contribute to lengthy processing times for disability claims and appeals, we reviewed past GAO and VA Office of Inspector General (OIG) reports and other relevant studies on VA’s claims and appeals processing, such as the Veterans’ Disability Benefits Commission’s 2007 report, Honoring the Call to Duty: Veterans’ Disability Benefits in the 21st Century. We reviewed congressional testimonies, federal statutes, relevant court decisions, and policy manuals and documents, including VA’s Web Automated Reference Manual System to understand the program rules and procedures that govern the claims and appeals processes. We also analyzed disability compensation and pension rating claims processing data from VBA’s internal dashboard and data on claims processing resources from VBA’s Personnel and Accounting Integrated Database. Moreover, we interviewed VBA officials, including VBA area directors, the Office of Field Operations, Compensation Service, and the Office of Performance Analysis and Integrity to gain a national perspective on factors affecting the timeliness of claims and appeals processing. To identify factors within VA regional offices that contribute to lengthy processing times, we conducted reviews of five VA regional offices—Atlanta, Georgia; Houston, Texas; Los Angeles, California; New York, New York; and Philadelphia, Pennsylvania. These reviews consisted of interviewing regional office management and claims processing staff and supervisors, reviewing workload management and performance documents, and reviewing written notifications sent to veterans. 
We did not conduct case file reviews in these regional offices. We also spoke with representatives of Veterans Service Organizations (VSO) in Los Angeles and in Washington, D.C. to gather perspectives of veterans’ representatives on challenges in the claims and appeals processes. To examine the status of VBA’s recent efforts to improve disability claims and appeals processing timeliness, we reviewed past GAO and VA OIG reports and congressional testimonies. We conducted interviews with officials from the VBA Implementation Center, Office of Strategic Planning and Office of Field Operations. Also, during our review of five VA regional offices, we interviewed claims and appeals processing staff about their experiences with VBA’s initiatives. To identify which VBA efforts were designed to improve timeliness, we reviewed documents providing an overview of the efforts, which included documentation identifying the purpose of each effort. We requested additional information for those initiatives that VBA identified as having the purpose of reducing disability claims and appeals processing times. Furthermore, we selected a sample of nine of VBA’s efforts identified as having the purpose of reducing disability claims and appeals processing times for further review primarily based on interviews with VBA officials and a review of recent VA testimonies. In addition, we spoke with representatives of national VSOs to gather their perspectives on the impact on the veterans they represent of recent and ongoing efforts. (For more information on VBA’s improvement efforts, see appendix III). Analysis of VBA Claims and Appeals Processing Timeliness and Resource Data To assess VBA disability claims workload and processing timeliness, we obtained monthly regional office and national data from VBA’s internal dashboard, which aggregates key metrics used to assess performance from a variety of data sources into one integrated tool. 
We limited our analysis to timeliness and workload metrics used to measure the performance of the disability compensation and pension rating claims and appeals processing. We analyzed data from fiscal year 2009 through August 2012. To verify the reliability of VBA’s internal dashboard, we conducted interviews with officials from VBA’s Office of Performance Analysis and Integrity about quality control procedures of VBA’s internal dashboard and practices used to extract timeliness and workload data from underlying data sources. We relied on past GAO data reliability assessments on the Veterans Services Network (VETSNET) system and accompanying VETSNET Operations Reports (VOR), and the Veterans Appeals Control and Locator System (VACOLS), where enterprise-wide workload and timeliness of claims and appeals processing data, respectively, are stored and extracted into the internal dashboard tool. We found the dashboard data to be reliable for reporting regional office and national workload and timeliness trends. To analyze VBA’s claims and appeals processing resources, we obtained data from VA’s Personnel and Accounting Integrated Database and accompanying ProClarity system. We limited our analysis to data on VBA job titles that typically include claims or appeals processing responsibilities—Veterans Service Representatives (VSR), Rating Veterans Service Representatives (RVSR), and Decision Review Officers (DRO)—from fiscal years 2009 through 2012. We reviewed data on full-time equivalents (FTE), number of employees, and personnel actions. To assess the reliability of these data, we interviewed officials in VBA’s Office of Human Resources about practices to record personnel actions, quality control procedures conducted within the Office of Human Resources to ensure the quality of the data, as well as potential limitations to the data. We found the data provided to us by the Office of Human Resources reliable for reporting on claims and appeals processing resources. 
Selection of VA Regional Offices for Review We selected five VA regional offices for review to gather information on the challenges these selected regional offices face in not only processing disability claims and appeals in a timely fashion, but also in implementing initiatives designed to address processing timeliness. Our five selected sites, which account for 15 percent of all disability compensation and pension rating claims, were Atlanta, Georgia; Houston, Texas; Los Angeles, California; New York, New York; and Philadelphia, Pennsylvania. We conducted site visits with the Los Angeles, Philadelphia, and Atlanta regional offices and teleconferences with the New York and Houston regional offices. We selected regional offices for review based on the following criteria: Geography: We selected at least one VA regional office in each of VBA’s four areas. The New York and Philadelphia regional offices are in the Eastern Area, Atlanta is in the Southern Area, Houston is in the Central Area, and Los Angeles is in the Western Area. Size of metropolitan area: We limited our selection process to regional offices in the Top 15 Metropolitan Statistical Areas (MSA) according to 2010 Census data, due to concerns about the ability of these offices to recruit and retain staff and upper management. Workload: We selected VA regional offices with medium or high disability compensation and pension rating claims workloads. All regional offices in the top 15 MSAs had more than 10,000 disability compensation and pension rating claims pending. According to VBA’s internal dashboard, the median regional office had 8,850 disability compensation and pension rating claims pending as of April 2012. The sites we selected had workloads ranging from 15,874 to 37,805 pending disability compensation and pension rating claims in April 2012. 
Timeliness: To examine the timeliness of claims processing at VA regional offices, we examined two metrics: the percentage of backlogged disability compensation and pension rating claims (defined as claims pending over 125 days) and the average number of days a disability compensation and pension rating claim was pending. According to VBA’s internal dashboard, 65.6 percent of disability compensation claims nationally were pending over 125 days in April 2012. For the regional offices we selected, the percent of backlogged claims ranged from 61.6 percent to 79.9 percent. Claims were pending an average of 243.2 days nationally. For the regional offices we selected, the average days pending ranged from 219.6 days to 325.3 days. We conducted this performance audit from March 2012 through December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Veteran Notification during the Claims and Appeals Processes After VBA receives a disability claim, it generally sends notifications to veterans to either help gather evidence or to let them know that a decision has been made (see fig. 9). Throughout the claims process, VBA sends a standard form letter at the 60-, 120-, and 180-day marks, as applicable, to inform the veteran that VBA has received the claim and that the claim is still pending. During the initiating development phase, VBA sends the Veteran Claims Assistance Act (VCAA) letter acknowledging receipt of the claim, explaining the claims process, and outlining what additional information is needed and what steps VBA will take to substantiate the claim. 
Much of the notification to veterans occurs during the evidence gathering phase. During this phase, VBA sends the veteran a notification every time VBA makes an attempt to obtain additional evidence or when attempts to obtain evidence have been unsuccessful. Finally, at the end of the award processing phase, a decision letter is sent to the veteran. During the appeals process, VBA generally reaches out to veterans when additional evidence or the veteran’s input is needed, or to announce and explain a decision. The appeals process generally begins when a veteran disagrees with VA’s decision on their disability claim, and files a Notice of Disagreement (see fig. 10). If the veteran does not specify the type of review in the Notice of Disagreement, VBA sends an election letter that details the differences between a traditional and DRO review and asks the veteran to choose a review process. Once a veteran indicates the type of review desired, VBA sends a process letter that explains the review process chosen and details the veteran’s rights throughout the appeals process. Then, if additional evidence is needed to make a decision, such as another Veterans Health Administration (VHA) examination, VBA sends notifications to the veteran throughout the evidence gathering process, similar to the initial claims process. Once all additional evidence is gathered, VBA will review the case. If VBA grants the appeal in full, a decision letter is sent. If VBA denies the appeal or does not grant the appeal in full, it sends a Statement of the Case (SOC) explaining the decision. At this point, the veteran has the option to send in additional evidence, which VBA must consider, and if this evidence does not lead to a full grant, then VBA must send a Supplemental Statement of the Case (SSOC) explaining its decision. 
In addition to receiving written notifications during the claims and appeals processes, veterans can proactively learn about the status of their claims in several ways. For example, veterans can use eBenefits, a website that VA and the Department of Defense launched in 2009 to help servicemembers and veterans manage their benefits and personal information. Veterans can also speak with staff in VA’s national call center or can contact VA through VA’s web-based Inquiry Routing and Information System (IRIS). Veterans can also visit a VA regional office to speak with VA public contact staff. Appendix III: Selected VBA Efforts to Improve Claims and Appeals Timeliness According to VBA, there are currently over 40 ongoing improvement efforts that are tracked by VBA’s Implementation Center. Below is a list of 15 improvement efforts we identified as having a stated purpose of improving timeliness of claims or appeals processing, based on a review of VA documents and interviews with VBA officials. Appendix IV: Comments from the Department of Veterans Affairs Appendix V: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, Brett Fallavollita (Assistant Director); Lucas Alvarez; Michelle Bracy; and Ryan Siegel made key contributions to this report. In addition, key support was provided by James Bennett, Robert Campbell, Susan Chin, James Rebbe, Almeta Spencer, Kathleen van Gelder, and Walter Vance. Related GAO Products VA Disability Compensation: Actions Needed to Address Hurdles Facing Program Modernization. GAO-12-846. Washington, D.C.: September 10, 2012. VA Enhanced Monthly Benefits: Recipient Population Is Changing, and Awareness Could Be Improved. GAO-12-153. Washington, D.C.: December 14, 2011. Veterans Disability Benefits: Clearer Information for Veterans and Additional Performance Measures Could Improve Appeal Process. GAO-11-812. Washington, D.C.: September 29, 2011. 
Information Technology: Department of Veterans Affairs Faces Ongoing Management Challenges. GAO-11-663T. Washington, D.C.: May 11, 2011. GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. Veterans’ Disability Benefits: Expanded Oversight Would Improve Training for Experienced Claims Processors. GAO-10-445. Washington, D.C.: April 30, 2010. Veterans’ Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213. Washington, D.C.: January 29, 2010. Social Security Disability: Additional Outreach and Collaboration on Sharing Medical Records Would Improve Wounded Warriors’ Access to Benefits. GAO-09-762. Washington, D.C.: September 16, 2009. Veterans’ Benefits: Increased Focus on Evaluation and Accountability Would Enhance Training and Performance Management for Claims Processors. GAO-08-561. Washington, D.C.: May 27, 2008. Veterans’ Disability Benefits: Claims Processing Challenges Persist, while VA Continues to Take Steps to Address Them. GAO-08-473T. Washington, D.C.: February 14, 2008.
For years, VA has struggled with an increasing workload of disability compensation claims. The average time to complete a claim was 188 days in fiscal year 2011, and VA expects an increase in claims received as 1 million servicemembers leave military service over the next 5 years. As GAO and other organizations have previously reported, VA has faced challenges in reducing the time it takes to decide veterans’ claims. GAO was asked to review these issues. Specifically, this report examines (1) the factors that contribute to lengthy processing times for disability claims and appeals, and (2) the status of VBA’s recent efforts to improve disability claims and appeals processing timeliness. To do this, GAO analyzed VBA performance data and program documents, reviewed relevant studies and evaluations, met with staff from five VA regional offices, and interviewed VBA officials and Veterans Service Organizations. A number of factors—both external and internal to the Veterans Benefits Administration (VBA)—have contributed to the increase in processing timeframes and subsequent growth in the backlog of veterans’ disability compensation claims. As the population of new veterans has swelled in recent years, the annual number of claims received by VBA has gone up. Compared to the past, these claims have a higher number of disabling conditions, and some of these conditions, such as traumatic brain injuries, make their assessment complex. Moreover, due to new regulations that have established eligibility for benefits for new diseases associated with Agent Orange exposure, VBA adjudicated 260,000 previously denied and new claims. Beyond these external factors, issues with the design and implementation of the compensation program have contributed to timeliness challenges. For example, the law requires the Department of Veterans Affairs (VA) to assist veterans in obtaining records that support their claim. 
However, VBA officials said that lengthy timeframes in obtaining military records—particularly for members of the National Guard and Reserve—and Social Security Administration (SSA) medical records impact VA’s duty to assist, possibly delaying a decision on a veteran’s disability claim. As a result, the evidence gathering phase of the claims process took an average of 157 days in 2011. Further, VBA’s paper-based claims processing system involves multiple hand-offs, which can lead to misplaced and lost documents and can cause unnecessary time delays. Concerning timeliness of appeals, VBA regional offices have shifted resources away from appeals and toward claims in recent years, which has led to lengthy appeals timeframes. VBA is currently taking steps to improve the timeliness of claims and appeals processing; however, prospects for improvement remain uncertain because timely processing remains a daunting challenge. VBA is using contractors to handle some aspects of the claims process, and is also shifting some workload between regional offices. Also, VBA is modifying and streamlining certain claims and appeals processing procedures for veterans who opt to participate in these initiatives in exchange for an expedited decision. For example, veterans receive expedited processing when they submit a claim that is certified as having all required evidence. Not many veterans have elected this option, but VA is making adjustments to increase its attractiveness. In addition, VBA is trying to decrease the amount of time it takes to gather medical evidence. For example, VBA recently encouraged medical providers to use a standardized form when responding to VBA’s request for information. However, results of this initiative have been mixed. 
VBA is also taking steps to streamline the claims process, including implementing initiatives to create (1) standardized language for decision letters sent to veterans, (2) specialized teams that process claims based on level of complexity, and (3) a paperless claims system. According to VBA officials, these efforts will help VA process veterans’ claims within 125 days by 2015. However, the extent to which VA is positioned to meet this ambitious goal remains uncertain. Specifically, VBA’s backlog reduction plan—its key planning document—does not articulate performance measures for each initiative, including their intended impact on the claims backlog. Furthermore, VA has not yet reported on how these efforts have affected processing times, which raises concern given the mixed results that have emerged to date.
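The timeliness measures cited throughout this report (the percent of rating claims backlogged, meaning pending over 125 days, and the average number of days claims have been pending) reduce to simple computations over the ages of pending claims. A minimal sketch, assuming a list of claim ages in days; the function names are illustrative, not VBA's, and VBA's internal dashboard remains the authoritative source for these figures:

```python
# Illustrative computation of the two timeliness measures discussed in
# this report: the percent of claims "backlogged" (pending more than
# 125 days) and the average number of days claims have been pending.
# Function names are hypothetical, for illustration only.

BACKLOG_THRESHOLD_DAYS = 125

def backlog_percent(days_pending):
    """Percent of pending claims older than the 125-day backlog threshold."""
    backlogged = sum(1 for d in days_pending if d > BACKLOG_THRESHOLD_DAYS)
    return 100.0 * backlogged / len(days_pending)

def average_days_pending(days_pending):
    """Average age, in days, of all pending claims."""
    return sum(days_pending) / len(days_pending)

# Example with five pending claims of varying age:
ages = [40, 130, 200, 90, 300]
print(backlog_percent(ages))       # 60.0 (3 of 5 claims over 125 days)
print(average_days_pending(ages))  # 152.0
```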
Background Established by the National Housing Act of 1934, the FHA single-family mortgage insurance program helps low- and moderate-income families, minorities, and first-time home buyers become homeowners by providing insurance on single-family mortgage loans. The mortgage insurance allows private lenders to provide qualified borrowers with favorable mortgage terms, such as a 3-percent down payment, and generally compensates lenders for nearly all of the losses incurred on such loans. To support the program, FHA imposes up-front and annual mortgage insurance premiums on home buyers. FHA’s single-family mortgage program currently does not require a federal credit subsidy to operate. The Mutual Mortgage Insurance Fund (MMI), which supports this program, is required by law to contain sufficient reserves and funding to cover the estimated future payment of claims on foreclosed mortgages and other costs. FHA’s current mortgage limits vary across the country, from $144,336 in low-cost areas to $261,609 in high-cost areas. FHA uses management and marketing contractors to perform certain maintenance on foreclosed properties and to sell the properties to home buyers or investors. VA’s mortgage loan program is an entitlement program that provides eligible veterans with housing benefits. The VA guaranty program allows mortgage lenders to extend loans to eligible veterans on favorable terms, such as a no-down-payment loan, and provides lenders with substantial financial protections against the losses associated with extending such mortgages. To help support the program, veterans are required to pay a funding fee of 1.25 to 3.0 percent of the loan amount. In addition, the program is financed by credit subsidy appropriations to the Veterans Housing Benefit Program Account. RHS operates a guaranteed loan program to help rural Americans with low and moderate incomes purchase single-family homes. 
The RHS guaranteed loan program does not require borrowers to make down payments or pay monthly mortgage insurance fees. To help offset losses to the government associated with providing financial protections to lenders who make RHS mortgages, RHS currently requires lenders to pay a guarantee fee of 2 percent of the mortgage principal loan amount, which they may pass on to borrowers. Fannie Mae and Freddie Mac are private corporations chartered by Congress to provide a continuous flow of funds to mortgage lenders and borrowers. To fulfill their responsibilities of stabilizing the nation’s mortgage markets and expanding homeownership opportunities, Fannie Mae and Freddie Mac purchase mortgages from lenders across the country and package them into mortgage-backed securities. Most mortgages that Fannie Mae and Freddie Mac purchase are conventional mortgages (i.e., mortgages with no government mortgage insurance or guarantees). They purchase single-family mortgages up to the “conforming loan limit,” which is now set at $300,700. Fannie Mae and Freddie Mac typically require mortgage insurance from private companies on any mortgage purchases with loan-to-value ratios that exceed 80 percent. Fannie Mae and Freddie Mac finance their mortgage purchases through borrowing or issuing mortgage-backed securities that are sold to investors. Mortgage servicers, such as large mortgage finance companies or commercial banks, typically service mortgages insured or guaranteed by FHA, VA, or RHS or purchased by Fannie Mae or Freddie Mac. Mortgage servicers do not necessarily finance the mortgages they service, but rather service mortgages for a fee on behalf of those entities that own mortgages, such as lenders, Fannie Mae, or Freddie Mac. Large servicers typically service FHA, VA, Fannie Mae, and Freddie Mac mortgages, and some service RHS mortgages. 
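The fees described above are flat percentages of the loan principal, so the dollar amounts follow directly from the stated rates. A brief sketch; the function name is hypothetical, and the rates are those stated in this report:

```python
# Illustrative arithmetic for the percentage-of-principal fees described
# above: the VA funding fee (1.25 to 3.0 percent of the loan amount) and
# the RHS guarantee fee (2 percent of the mortgage principal). The
# function name is hypothetical, for illustration only.

def fee_amount(loan_amount, rate_percent):
    """Dollar amount of a flat percentage-of-principal fee."""
    return loan_amount * rate_percent / 100.0

# Fees on a $100,000 loan:
print(fee_amount(100_000, 2.0))   # 2000.0 (RHS guarantee fee)
print(fee_amount(100_000, 1.25))  # 1250.0 (low-end VA funding fee)
print(fee_amount(100_000, 3.0))   # 3000.0 (high-end VA funding fee)
```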
Mortgage servicing involves administrative activities such as collecting monthly mortgage payments, maintaining escrow accounts for property taxes and hazard insurance, and forwarding proper payments to purchasers of the loans. Mortgage servicers also are generally responsible for “loss mitigation” (see fig. 1) and for conducting foreclosure proceedings. Table 1 shows the number of FHA, VA, RHS, Fannie Mae, and Freddie Mac foreclosures that were ongoing in 2000. Title insurance companies issue title insurance policies to protect purchasers and lenders against unknown defects of title or against a loss due to any lien or encumbrance that has not been disclosed when a property is purchased or acquired. Title policies typically cover such matters as defective or lost documentation, mistakes, maladministration, or forgery. In addition, title policies typically list exclusions from title coverage for certain defects of title. The Organizations Conduct Foreclosures within State and Federal Guidelines State foreclosure laws establish the general framework and processes that the organizations and mortgage servicers must follow when foreclosing on defaulted mortgages. These state laws and the federal bankruptcy code establish protections for residents and minimum time frames for conducting foreclosures. Within the framework of applicable state and federal laws, the organizations have developed specific procedures for conducting foreclosures. State and Federal Laws Establish Foreclosure Rules State foreclosure laws establish certain procedures that servicers must follow in conducting foreclosures and establish minimum time periods for various aspects of the foreclosure process (see fig. 2). Under state laws, servicers are required to provide to borrowers and the public notices associated with the initiation of the foreclosure process. 
For example, servicers may be directed to mail a notice to the borrower, post a notice of the foreclosure on the affected property, and publish notice of the foreclosure in local newspapers. State laws also generally require servicers or public officials to conduct foreclosure sales. At the foreclosure sale, the servicer purchases the property by bidding the amount of the outstanding debt or the property's fair market value. Then servicers, as described in this report, transfer or convey the properties to the organizations for sale or sell the properties themselves. Several states have enacted "redemption" laws that give borrowers the opportunity to match the winning bids from the foreclosure sale and reclaim their properties. During redemption periods, the organizations or servicers are generally not permitted to pursue additional foreclosure proceedings, such as evicting property residents or securing the properties. According to a Freddie Mac official, state redemption periods range from 10 days to 9 months. If properties are vacant, some state laws may permit the shortening of redemption periods and allow the organizations or servicers to take control of foreclosed properties. After foreclosure sales and applicable redemption periods, the organizations or servicers typically proceed with eviction proceedings if foreclosed properties are not already vacant. State laws generally govern eviction proceedings and provide certain protections to the residents of foreclosed properties. For example, state laws require servicers or the organizations to notify property residents before the initiation of an eviction lawsuit; in some states, notification periods range from 3 to 7 days. Homeowners may also file for bankruptcy protection under federal law, a procedure that can extend foreclosure proceedings. Filing a bankruptcy petition automatically stays any pending or planned foreclosure proceedings.
Generally, a foreclosure conducted in violation of the stay is void, and the lender can be liable for damages. Under certain conditions, courts may lift stays and allow foreclosure proceedings to resume. Comparison of the Organizations’ Foreclosure and Property Sale Procedures Within the framework of state and federal laws, the organizations have established procedures for initiating foreclosure proceedings, conducting foreclosures, and selling foreclosed properties to home buyers or investors. Some procedures differ, such as the organizations’ criteria for initiating foreclosures, while others are similar. This section summarizes the organizations’ foreclosure procedures; two key differences (FHA’s approach to foreclosed property custody and FHA and VA title evidence requirements) are discussed more fully later in the report. Criteria for Initiating Foreclosures Differ Table 2 shows the criteria that the organizations have established for initiating foreclosure proceedings. Fannie Mae and Freddie Mac direct their servicers to initiate foreclosure proceedings at earlier stages than FHA, VA, and RHS. According to Fannie Mae officials, while the organization directs servicers to proceed with foreclosure at an earlier stage than the government organizations, servicers are also required to continue pursuing loss mitigation efforts. Fannie Mae officials said that the organization directs foreclosures at an earlier stage to help minimize losses and because borrowers are more likely to be receptive to loss mitigation efforts when foreclosure is pending. Fannie Mae officials said that pursuing loss mitigation and foreclosure proceedings simultaneously is advantageous because it enables more borrowers to retain their homes and reduces losses in the event loss mitigation is not successful. Freddie Mac officials also stressed that loss mitigation efforts continue even after the initiation of foreclosure proceedings.
A Freddie Mac official told us that in some cases the organization, servicers, and borrowers have worked out loss mitigation agreements on the date of foreclosure sales. FHA, VA, and RHS officials said that they have public missions and obligations to their customers, such as low-income Americans, veterans, and rural residents, and take additional time to initiate foreclosure proceedings. Like Fannie Mae and Freddie Mac, each of these organizations also encourages its servicers to continue pursuing loss mitigation efforts after the initiation of foreclosure proceedings. Table 3 summarizes the procedures that the organizations have established to conduct foreclosures. As shown in the first row of table 3, all of the organizations expect servicers to follow established foreclosure time frames, which can vary by state. Organization officials said that they have analyzed the foreclosure laws and bankruptcy laws in each state and collected data on past foreclosure proceedings to determine how long it should take servicers to complete the foreclosure process in each state. The organizations may reward servicers financially for meeting or beating these deadlines and may impose financial penalties where servicers fail to meet the guidelines. For example, FHA generally does not compensate servicers for their interest expenses if they exceed the established deadlines. As shown in the second row of table 3, the organizations have developed differing approaches regarding the law firms that mortgage servicers use to conduct foreclosures. In certain states, Fannie Mae and Freddie Mac have identified law firms that are available to servicers in conducting foreclosures. Referred to as “retained” or “designated” attorneys, these law firms conduct all of the legal procedures related to foreclosures.
Fannie Mae and Freddie Mac officials said that the designated attorneys have significant experience in foreclosure work and can ensure that the process is completed in the most efficient manner possible. VA, FHA, and RHS do not designate attorneys but rather permit servicers to choose the law firms that they will employ to carry out foreclosures. According to FHA, while the use of designated attorneys may be more efficient than allowing servicers to choose their own law firms, as a mortgage insurer FHA lacks the prerogative to designate attorneys for its servicers. Also, as noted in table 3, the organizations have established different bidding instructions for servicers at foreclosure sales. VA instructs servicers to bid the property's fair market value less foreclosure expenses, while Freddie Mac instructs servicers to bid the outstanding debt on foreclosed properties or the fair market value. In some cases, the properties’ fair market value may be less than the outstanding debt. While RHS does not require servicers to follow specific bidding instructions, the organization allows servicers to bid 85 percent of the fair market value when the property value is less than the outstanding loan balance. According to organization officials, these instructions, by allowing bids below the outstanding debt or fair market value, are designed to encourage third parties such as investors to bid at foreclosure sales, thus permitting the organizations to avoid the costs associated with selling such properties. However, organization and servicer officials estimate that only about 3 to 5 percent of foreclosed properties are sold to third parties, because the properties are frequently occupied and investors are not allowed to inspect the properties unless they are vacant. Freddie Mac officials also said that bidding below a property’s outstanding debt allows financial institutions to pursue deficiency judgments against defaulted borrowers.
In contrast, Fannie Mae and FHA generally instruct servicers to bid the outstanding debt plus foreclosure expenses, an amount that may be significantly higher than the fair market value. Although Fannie Mae and FHA bidding instructions deter third-party bids and result in the organizations themselves selling the properties, officials said that the instructions were cost-effective. Fannie Mae and FHA officials said that the costs associated with enticing third-party bids at foreclosure sales, such as the costs for conducting appraisals to determine fair market value, were not justified by the relatively low percentage of successful third-party bids. FHA officials said that they rarely pursue deficiency judgments because most defaulted borrowers have minimal, if any, recoverable assets. FHA Property Custody Procedures Delay Maintenance and Marketing VA, RHS, Fannie Mae, and Freddie Mac follow a similar approach in that the organizations, or servicers in the case of RHS, have custody of and are responsible for maintaining foreclosed properties from the time of the foreclosure sale until the properties are sold to home buyers or investors. FHA, on the other hand, divides foreclosed property custody between its servicers and its management and marketing contractors from the time of the foreclosure sale until the property is sold to purchasers. FHA procedures (1) prevent the timely initiation of critical property maintenance and marketing, as is practiced by the other organizations; (2) can delay conveyance to FHA management and marketing contractors due to time-consuming procedures necessary to perform maintenance that exceeds established cost ceilings; and (3) result in disputes between FHA servicers and management and marketing contractors after property conveyance. Because of delayed property maintenance and marketing strategies, FHA may receive lower property sales prices than would otherwise be the case. 
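The bidding instructions described above can be compared side by side. The sketch below is purely illustrative: the dollar figures are hypothetical, and the Freddie Mac rule is interpreted as taking the lower of the outstanding debt and the fair market value, which is one reading consistent with the report's observation that these instructions allow bids below the outstanding debt.

```python
# Illustrative sketch of the organizations' foreclosure-sale bidding
# instructions. All dollar amounts are hypothetical examples, and the
# Freddie Mac rule is one assumed reading of "the outstanding debt or
# the fair market value" (the lower of the two).

def bid_va(fair_market_value, foreclosure_expenses):
    """VA: bid the property's fair market value less foreclosure expenses."""
    return fair_market_value - foreclosure_expenses

def bid_freddie_mac(outstanding_debt, fair_market_value):
    """Freddie Mac (assumed reading): bid the lower of the outstanding
    debt and the fair market value."""
    return min(outstanding_debt, fair_market_value)

def bid_rhs(outstanding_debt, fair_market_value):
    """RHS: servicers may bid 85 percent of fair market value when the
    property value is below the outstanding loan balance."""
    if fair_market_value < outstanding_debt:
        return 0.85 * fair_market_value
    return outstanding_debt

def bid_fha_fannie(outstanding_debt, foreclosure_expenses):
    """FHA and Fannie Mae: bid the outstanding debt plus foreclosure
    expenses, which may exceed fair market value."""
    return outstanding_debt + foreclosure_expenses

# Hypothetical property: $95,000 fair market value, $100,000 outstanding
# debt, $5,000 in foreclosure expenses.
print(bid_va(95_000, 5_000))            # 90000
print(bid_freddie_mac(100_000, 95_000)) # 95000
print(bid_rhs(100_000, 95_000))         # 80750.0
print(bid_fha_fannie(100_000, 5_000))   # 105000
```

The example makes the contrast concrete: under these assumptions, VA, Freddie Mac, and RHS instructions can produce bids below the outstanding debt, leaving room for third-party bidders, while the FHA and Fannie Mae instruction produces the highest bid and effectively deters third parties.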
FHA officials have recently considered proposals to streamline FHA’s foreclosed property custody procedures. The Organizations’ Approaches to Property Custody Differ As shown in figure 3, Fannie Mae, Freddie Mac, and VA maintain unified custody of foreclosed properties from the time of the foreclosure sale until the properties are sold to home buyers or investors. Fannie Mae and Freddie Mac require servicers to convey properties to them within 24 hours of foreclosure sales, while VA generally requires servicers to convey within 15 days. The organizations and their vendors or contractors are responsible for overseeing properties during redemption periods, evicting property residents, performing necessary property maintenance, and selling the properties. Although RHS’s approach differs in that servicers never convey foreclosed properties, property custody is unified. RHS servicers are responsible for overseeing properties throughout the foreclosure process, maintaining the properties, and selling them. RHS establishes a 6-month deadline for servicers to sell foreclosed properties, measured from the date of the foreclosure sale, and will generally not compensate servicers for any liquidation expenses incurred beyond the deadline. In contrast, FHA divides property custody and maintenance responsibilities between its servicers and contractors, which operate largely independently of one another. FHA requires servicers to oversee properties during postforeclosure sale redemption periods, to evict residents if properties are occupied, and to perform critical maintenance on properties (also known as “preservation and protection”). Under FHA procedures, servicers are to initiate preservation and protection work on the date that they obtain “possession and control” of the properties, typically the date that tenants are evicted or when the servicers determine that the property is vacant (see fig. 4). 
Servicers have 30 days to complete the preservation and protection work, convey the properties to FHA’s management and marketing contractors, and then file claims with FHA to recover the costs associated with the foreclosures. FHA typically reimburses servicers for the costs associated with performing preservation and protection work. The management and marketing contractors also have ongoing preservation and protection responsibilities that can overlap those of the servicers (see table 4). FHA pays the contractors a fee, generally ranging from 6 to 10 percent of the net sales price, as compensation for their work. FHA usually does not reimburse contractors for the costs of performing basic maintenance; they are generally expected to cover such costs from the fees that they earn for selling the properties. FHA’s Divided Custody Prevents Timely Property Maintenance and Marketing With unified property custody, Fannie Mae, Freddie Mac, RHS, and VA are able to develop comprehensive and timely strategies that can help sell foreclosed properties quickly. For example, Fannie Mae and Freddie Mac officials said that with unified custody they can ensure that properties are inspected routinely, vacant properties are immediately secured, all needed maintenance and repairs are initiated promptly, and marketing strategies are developed at an early stage. VA and RHS officials we contacted expressed similar views about the benefits of unified property custody. In contrast, by dividing responsibility between servicers and contractors, FHA procedures prevent the initiation of all maintenance necessary to protect foreclosed properties and sell them quickly. 
Rather, FHA procedures can prevent the initiation of critical maintenance until after servicers convey foreclosed properties to management and marketing contractors, which can be 30 days or more after the possession and control date. Current FHA guidance does not require servicers to clean up exterior and interior debris left on properties unless it poses public health and safety risks. The presence of such nonhazardous debris (such as discarded furniture) for extended periods can reduce buyer interest in the properties and negatively affect neighborhoods. Property maintenance is further complicated by the fact that some local FHA offices require servicers to clean up all exterior debris, regardless of whether it is hazardous, while others require only hazardous debris removal. An FHA official we contacted said that FHA is working on regulations that will require servicers to remove all exterior debris. However, the FHA official said that the revised regulations will not require servicers to remove nonhazardous debris from property interiors. FHA procedures can also require a substantial amount of interpretation by servicers and management and marketing contractors. For example, servicers are required to determine whether debris left on foreclosed properties poses “immediate” health and safety risks and whether to remove such debris. If servicers determine that an object such as an abandoned vehicle does not pose an immediate health risk, they may decide to leave it on the property. An FHA official also said that because servicers are responsible for lawn maintenance but not for removing nonhazardous exterior debris, servicers would technically be within regulations if they cut the grass around debris left on the properties. An FHA official said that FHA’s new regulations will require servicers to remove exterior debris and cut the entire lawn. FHA’s divided approach to property custody also prevents the immediate development of marketing strategies.
Representatives from management and marketing contractors that we contacted said that they often do not learn about foreclosed properties until servicers convey the properties to the contractors, which could be months after the foreclosure sale. In contrast, VA, Fannie Mae, and Freddie Mac officials and RHS servicers can begin marketing strategies shortly after the foreclosure sale, since these organizations have established unified property custody procedures. FHA Has Time-Consuming Procedures for Reviewing Maintenance That Exceeds Established Cost Limits FHA’s divided approach to property responsibility and custody can also result in delays in conveyance when required preservation and protection work exceeds established cost ceilings. Fannie Mae, Freddie Mac, VA, and FHA have established locality-based cost ceilings for property maintenance. Although establishing controls over maintenance expenditures is important, FHA’s procedures for reviewing such proposed expenditures are more formal and time consuming than those of the other organizations. Fannie Mae, Freddie Mac, and VA officials have extensive information about properties within days of foreclosure sales and can act in a coordinated fashion with vendors and contractors to review proposals to exceed established maintenance costs as quickly as is feasible. Fannie Mae and Freddie Mac officials said that in cases in which required maintenance exceeds established limits, vendors call the organizations or submit fax requests. According to Fannie Mae and Freddie Mac officials, their staffs work closely with their vendors and generally review and make final decisions on requests within 1 or 2 days, frequently via E-mail or telephone call. A VA official said that its 46 district offices have maintenance cost guidelines that are set within prevailing local rates. 
According to the VA official, when contractors determine that maintenance will exceed established rates, they call VA officials or representatives for decisions, which are typically granted within a day or two. The VA official said that contractors are subsequently required to submit photographs or other evidence to support these expenditures and that VA performs audits to assess the appropriateness of these costs. Because FHA servicers and contractors act largely independently of one another, rather than in a coordinated fashion, FHA maintains a comparatively formal and time-consuming system for reviewing property maintenance proposals that exceed established cost ceilings. FHA expects servicers to complete preservation and protection work, including work exceeding established cost limits, within the 30-day period between possession and control and property conveyance. Servicers must submit written proposals to management and marketing contractors for review, and FHA regional officials must approve such requests as well. FHA allows the management and marketing contractors 10 days to review the servicers’ requests and to respond in writing. If contractors fail to respond in writing within 10 days, FHA regulations require servicers to follow up with the contractors until the servicers receive a written response. FHA, through its contractors, may also require servicers to obtain written bids from outside providers for work exceeding the established limits. For example, a senior FHA official said that servicers are required to obtain written bids when the proposed removal of hazardous exterior debris exceeds established cost ceilings (see fig. 5). According to FHA servicer representatives we contacted, obtaining permission from management and marketing contractors to do maintenance that exceeds FHA’s cost ceilings can delay conveyance.
Servicer representatives said that in some cases the management and marketing contractors did not respond to their requests on a timely basis, potentially preventing the servicers from completing necessary preservation and protection work within the 30-day deadline. We asked three large FHA servicers to provide data on their performance in conveying properties within the 30-day guidelines. Table 5 shows that these servicers conveyed properties within the deadlines less than 80 percent of the time on average in 2000 and 2001. Figure 6 provides an example of the types of delays that servicer representatives said can occur in attempting to obtain approval from management and marketing contractors to perform preservation and protection work that exceeds established cost limits. Management and marketing contractor representatives said they did not believe that timeliness was generally a problem, but added that sometimes servicers fail to provide required paperwork and that this can cause delays. FHA officials we contacted cited other factors as responsible for servicers’ failure to convey all foreclosed properties within 30 days. The FHA officials said that some servicers lack the staff necessary to ensure that properties are maintained and conveyed in accordance with FHA standards. FHA Servicer and Contractor Disputes Occur after Property Conveyance As shown in table 4, FHA’s management and marketing contractors are responsible for performing certain preservation and protection work on conveyed properties, including work that servicers fail to complete. Servicer compliance with FHA standards for completing required preservation and protection work has been a source of continuing conflicts among FHA, contractors, and servicers.
Senior FHA officials and several management and marketing contractor representatives that we contacted questioned servicer compliance with required preservation and protection work on conveyed properties and said that the failure to perform such work can negatively affect foreclosed property values and marketability. Representatives of several large FHA servicers that we contacted said that they generally perform preservation and protection work within established guidelines. Although FHA has instituted additional layers of review to ensure adequate property maintenance, FHA’s divided approach to property custody will likely result in continuing conflicts. In March 2001, FHA instituted a program to improve servicer preservation and protection work that subsequently encountered significant implementation problems. FHA required its management and marketing contractors to identify neglected maintenance on conveyed properties and demand that servicers reimburse FHA for such work. As a result of this program, management and marketing contractors issued hundreds of “demand letters” to servicers requiring refunds within 10-day deadlines. In many cases, servicer representatives we contacted said that the management and marketing contractors (1) failed to provide the demand letters until the expiration of the 10-day deadline or later, (2) demanded refunds to FHA on properties that had been conveyed as long as 2 years prior to the date of the letter, and (3) required refunds for maintenance that is not the servicers’ responsibility. Figure 7 provides an example of a dispute between an FHA servicer and a management and marketing contractor. A senior FHA official acknowledged that the program experienced significant implementation problems and said that revised procedures have been initiated. According to the official, FHA did not provide the contractors with adequate training on the types of maintenance deficiencies for which contractors could demand refunds from servicers. 
FHA has taken several steps to help clarify procedures and resolve disputes between contractors and servicers: FHA servicers can appeal contractor demand letters to the local regional office and subsequently to HUD’s National Servicing Center in Oklahoma City for a final decision. FHA is in the process of drafting additional guidance to management and marketing contractors that will clarify the circumstances under which the contractors can demand refunds from servicers on neglected preservation and protection work. FHA has instituted a pilot program in which independent inspectors examine properties at the time servicers obtain possession and control, which will allow FHA to identify damage caused by occupants and note preservation and protection work that must be completed prior to conveyance. (FHA officials said that in fiscal year 2002 inspectors will review about 250 properties and noted that there are plans to expand the program to 6,000 properties by the latter part of fiscal year 2003.) Despite these initiatives, the complexity of FHA’s property maintenance procedures under divided custody will likely continue to generate disputes between servicers and management and marketing contractors. FHA’s appeals process and preconveyance inspection program provide opportunities to resolve such disputes. However, these initiatives add layers of oversight and review to FHA’s foreclosure and property sale processes, at FHA’s expense, that are not present at the other organizations, which have established unified property custody. Delayed Maintenance and Marketing Could Increase FHA’s Foreclosure Losses By delaying the initiation of critical maintenance and marketing and generating disputes between servicers and contractors, FHA’s divided property custody approach can place financial demands on the MMI.
To the extent that property values deteriorate as a result of such factors as debris left on properties for extended periods or lawns left uncut due to disputes between servicers and contractors, the properties may sell for lower prices than would otherwise be the case. As a result of lower property sales prices, FHA would recover less of what it had already paid in claims to servicers. That is, revenues flowing into the MMI would be lower than would otherwise be the case. FHA Has Considered Proposals to Establish Unified Property Custody During 2000 and 2001, FHA considered proposals to revise procedures and establish unified property custody and thereby help ensure prompt maintenance and marketing strategies necessary to preserve property values. Two large servicers made proposals to FHA under which the servicers would sell foreclosed properties rather than conveying them to management and marketing contractors. The servicers proposed to clean up all debris on foreclosed properties immediately and develop strategies to market the properties at an earlier stage than was currently the case. One large servicer estimated that its proposal would shorten the sales time on FHA foreclosed properties by about 59 days and save FHA approximately $18 million annually. In addition, a senior FHA official said that FHA has considered implementing a pilot program under which management and marketing contractors would assume responsibility for foreclosed properties earlier in the process, perhaps as early as the foreclosure sale. Although FHA officials said that they were supportive of proposals to establish unified property custody, they have not established firm time frames to test their feasibility. A senior FHA official said that FHA and HUD must first resolve outstanding legal and contractual issues before the proposals can be tested. 
More recently, FHA has proposed a pilot program in which servicers would assign mortgages to FHA rather than completing the foreclosure process and conveying the properties to management and marketing contractors. FHA would then sell the defaulted notes to the private sector for servicing and/or sale, thereby permitting one entity to control both the foreclosure and property sale processes. FHA officials said that the program will likely reduce but not eliminate the number of properties that the organization acquires through foreclosure proceedings and becomes responsible for selling. FHA and VA Have Not Adequately Supported Title Insurance Policy Expenditures FHA and VA together spent approximately $31.5 million in 2000 reimbursing servicers for the costs associated with purchasing title insurance policies, which are used to help establish that the organizations have title to foreclosed properties that have been conveyed and can be resold to home buyers or investors. In contrast, Fannie Mae, Freddie Mac, and RHS do not reimburse servicers for the purchase of new title insurance policies and report few title-related problems in selling their foreclosed properties. FHA and VA do not collect adequate data to determine whether their expenditures on title insurance policies are cost effective. The Organizations’ Approaches to Title Insurance Policies Differ Although title insurance policies can provide protection against certain title defects to borrowers—and their lenders—when new mortgages are originated, Fannie Mae, Freddie Mac, and RHS have determined that new title policies are not necessary during the foreclosure process. Fannie Mae and Freddie Mac officials that we contacted said that servicer foreclosure attorneys are responsible for identifying parties and resolving title issues prior to the foreclosure sales. Organization officials said that the foreclosure sales typically resolve the vast majority of title issues. 
The officials also said that they do not purchase new title insurance policies at conveyance. Fannie Mae and Freddie Mac officials said that the organizations may reconvey foreclosed properties to servicers if serious title problems arise, but that such cases are very rare. Similarly, RHS does not require servicers to purchase title insurance policies at foreclosure sales, and RHS officials said that serious title-related problems are rare. FHA and VA statutes provide the organizations with discretion in establishing what constitutes acceptable evidence of title. Although FHA and VA have established several potential forms of evidence of title, title insurance policies are the de facto standard. FHA and servicer officials said that it is the standard practice for servicers to purchase title insurance policies, which cost about $500, and that FHA reimburses the servicers for two-thirds of the costs of these policies. According to a VA official, each of VA’s 46 field offices has the authority to determine acceptable evidence of clear title, and available evidence suggests that the majority of VA district offices require title insurance. For example, we contacted a judgmental sample of seven VA district offices nationwide and found that six require title insurance policies at conveyance. VA reimburses servicers for the full costs associated with these policies. FHA also typically provides financing for new title insurance policies when its foreclosed properties are sold to home buyers or investors, while some VA offices may provide similar financing. FHA and VA officials said that title insurance policies provide an extra level of assurance that the organizations have title to foreclosed properties. As discussed below, however, FHA and VA officials are reconsidering the cost-effectiveness of their title insurance policy expenditures. 
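The per-policy cost sharing described above (a policy costing about $500, with FHA reimbursing two-thirds of the cost and VA reimbursing the full cost) can be made explicit with simple arithmetic. The figures are the approximate amounts cited by officials; actual policy costs vary.

```python
# Rough per-policy arithmetic for the title insurance cost sharing
# described above. The $500 policy cost is the approximate figure cited
# by FHA and servicer officials; actual costs vary by locality.

POLICY_COST = 500.0  # approximate cost of one title insurance policy

fha_share = POLICY_COST * 2 / 3   # FHA reimburses two-thirds of the cost
va_share = POLICY_COST            # VA reimburses the full cost

print(round(fha_share, 2))  # 333.33
print(va_share)             # 500.0
```

These per-policy amounts, multiplied across thousands of foreclosures, account for the roughly $31.5 million the two agencies spent on such reimbursements in 2000.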
FHA and VA Do Not Collect Data to Support the Need for Title Insurance Expenditures Although FHA and VA generally expect servicers to purchase new title insurance policies as evidence of title, they do not collect data necessary to support these expenditures. An FHA official said that FHA does not collect data on the number of times that title policies are invoked during the foreclosure process, although the official said this rarely happens. Several management and marketing contractor representatives that we contacted also said that they do not collect data on how often FHA title insurance policies are invoked after conveyance. However, the contractor representatives also said that title insurance policies were rarely if ever needed during the foreclosed property sale process. Senior FHA officials said that they were not certain whether the costs associated with purchasing title insurance policies were justified and were considering revising these policies. FHA officials also stated that the state foreclosure laws require servicers’ attorneys to conduct title searches to ensure that all parties may be notified and the foreclosure properly conducted. These title searches are conducted prior to the foreclosure sale, and FHA compensates servicers for the costs associated with these title searches. FHA officials stated that the title insurance policies that servicers subsequently purchase are based on the same title searches. Therefore, the FHA officials stated that the title insurance policies offer questionable additional value. We note that two large servicers have made proposals to sell foreclosed properties for FHA, rather than conveying them to management and marketing contractors. The servicers proposed to sell FHA properties without obtaining new title insurance policies. 
VA officials have not implemented the recommendations in a 1995 VA inspector general (IG) report that questioned the cost-effectiveness of the department’s title insurance policy expenditures. Although VA began to encourage servicers to purchase title insurance policies in 1989 to facilitate title reviews on foreclosed properties, the IG report concluded that the policies did not meet this objective. The VA IG report found that title defects were rare and could generally be easily corrected by VA staff. The report also found that VA paid about $23.9 million in title insurance premiums in fiscal years 1992 through 1994, but would have paid only about $121,169 to correct title defects if it had “self-insured” (i.e., paid to resolve these title defects) during the same period. The IG report recommended that VA work with foreclosure attorneys to ensure that foreclosures produce adequate evidence of title and discourage the purchase of title insurance policies. VA agreed in 1995 to direct its district offices to review their title evidence requirements annually and to document that new title insurance policies are cost effective in establishing title. Despite this 1995 VA policy, five of the six VA offices we contacted that encourage the purchase of title insurance policies at conveyance could not produce the required memorandum on cost-effectiveness. One VA district office produced a memorandum, which merely stated that it would be cost effective to encourage the purchase of title insurance policies to facilitate title reviews. However, an official at the VA district office in Houston, which does not require a new title insurance policy as evidence of title, said that its servicers generally provide a copy of a title search, which costs about $75, as evidence of title. 
The VA official said that title searches save time and money associated with obtaining new title policies and that the office rarely encounters title problems when selling foreclosed properties to home buyers or investors. Because they do not collect additional data on the cost-effectiveness of title requirements, VA and its district offices cannot determine whether similar strategies to lower the costs of purchasing title insurance policies could be used successfully. According to VA officials, they are in the process of reviewing their approach to foreclosed property management. As part of this review, VA officials said that they are reviewing the cost-effectiveness of the organization’s title insurance expenditures. VA officials said that they expect to make a decision regarding whether to continue expending funds on title insurance policies by early in calendar year 2003. Available Data Suggest FHA Takes Longer to Sell Foreclosed Properties Determining the organizations’ comparative performance in selling foreclosed properties is difficult because FHA and RHS do not collect all of the data necessary to do so. On the basis of available data, we estimate that it takes about 55 to 110 days longer to sell foreclosed FHA properties than is the case for the other organizations. We note that it is also difficult to determine the extent to which FHA’s divided approach to foreclosed property custody contributes to FHA’s comparatively slow performance. Other factors, such as FHA’s approach to compensating servicers for their foreclosure expenses, may also play a role. Under certain conditions, FHA may inadvertently provide servicers with financial incentives to use the maximum allotted time to complete the foreclosure process.
FHA and RHS Do Not Collect Data on the Time Used To Sell Foreclosed Properties Fannie Mae, Freddie Mac, and VA collect data on the time that it takes to sell foreclosed properties from the date of the foreclosure sale until the properties are sold to home buyers or investors. Although FHA does collect data on the time foreclosed properties are in management and marketing contractors’ inventory (from the date servicers convey properties to the contractors until the date they are sold to home buyers or investors), this period represents only a portion of the entire postforeclosure sale timeline. The data FHA collects does not measure the time that servicers have custody of properties, including the redemption, eviction, and preservation and protection periods. An RHS official said that RHS currently does not maintain centralized and automated data on the time that it takes its servicers to sell properties, from the time of the foreclosure sales until properties are sold. Senior FHA and RHS officials told us that they plan to collect additional data. As discussed earlier, FHA officials said that they plan to conduct preconveyance inspections on about 250 properties in fiscal year 2002 and hope to expand the program to include as many as 6,000 properties by the latter part of fiscal year 2003. As part of these inspections, FHA officials said that they plan to collect information on property foreclosure sales dates. The FHA officials said that they would use the information to help assess servicer performance. RHS officials said that, beginning in fiscal year 2002, servicers will report on the time necessary to sell their foreclosed properties. RHS officials said that collecting the data in a centralized and accessible manner would allow them to better monitor servicer performance. RHS officials expect the automated system to be in place by February 2003.
Available Data Suggest FHA Foreclosed Property Sales Performance Is Comparatively Slow Table 6 provides available data on the time that elapsed in acquiring and selling FHA, VA, RHS, Fannie Mae, and Freddie Mac foreclosed properties in 2000, and the data indicates that FHA properties took the longest to sell, at 292 days. Although the Fannie Mae, Freddie Mac, and VA data are more comprehensive, we had to collect data and estimate comparable time frames for FHA and RHS. From four large servicers, we collected state-by-state averages of the number of days from the foreclosure sale until property conveyance. We then added these data to data provided by FHA on the average number of days that it takes management and marketing contractors to sell foreclosed properties. We also contacted the two largest RHS servicers, and they provided data on the time their staffs took to sell foreclosed properties, as measured from the foreclosure sale until the properties were sold to home buyers or investors. Servicer Compensation Requirements May Inadvertently Provide Financial Incentives to Take the Maximum Time Allotted to Complete the Foreclosure Process While FHA’s divided approach to foreclosed property custody likely contributes to the length of time needed to sell FHA properties, other factors may contribute as well. We could not, however, determine the amount of time each factor contributes to FHA’s lengthy process from foreclosure sale to property sale. These other contributing factors may include the strength of real estate markets and FHA’s sale of foreclosed properties to nonprofit organizations. In addition, the compensation that FHA is required to provide to servicers may help explain the comparatively long period that it takes to sell FHA’s foreclosed properties.
Potentially, FHA’s process for compensating servicers for expenses associated with foreclosures inadvertently provides servicers with financial incentives to take the maximum time that FHA allows to complete foreclosure proceedings. Servicers typically use borrowed funds to finance the payment of outstanding debt on mortgages in the process of foreclosure. When servicers bid on properties at foreclosure sales, they continue to use borrowed funds to finance the recovered properties. Servicers pay interest on these borrowings, which is referred to as the cost of funds, until they file claims with FHA when properties are conveyed. FHA compensates servicers for their interest expenses at what is known as the debenture interest rate, which is set slightly below the rate at which the foreclosed mortgages were originated. We found that the difference between the servicers’ cost of funds, and the debenture interest rate that these servicers receive, can be significant. Our analysis shows that, from 1985 through the first half of 2001, the FHA debenture rate average consistently exceeded the cost of funds (see fig. 8). In September 2001, during a period of declining interest rates, the difference between this average rate and the cost of funds was about 4.5 percentage points. Given that FHA’s compensation system provides servicers with a potentially significant profit, these servicers appear to have financial incentives to take the maximum time allotted to complete foreclosure proceedings. Conversely, Fannie Mae, Freddie Mac, and VA servicers do not receive any interest after foreclosure sales because they convey properties immediately after the foreclosure sales. Therefore, Fannie Mae, Freddie Mac, and VA bear the interest costs associated with holding properties themselves and have financial incentives to complete proceedings and sell properties as quickly as possible. 
We note that RHS’s compensation system for servicers is similar to that of FHA and may provide its servicers with financial incentives to take the maximum time allotted to complete foreclosure and property sale proceedings. However, RHS has also set strict deadlines for servicers to complete foreclosure proceedings and sell properties. RHS generally requires servicers to sell properties within 6 months of foreclosure sales and will not pay any interest on loss claims beyond that date. As noted earlier, FHA has established time frames for servicers to complete foreclosure proceedings and will not pay interest expenses if servicers exceed these guidelines. The FHA guidelines are not directly comparable to the RHS guidelines because FHA servicers are not required to sell properties. Conclusions FHA’s divided approach to foreclosed property custody is inefficient, delays critical maintenance and marketing necessary to preserve property values, results in disputes between servicers and contractors, and likely contributes to the lengthy period of time that available data suggest it takes to sell FHA properties. As a result of the inefficiencies in FHA procedures, the organization maintains or has initiated several complex layers of oversight, such as an appeals process and preconveyance inspections, to ensure that properties are properly maintained. FHA officials have appropriately considered alternative procedures to establish unified property custody, but have not yet implemented pilot programs to test their feasibility. Although unified property custody would streamline FHA’s procedures, it need not come at the expense of current FHA policies that encourage servicers to pursue loss mitigation, and it need not result in foreclosure proceedings being initiated faster than is currently the case. Nor would unified custody affect state and federal laws that provide protections to homeowners. 
If properly designed, unified custody procedures would have built-in financial incentives that preserved foreclosed property values and resulted in faster sales to the benefit of the MMI and to neighboring communities. FHA and VA cannot provide information on either the costs of purchasing title insurance policies during the foreclosure process or their benefits. We estimate that FHA and VA spent $31.5 million on new title insurance policies in 2000. We also found that the limited evidence that is available suggests purchasing new title policies is not cost-effective and that less expensive options may be available. In particular, the VA IG report questioned the cost-effectiveness of title insurance policies, and management and marketing contractor representatives we contacted reported few if any instances when title policies were necessary. Further, Fannie Mae, Freddie Mac, and RHS do not encourage servicers to purchase new title insurance policies during foreclosures and report few title-related problems. FHA and RHS do not collect data necessary to measure the time that elapsed in acquiring and selling foreclosed properties. Specifically, neither organization collects data on the foreclosure sales date. Without such data, the organizations cannot assess the performance of servicers in fulfilling their obligations in either managing or selling foreclosed properties. FHA officials stated that collecting the foreclosure sale date would be helpful in measuring the performance of servicers in completing foreclosure sales and in obtaining control of properties. Likewise, RHS officials have stated that collecting such data would be useful in measuring servicer performance in selling properties. Both organizations plan to collect data on foreclosure sales dates. Collecting such data should not pose an undue burden on FHA and RHS servicers, given that we were able to collect it from several large servicers.
Recommendations To provide for the most effective acquisition and sale of FHA’s foreclosed properties, we recommend that the secretary of the Department of Housing and Urban Development (HUD) establish unified property custody as a priority for FHA. The HUD secretary should determine the optimal method of unified property custody. That is, the HUD secretary should determine the method of unified custody that best ensures FHA borrowers continuing benefits from loss mitigation and homeowner protections under state and federal laws, provides appropriate incentives for limiting the time and expense of acquiring and selling properties, and ensures that properties are maintained to the benefit of the FHA insurance fund and communities. The HUD secretary should then implement the optimal method for establishing unified custody. If this optimal method requires additional statutory authority, the HUD secretary should seek it. We also recommend that the HUD secretary and the secretary of the Department of Veterans Affairs (VA) immediately assess the cost-effectiveness of their expenditures on title insurance purchases. The HUD secretary and VA secretary should revise these policies if the costs of purchasing these title insurance policies are not clearly justified by their benefits and less expensive alternative means of establishing title are available. Finally, to improve the quality of information available to FHA and RHS on the time necessary to sell foreclosed properties, we recommend that the HUD secretary and the secretary of the Department of Agriculture collect additional data from their servicers. Specifically, the HUD secretary should collect data on foreclosure sales dates, and the secretary of agriculture should collect data on foreclosure sales dates and the dates that foreclosed properties are sold to home buyers or investors, and maintain this data in a format that is easily accessible by RHS managers.
Agency Comments and Our Evaluation We obtained written comments on a draft of this report from FHA and VA officials. The written comments are presented in appendixes II and III, respectively. In addition, we sought and obtained further clarification of FHA’s written comments from a senior FHA official. RHS, Fannie Mae, and Freddie Mac officials chose to provide oral comments on a draft of the report. All of the organizations provided technical comments, which have been incorporated into the final report as appropriate. FHA agreed with our recommendations to establish unified property custody as a priority, assess its title insurance policy expenditures, and collect additional data. First, FHA stated that unified custody could streamline processes and oversight, reduce holding time, and increase the net return on the sale of foreclosed properties. FHA also stated that there were statutory explanations for the current divided approach to foreclosed property custody, and that statutory changes are necessary to implement specific approaches to unified custody. FHA stated that it would continue research to determine the feasibility of unified custody within the framework of existing statutes and to identify regulatory and contractual issues that would be necessary to facilitate such a change. Further, FHA stated that it would explore statutory changes that could increase the efficiency of its property sale program. Second, FHA agreed that its expenditures on title insurance policies during the foreclosure process add questionable value. FHA stated that it is reviewing these expenditures and has begun to investigate alternative approaches. Third, FHA stated that it agreed with our recommendation to collect data from all servicers on foreclosure sales dates, although it may take a year or more to implement the recommendation for several reasons, such as the need to change computer systems. 
We believe that collecting data on foreclosure sales dates is crucial for FHA to assess servicer performance in managing foreclosed properties and in assessing the various approaches to establishing unified property custody. VA agreed with the report’s conclusions and the recommendation that the organization immediately assess the cost-effectiveness of its expenditures on title insurance policies. VA said that it would review its title insurance expenditures as part of its ongoing analysis of its loan-guaranty related business processes and policies. VA expects to complete this review early in calendar year 2003. RHS stated that it agreed with our recommendation that the organization collect data on foreclosure sales dates and the dates that foreclosed properties are sold to home buyers or investors, and it also agreed that the data be maintained in a format that is easily accessible by RHS managers. RHS said that it has plans to collect the additional data and is in the process of developing a comprehensive, fully automated system that will be used to support both payment and monitoring of loss claims. RHS estimates that the automated system will be in place by February 2003. Fannie Mae and Freddie Mac officials said that the draft report accurately portrayed their foreclosure, property sale, and data collection programs. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs and other interested members of Congress and congressional committees. We will also send copies to the HUD secretary, the VA secretary, the secretary of agriculture, the chief executive officer of Fannie Mae, and the chief executive officer of Freddie Mac. We will also make copies available to others upon request. Please contact me or Mathew J. 
Scire at (202) 512-6794 if you or your staff have any questions concerning this report. Key contributors to this report were Andrew E. Finkel, Diana Gilman, Rachel DeMarcus, Jill M. Johnson, Kyong H. Lee, Wesley M. Phillips, Barbara M. Roesmann, and Richard Vagnoni. Appendix I: Scope and Methodology To provide information on state foreclosure laws and compare the organizations’ procedures, we interviewed officials from the organizations, mortgage servicers, FHA management and marketing contractors, law firms that specialize in foreclosures, the Mortgage Bankers Association, the American Land Title Association, and Mortgage Insurance Companies of America. We also contacted other experts in housing market finance and foreclosures and three banks in the Boston area that hold mortgages in their portfolios and manage foreclosures and property sales. We reviewed relevant rules and regulations provided by the organizations, reports and studies, state and federal statutes pertaining to foreclosures, and statistics on mortgage defaults and foreclosures. We also developed summaries of the organizations’ foreclosure and property sale procedures. To identify the effects of FHA’s procedures on property maintenance and marketing, we contrasted FHA’s procedures to those of the other organizations and identified procedures that can delay steps necessary to sell properties and that offer no clear and corresponding benefits. We discussed FHA’s procedures with organization officials, servicer representatives, management and marketing contractor officials, and experts in real estate management. We also collected data and examples from FHA officials and large mortgage servicers that demonstrate the effects of FHA’s procedures. In particular, we asked three large servicers to provide data on their performance in conveying foreclosed properties within deadlines established by FHA, which they agreed to do. 
To assess the cost-effectiveness of FHA and VA title insurance expenditures, we reviewed their policies regarding the evidence necessary to establish title to foreclosed properties. We also requested that FHA and VA provide data on the benefits of their title insurance expenditures. In addition, we reviewed a relevant VA IG report on VA’s title evidence policies, and we contacted seven VA district offices to assess their compliance with a VA policy that was implemented in response to the report’s recommendations. To estimate the time that it takes to acquire and sell foreclosed properties, we collected data from Fannie Mae, Freddie Mac, and RHS. To estimate the time necessary to acquire and sell FHA properties, we collected data from four large servicers who conducted about 30 percent of all FHA foreclosures in 2000. We judgmentally selected these servicers on the basis of their size and willingness to provide data. Specifically, the servicers agreed to provide data on the average amount of time that they held custody of FHA foreclosed properties (from the time of the foreclosure sale until conveyance to FHA management and marketing contractors). We combined these data with national data provided by FHA on the time that it takes its contractors to sell foreclosed properties. Because RHS does not yet collect data on the time that it takes to sell foreclosed properties, we collected data from the two largest RHS mortgage servicers, which service about 30 percent of all RHS mortgages. We focused our analysis on the time from the foreclosure sale until properties are sold to homeowners or investors, because the organizations encourage servicers to pursue loss mitigation strategies until foreclosure sales. The organizations generally consider loss mitigation as the best means to minimize the cost and disruptions associated with foreclosures, and we did not want to imply that the completion of foreclosure sales should proceed any faster than is currently the case. 
The period of time from foreclosure sale until properties are sold to investors is also a common measure of performance in the real estate industry. We did not independently verify the data provided by the organizations or servicers. Due to data limitations, we were not able to estimate the number of days that various factors, such as FHA’s approach to foreclosed property custody and the strength of real estate markets, contribute to the total time taken to acquire and sell FHA foreclosed properties. To discuss another potential contributing factor, we collected historical data that shows the differences between FHA’s debenture interest rate and large servicers’ cost of funds. To estimate large servicers’ cost of funds, we used the interest rate on commercial paper. We conducted our work in Washington, D.C.; Boston; Dallas; Manchester, N.H.; and Oklahoma City between June 2001 and January 2002 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from FHA and VA, which are reprinted in appendixes II and III, respectively. RHS, Fannie Mae, and Freddie Mac officials decided to provide oral comments on a draft of this report. Each of the organizations provided technical comments, which have been incorporated into this report where appropriate. Appendix II: Comments from the Department of Housing and Urban Development The following are GAO’s comments on the Department of Housing and Urban Development’s March 5, 2002, letter. GAO Comments 1. We agree that FHA data show a significant decline in the average time that management and marketing contractors held properties between 1999 and 2001. In fact, FHA data show that nearly all of the improvement in inventory time occurred by 2000.
Specifically, data that FHA provided subsequent to the official comment letter show that the average inventory time declined from 270 days in calendar year 1999 to 185 days in calendar year 2000, and 173 days in calendar year 2001. According to a senior FHA official, the 219-day figure cited for 1999 in the comment letter is incorrect. Because we used data for calendar year 2000, the report reflects the significant decline in inventory times that FHA has reported. Therefore, we believe that the report fairly describes the time that it takes to sell FHA properties. 2. This footnote has been deleted. 3. The FHA comment letter paraphrases our recommendation differently than the way the full recommendation was written in the draft report. We recommended that the HUD secretary establish unified property custody as a priority for FHA and determine the optimal method of unified property custody. FHA paraphrased our recommendation as stating that HUD should determine if unified property custody represents the optimal method of property custody and, if so, implement it after seeking any required statutory authority. We contacted a senior FHA official to obtain clarification on FHA’s position on our recommendation. The FHA official said that FHA, in paraphrasing the recommendation in the draft report, did not mean to change the recommendation’s meaning. FHA agrees with our recommendation that it should establish unified property custody as a priority. FHA is conducting analysis to determine the feasibility of establishing unified custody within the existing statutory framework and to identify regulatory and contractual changes that would have to be resolved to implement unified property custody. 4. We do not agree with FHA that actions taken to suspend foreclosure proceedings in these 7,800 cases contributed to the time taken to sell FHA properties in 2000. 
The timeline we provided measures from the date of the foreclosure sale until properties are sold to homebuyers or investors. Because suspension of foreclosure proceedings on these mortgages occurred prior to the completion of the foreclosure sale, the suspension would not add time to the period measured in this report. 5. We revised the figure. 6. We revised the table. 7. We added language to the report body. 8. The final report notes that FHA is developing guidance to clarify the circumstances under which management and marketing contractors can demand refunds from servicers for preservation and protection work that has not been completed according to standards. 9. We have revised the report language. 10. As stated in the report body, the section 601 authority may allow for unified property custody. While FHA’s Accelerated Claims Disposition Program could reduce the number of properties that FHA acquires through foreclosure, it is too early to judge its ultimate success. Further, as the FHA commissioner states, even if the demonstration program is successful and expanded to the majority of defaulted mortgage loans, it will not eliminate FHA’s responsibility for acquiring and selling foreclosed properties entirely. We believe that FHA should establish unified custody as a priority for any such foreclosed properties for which it becomes responsible in the future. 11. We disagree with FHA that the draft report stated that FHA does not retain data on the number of properties that are reconveyed due to irresolvable title defects. The draft report stated that FHA does not collect data on the number of times that title insurance policies are used during the foreclosure process or the types of problems that require title insurance. Therefore, we made no changes in response to this comment. 12. As stated in the report draft, the debenture rate can significantly exceed servicers’ cost of funds. In fact, the debenture rate has exceeded servicers’ cost of funds since 1985.
In a rising interest rate environment, such as last occurred in 1984, servicers’ cost of funds may exceed the debenture rate. 13. We added further language to the final report noting that FHA has established time frames in each state for completing foreclosures. Appendix III: Comments from the Department of Veterans Affairs
Federal programs in the Federal Housing Administration (FHA), the Department of Veterans Affairs (VA), and the Rural Housing Service (RHS) promote mortgage financing for low-income, first-time, minority, veteran, and rural home buyers. Congress has also chartered private corporations--Fannie Mae and Freddie Mac--to provide mortgage lending and to promote homeownership opportunities. Many homeowners fall behind in their mortgage payments each year due to unemployment, health problems, or the death of a provider. To avoid high-cost foreclosure proceedings when home buyers fall behind on their obligations, FHA, VA, and RHS instruct mortgage servicers, typically large financial institutions, to assist the home buyers in bringing their mortgage payments current. Despite these efforts, mortgage servicers carried out foreclosure proceedings in 118,000 cases in 2000 under the direction of the organizations. FHA procedures delay the initiation of critical steps necessary to preserve the value of foreclosed properties and to sell them quickly. Although Fannie Mae, Freddie Mac, VA, and RHS designate one entity as responsible for the custody, maintenance, and sale of foreclosed properties, FHA divides these responsibilities between its mortgage servicers and management and marketing contractors, which operate largely independently of one another. Determining the organizations' comparative performance in selling foreclosed properties is difficult because FHA and RHS do not collect all of the data necessary for comparison. However, on the basis of available data, it takes nearly 90 days longer to acquire and sell FHA foreclosed properties than VA properties, and about 130 to 145 days longer to acquire and sell FHA properties than RHS, Fannie Mae, and Freddie Mac properties.
Magnitude of Unpaid Taxes of GSA Contractors Over 3,800 GSA contractors had about $1.4 billion in unpaid federal taxes as of June 30, 2005. This represents approximately 10 percent of GSA contractors during fiscal year 2004 and the first 9 months of fiscal year 2005. We took a conservative approach to identifying the amount of tax debt owed by GSA contractors, and therefore the amount is likely understated. Characteristics of Contractors’ Unpaid Federal Taxes As shown in figure 1, 85 percent of the approximately $1.4 billion in unpaid taxes owed by GSA contractors consisted of corporate income and payroll taxes. The other 15 percent included excise, unemployment, individual income, and other types of taxes. Unlike in our previous reports on contractors with tax debts, a larger percentage of the taxes owed by GSA contractors consisted of corporate income taxes, which are unpaid amounts that corporations owe on the income of their business. This was due to a handful of GSA contractors that owed significant corporate income tax debts as of June 2005. Excluding this handful of cases, payroll taxes make up about 40 percent of the outstanding taxes owed by GSA contractors. Unpaid payroll taxes include amounts that an employer withholds from an employee’s wages for federal income taxes, Social Security, and Medicare—but does not remit to IRS—and the related matching contributions of the employer for Social Security and Medicare. Employers who do not remit payroll taxes to the federal government are subject to civil and criminal penalties. The amount of unpaid federal taxes we identified among GSA contractors—$1.4 billion—is likely understated. First, to avoid overestimating the amount owed by government contractors, we intentionally limited our scope to tax debts that were affirmed by either the contractor or a tax court for tax periods prior to 2005.
We did not include the most current tax year because recently assessed tax debts that appear as unpaid taxes may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid, abated, or both within a short period. We eliminated these types of debt by focusing on unpaid federal taxes for tax periods prior to calendar year 2005 and eliminating tax debt of $100 or less. Also limiting the completeness of our estimate of the unpaid federal taxes of GSA contractors is the fact that the IRS tax database reflects only the amount of unpaid taxes either reported by the contractor on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. During our review, we identified instances from our case studies in which GSA contractors failed to file tax returns for a particular tax period and, therefore, were listed in IRS records as having no unpaid taxes for that period. Further, our analysis did not attempt to account for businesses or individuals that purposely underreported income and were not specifically identified by IRS. According to IRS, underreporting of income is the largest component of the estimated $345 billion annual gross tax gap. IRS estimates that underreporting accounts for more than 80 percent of the total gross tax gap. Consequently, the true extent of unpaid taxes for these businesses and individuals is not known. GSA Contractors Involved in Abusive and Potentially Criminal Activity Related to the Federal Tax System As discussed previously, businesses with employees are required by law to collect, account for, and transfer income and employment taxes withheld from employees’ wages to IRS. 
Businesses that fail to remit payroll taxes to the federal government are liable for the amounts withheld from employees, and IRS can assess a trust fund recovery penalty (TFRP), equal to the total amount of taxes not collected or not accounted for and paid, against individuals whom IRS determines to be “willful and responsible” for the nonpayment of withheld payroll taxes. In addition to civil penalties, criminal penalties exist for an employer’s failure to turn over withheld employee payroll taxes to IRS. Willful failure to remit payroll taxes is a criminal felony offense punishable by imprisonment of not more than 5 years, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. Our audit and investigation of the 25 case-study business contractors showed substantial abuse or potential criminal activity: all 25 had unpaid payroll taxes and had diverted those funds for personal or business use. The 25 case-study contractors typically operate in wage-based industries, providing security, building maintenance, computer services, and personnel services for GSA and the departments of Defense, Homeland Security, Justice, and Veterans Affairs. The types of contracts awarded to these contractors included products and/or services related to law enforcement, disaster relief, and national security. The amount of unpaid taxes associated with these case studies ranged from approximately $100,000 to over $9 million. Furthermore, we determined that several of the case-study contractors had unpaid state and local taxes, and state and local taxing authorities had filed multiple tax liens against them. Subsequent to the award of the most recent GSA contract, one case-study company and its owner were debarred from future federal contracts for illegal activity unrelated to their failure to pay payroll taxes. Table 1 highlights 10 case studies with unpaid taxes.
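The scale of the withholding obligations at issue can be illustrated with rough arithmetic. The sketch below uses the standard FICA rates for Social Security and Medicare; the income tax withholding rate is a hypothetical placeholder for illustration, not a figure from this testimony.

```python
# Rough illustration of the payroll-tax amounts a non-compliant
# employer keeps instead of remitting to IRS. FICA rates are the
# standard employee shares (the employer matches each); the income
# tax withholding rate is an assumed, illustrative average.

SOCIAL_SECURITY = 0.062      # employee share; employer matches it
MEDICARE = 0.0145            # employee share; employer matches it
INCOME_TAX_WITHHELD = 0.10   # hypothetical average withholding rate

def unremitted_share(wages: float) -> dict:
    """Break down what an employer owes IRS on a given amount of wages."""
    # Trust fund portion: amounts withheld from employees' pay.
    # This is the basis for the trust fund recovery penalty (TFRP).
    trust_fund = wages * (INCOME_TAX_WITHHELD + SOCIAL_SECURITY + MEDICARE)
    # Employer's own matching Social Security and Medicare contribution.
    employer_match = wages * (SOCIAL_SECURITY + MEDICARE)
    total = trust_fund + employer_match
    return {
        "trust_fund_withheld": trust_fund,
        "employer_match": employer_match,
        "total": total,
        "pct_of_wages": total / wages * 100,
    }

result = unremitted_share(100_000)
```

Even the withheld (trust fund) portion alone exceeds 15 percent of wages under these assumed rates, consistent with the cost advantage described later in this testimony.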
Our investigations revealed that, despite their companies owing substantial amounts of taxes to the IRS, some owners had substantial personal assets—including commercial real estate, interest in a chain store, or multiple luxury vehicles. Further, several owners owned homes worth over $1 million. See appendix III for the details on the other 15 GSA contractor case studies. We are referring the 25 cases detailed in our report to IRS so that it can determine whether additional collection action or criminal investigation is warranted. The following provides illustrative detailed information on several of these cases. Case 1: This contractor provides emergency supplies for civilian agencies. At the same time the company was not paying its taxes, the company made a loan to a company officer for hundreds of thousands of dollars. The company subsequently filed for bankruptcy owing a substantial amount of federal and state taxes. After the company came out of bankruptcy, the company again failed to remit all of its taxes, including payroll taxes. IRS assessed a trust fund recovery penalty against the company and the owner for willful failure to remit payroll taxes. Case 2: The company provided security services for a civilian agency. Our investigative work indicates that an owner of the company made multiple cash withdrawals, totaling close to $1 million, while the contractor owed payroll taxes. The company’s owner used the cash withdrawals to fund an unrelated business and purchase a men’s gold bracelet worth over $25,000. The company’s owner has been investigated for fraud. Case 4: The company provides security services for a civilian agency. Our investigative work indicates that the owner of the company did not make tax deposits because the company did not have the funds to pay employee costs or other business expenses. However, we found that the company owner owns multiple properties worth over $1 million. 
The owner also owes IRS approximately $200,000 for personal income taxes. Tax Debts Are Generally Not Considered When Awarding Contracts Neither federal law, as implemented in the FAR, nor GSA internal policies require GSA contracting officers to examine tax debt when awarding contracts, nor do they provide guidance as to what role, if any, tax debt should play in determining whether prospective contractors meet the general criteria of responsible contractors. Also, due to a statutory restriction on the disclosure of taxpayer information, even if tax debts were specifically to be considered in the awarding of contracts, no coordinated or independent mechanism exists for contracting officers to obtain complete information on contractors that have unpaid tax debt. Therefore, GSA does not screen contractors for tax debts prior to awarding contracts to GSA-paid contractors and GSA interagency contractors, and ultimately, contractors with unpaid federal taxes receive contracts from the federal government. Contractors with Federal Tax Debts Are Not Explicitly Prohibited from Doing Business with the Federal Government Federal law, as implemented in the FAR, and GSA internal policies do not expressly prohibit a contractor with unpaid federal taxes from being awarded contracts from the federal government. Although the FAR requires that federal agencies do business only with responsible contractors, it does not specifically require federal agencies to deny the award of contracts to businesses and individuals that have unpaid taxes, unless the contractor was specifically debarred or suspended by a debarring official for specific actions, such as conviction for tax evasion. As part of the contractor responsibility determination for prospective contractors, the FAR requires contracting officers to determine whether a prospective contractor meets several specified standards, including adequate financial resources and a satisfactory record of integrity and business ethics.
However, the FAR does not require contracting officers to consider tax debt in making this determination. Similarly, GSA policies implementing the FAR do not provide any additional guidance to GSA contracting officers on whether or how tax debts should be considered when making a determination of financial responsibility. According to GSA officials, contracting officers may consider delinquent tax debts as part of their overall determination of a prospective contractor’s financial capability; however, the focus of such an evaluation is on determining whether the contractor has the financial capability to deliver the products and services. Thus, there is no expectation that the contracting officer will consider tax compliance when evaluating whether companies have the integrity or ethics to perform the contract. In addition, according to GSA officials, the determination of financial capability is made only when awarding new contracts. Thus, if a contractor fails to pay its tax debts after the contract award, GSA contracting officers will not consider this for the duration of the contract or at the subsequent exercise of any options to extend, which for certain GSA Supply Schedule contracts can last up to 20 years. The FAR specifies that unless compelling reasons exist, agencies are prohibited from soliciting offers from, or awarding contracts to, contractors that are debarred, suspended, or proposed for debarment for various reasons, including tax evasion. Conviction for tax evasion is cited as one of the causes for debarment, while commission of tax evasion, i.e., indictment, is cited as a cause for suspension. However, the deliberate failure to remit taxes, in particular payroll taxes, while a felony offense, will likely not result in a company being debarred or suspended unless the contractor is indicted for or convicted of the crime.
During our work, we found that none of the contractors described in this testimony, nor the 97 contractors we reported on in our previous work, had been charged with tax evasion, despite having engaged in abusive and potentially criminal activity related to the tax system. Restrictions on Tax Data Hamper Making Contractor Responsibility Determinations Current law restricts contracting officers’ access to tax debt information unless it is reported by prospective contractors themselves or disclosed in public records. Consequently, contracting officers do not have ready access to information on unpaid tax debts to assist in making contractor responsibility determinations with respect to financial capability, ethics, and integrity. Contracting officers do not have a coordinated and independent mechanism to obtain accurate tax debt information on contractors that abuse the tax system. Federal law does not permit IRS to disclose taxpayer information, including tax debts. Thus, unless the taxpayer provides consent, certain tax debt information can be discovered only from public records, when IRS files a federal tax lien against the property of a tax debtor. However, contracting officers are not required to obtain credit reports, which provide public record information, and when credit reports are obtained, GSA contracting officers generally focus on the contractor’s credit score and not necessarily on any liens or other public information. In addition, public record information is limited because IRS does not file tax liens on all tax debtors, and, while IRS has a central repository of tax liens, contracting officers do not have access to that information. Further, the listing of a federal tax lien in the credit reports of businesses or individuals may not be a reliable indicator of a contractor’s tax indebtedness because of deficiencies in IRS’s internal controls that have resulted in IRS not always releasing tax liens from property when the tax debt has been satisfied.
Unless reported by prospective contractors themselves, contracting officers face significant difficulties obtaining or verifying tax compliance information on prospective contractors. For example, in one contractor file we reviewed, a GSA official did inquire about a federal tax lien with a prospective contractor. The prospective contractor provided documentation to GSA demonstrating the satisfaction of the tax liability covered by that lien. However, because the GSA official could not obtain information from the IRS on tax debts, this official was not aware that the contractor had other unresolved tax debts unrelated to this particular tax lien. GSA Contractors Not Required to Undergo Further Determination of Responsibility or Background Investigation GSA interagency contractors are not only approved to do business with GSA, but with all federal agencies. The FAR does not require agencies that use contracts awarded by other agencies to perform additional background or other investigation to validate the awarding agencies’ determination that the contractors are responsible. Agency officials at the four agencies at which we inquired—the departments of Justice, State, and Veterans Affairs, and the National Aeronautics and Space Administration— generally stated that they did not perform additional background or other investigations when using contractors selected for interagency contracts. These officials informed us that they had assumed GSA performed all the screening necessary to ensure that the contractors were responsible contracting sources. Consequently, when GSA awards interagency contracts to contractors with tax debt, contractors with tax debts will be given an opportunity to do business with other federal agencies for the duration of the GSA contract. Contractors with Tax Debts Have Unfair Cost Advantage in Contract Competition GSA contractors with tax debts have an unfair advantage in costs when competing with contractors that pay their taxes. 
This is particularly true for wage-based industries that provide relatively basic types of goods and services, such as security and moving services. The most egregious abuse, not remitting employee payroll taxes, saved these companies over 15 percent of their employees’ wages. Clearly, contractors that do not pay their taxes do not bear the same costs as tax-compliant contractors when competing on contracts. As a result, when in direct competition for homogeneous types of goods and services in wage-based businesses, these contractors could offer prices for their goods and services that are lower than those of their tax-compliant competitors. Our investigations showed that some GSA contractors that did not fully pay their payroll taxes were issued task orders based solely on price over competing contractors that did not have any tax debts. The following provides information on these cases. Case 1: A GSA Schedule contractor was competitively awarded a task order from the GSA schedule in the late 1990s to provide temporary personnel services over another GSA contractor that was compliant with its taxes. The task order award was based solely on the hourly cost of the temporary employee. At the time, the contractor had owed taxes for at least 10 years. This contractor had a history of incurring payroll taxes in one company and then, upon being assessed a trust fund recovery penalty on that company while making little or no payment, closing that company and starting another. The owner later renewed the contract under a new company name and Taxpayer Identification Number. Case 2: A GSA Schedule contractor was issued two competitive task orders for services related to moving office furniture and equipment. On both task orders, the contractor’s offer for services was significantly less than three competing offers on the first order and two competing offers on the second order.
The contractor owed about $700,000 in unpaid taxes (mostly payroll taxes), while its competitors did not owe any unpaid taxes. Because the contractor did not pay its payroll taxes, a significant cost in a wage-based business, the contractor’s cost structure gave it more flexibility in setting its price in the competition for federal contracts. Concluding Comments There is widespread concern today about contractor fraud and related ethics problems in federal government contracting. However, except for contractors charged with or convicted of tax evasion, no laws or policies exist today that prevent GSA contractors engaged in abusive and potentially criminal activity related to the federal tax system from being awarded contracts and doing business with federal agencies. Aside from any general concerns about the federal government doing business with contractors that do not pay their taxes, allowing these contractors to do business with the federal government while not paying their taxes could create an unfair competitive advantage for them. In essence, the current contract award process fails to encourage contractors to pay these taxes. This creates a disincentive for contractors to pay their fair share of taxes and could lead to further erosion in compliance with the nation’s tax system. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) determine the magnitude of tax debts owed by GSA contractors; (2) identify examples of GSA contractors that have tax debts and are also engaged in potentially abusive, fraudulent, or criminal activities; and (3) determine whether GSA screens contractors for tax debts and criminal activities prior to awarding contracts and at the exercise of any government contract options. To identify the magnitude of unpaid taxes owed by GSA contractors, we first identified the federal contractors that were either GSA interagency contractors or were paid by GSA.
To identify GSA-paid contractors, we obtained from the Department of the Treasury’s Financial Management Service (FMS) the Payments, Claims, and Enhanced Reconciliation (PACER) database, which contains all Automated Clearing House (ACH) and check payments made by FMS on behalf of GSA to federal contractors during fiscal year 2004 and the first 9 months of fiscal year 2005. To identify contractors screened by GSA’s Federal Supply Service (FSS), we obtained and analyzed GSA data on Multiple Award Schedule (MAS) contracts and other FSS award contracts as recorded in the Federal Supply Service Automated Supply System (FSS-19). To identify contractors screened by GSA’s Public Buildings Service (PBS), we obtained and analyzed GSA data from its Pegasys and FPDS-NG systems. To identify contractors screened by GSA’s Federal Technology Service, we obtained and analyzed GSA data from its Pegasys system. To identify GSA contractors with unpaid federal taxes, we obtained and analyzed the Internal Revenue Service (IRS) unpaid assessment data as of June 30, 2005. We matched the records of contractors screened and/or paid by GSA to the IRS unpaid assessment data using the taxpayer identification number (TIN) field. We also matched data on competing bidders (those that were not awarded the task order) to the IRS assessment database using the TIN field to determine whether they owed tax debt. To avoid overestimating the amount owed by contractors with unpaid tax debts and to capture only significant tax debts, we excluded from our analysis tax debts and payments meeting specific criteria, establishing a minimum threshold in the amount of tax debt and in the amount of payments to be considered when determining whether a tax debt is significant.
The criteria we used to exclude tax debts were as follows: tax debts that IRS classified as compliance assessments or memo accounts for financial reporting, tax debts from calendar year 2005 tax periods, and contractors with total unpaid taxes of $100 or less. These criteria were used to exclude tax debts that might be under dispute, generally duplicative or invalid, or recently incurred. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by a court, or could be invalid or duplicative of other taxes already reported. We excluded tax debts from calendar year 2005 tax periods to eliminate tax debt that may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid or abated within a short period. We further excluded tax debts of $100 or less because they are insignificant for the purpose of determining the extent of taxes owed by GSA contractors. To identify indications of abuse or potentially criminal activity, we selected 25 GSA contractors for detailed audit and investigation. The 25 contractors were chosen using a nonrepresentative selection approach based on our judgment, data mining, and a number of other criteria. Specifically, we narrowed the selection to 25 contractors with unpaid taxes based on the amount of unpaid taxes, the number of unpaid tax periods, the amount of payments reported by GSA and FMS, indications that owners might be involved in multiple companies with tax debts, and the goal of selecting contractors doing business with a variety of federal agencies. We obtained copies of automated tax transcripts and other tax records (for example, revenue officers’ notes) from IRS as of June 30, 2005, and reviewed these records to exclude contractors that had recently paid off their unpaid tax balances, and we considered other factors before reducing the number of businesses to 25 case studies.
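The TIN match and exclusion criteria described above can be sketched as follows. The field names and record layouts here are illustrative assumptions, not the actual PACER, FSS-19, or IRS unpaid assessment schemas.

```python
# Illustrative sketch of the TIN match and the exclusion criteria
# described above. All field names are assumed for illustration.

def significant_tax_debts(contractors, irs_assessments,
                          cutoff_year=2005, minimum_debt=100.0):
    """Join contractor records to IRS unpaid assessments on TIN, then
    drop debts that may be disputed, duplicative, or recently incurred."""
    unpaid_by_tin = {}
    for rec in irs_assessments:
        # Exclude compliance assessments and memo accounts: neither agreed
        # to by the taxpayer nor affirmed by a court, possibly duplicative.
        if rec["status"] in ("compliance_assessment", "memo_account"):
            continue
        # Exclude recent tax periods, which are often routinely resolved.
        if rec["tax_period_year"] >= cutoff_year:
            continue
        tin = rec["tin"]
        unpaid_by_tin[tin] = unpaid_by_tin.get(tin, 0.0) + rec["unpaid_amount"]
    flagged = []
    for c in contractors:
        total = unpaid_by_tin.get(c["tin"], 0.0)
        # Exclude totals of $100 or less as insignificant.
        if total > minimum_debt:
            flagged.append({"tin": c["tin"], "name": c["name"], "unpaid": total})
    return flagged

# Hypothetical usage with made-up records:
contractors = [{"tin": "01", "name": "Alpha Corp"},
               {"tin": "02", "name": "Beta LLC"}]
assessments = [
    {"tin": "01", "status": "affirmed", "tax_period_year": 2003, "unpaid_amount": 5000.0},
    {"tin": "01", "status": "memo_account", "tax_period_year": 2003, "unpaid_amount": 900.0},
    {"tin": "02", "status": "affirmed", "tax_period_year": 2005, "unpaid_amount": 80.0},
]
flagged = significant_tax_debts(contractors, assessments)
```

In this hypothetical run, only Alpha Corp is flagged: its memo account is excluded, and Beta LLC's only debt falls in an excluded 2005 tax period.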
We performed additional searches of criminal, financial, and public records. In cases where record searches and IRS tax transcripts indicate that the owners or officers of a business are involved in other related entities that have unpaid federal taxes, we also reviewed the related entities and the owner(s) or officer(s), in addition to the original business we identified. For the selected 25 cases, our investigators also contacted some contractors, performed interviews, and reviewed contract files to determine the extent of price competition and to identify bidders on competitively awarded contracts. To determine the extent to which contracting officers are to consider tax debts or other criminal activities, we examined the Federal Acquisition Regulation (FAR) and GSA policies and procedures for conducting responsibility determinations on prospective contractors, including specific guidance on responsibility determinations and periodic reviews focusing on the quality of contract awards. We discussed acquisition policies and procedures used to award contracts with officials from the Office of Chief Acquisition, FSS, FTS, and PBS. As part of these discussions, we asked whether contracting officers specifically consider tax debts or perform background investigations to determine whether a prospective contractor is a responsible source before the contract is awarded. We also discussed with GSA officials whether any review is performed by the contracting officer at the option to extend the contract. Additionally, we interviewed an official from GSA’s Kansas City Credit and Finance Center to determine how the center makes financial determination recommendations and the role, if any, that tax debts have on that recommendation. 
To obtain an understanding of what steps other federal agencies take to screen GSA supply schedule contractors for tax debts or other criminal activities, we interviewed procurement officials at selected civilian agencies (including the National Aeronautics and Space Administration and the departments of Justice, State, and Veterans Affairs). We selected these agencies based on a number of criteria, including national security concerns and the amount of payments to contractors, especially those with tax debts. As part of these discussions, we determined the level of reliance agencies placed on GSA’s contractor qualification determinations when awarding contracts, even sensitive contracts such as security, to contractors that GSA has approved as responsible sources. We conducted our audit work from June 2005 through January 2006 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Data Reliability Assessment For the IRS unpaid assessments data, we relied on the work we performed during our annual audits of IRS’s financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some fields in IRS’s tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address this report’s objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS’s masterfile to IRS’s general ledger, identified no material differences. For the Payments, Claims, and Enhanced Reconciliation (PACER) database, we interviewed FMS officials responsible for the database and reviewed documentation provided by FMS supporting quality reviews of its databases.
In addition, we performed electronic testing of specific data elements in the database that we used to perform our work. To help ensure the reliability of GSA-provided data, we interviewed GSA officials concerning the reliability of the data provided to us. In addition, we performed electronic testing of specific data elements in the databases that we used to perform our work and performed other procedures to ensure the completeness of the contract data provided by GSA. We also reviewed the results of the GSA Inspector General’s audit of the system’s internal controls completed in support of GSA’s fiscal year 2004 consolidated and combined financial statements. Based on our discussions with agency officials, review of agency documents, and our own testing, we concluded that the data elements used for this testimony were sufficiently reliable for our purposes. Appendix II: Background As the federal government’s principal business agent, the General Services Administration (GSA) has activities and programs that are diverse and have governmentwide implications. Through its supply schedules and governmentwide acquisition contracts, GSA arranges for federal agencies to purchase billions of dollars of goods and services directly from private vendors. In addition, its telecommunication and computer services and real estate activities involve huge sums of money and extensive interaction with the private sector. GSA provides goods and services and develops policy through a network of 11 regional offices and a central office in Washington, D.C. GSA’s programs are generally run by its three service components: the Federal Supply Service (FSS), the Federal Technology Service (FTS), and the Public Buildings Service (PBS). FSS assists federal agencies in acquiring a full range of products, including over 4 million commonly used commercial items such as furniture, computers, tools, equipment, and motor vehicles.
FSS also supports agencies in acquiring services, such as professional consulting, travel, transportation, and property management. FSS has followed a self-service business model, using contracts, called supply schedule contracts, that are designed to be flexible, simple to use, and consistent with commercial buying practices. FSS negotiates master contracts with vendors, seeking discounts off commercial list prices that are at least as favorable as the discounts offered to those vendors’ most favored customers. Federal agencies can then use these supply schedule contracts to issue task orders through which goods and services are acquired. FTS provides customers with telecommunications products and services—voice, data, and video—and a full range of IT products and services. Unlike FSS, FTS has followed a full-service business model, providing assisted procurement services to help agencies define and fill their IT and telecommunications requirements. FTS is a major user of the FSS supply schedule contracts as well as a range of contract vehicles that FTS and other federal agencies have awarded, commonly known as governmentwide acquisition contracts. PBS is the primary property manager for the federal government, using government-owned buildings and privately owned leased facilities. To meet the office space needs of federal agencies, GSA hires and manages private sector professionals, such as architects, engineers, and contractors, to design, renovate, and construct federal buildings. In addition, GSA leases space in cities and small towns when leasing is the practical answer to meeting federal space needs. From a financing standpoint, GSA is unusual among federal agencies in that most of its funding does not come from direct appropriations from Congress. Instead, GSA’s funding comes from the fees GSA charges agencies for the goods and services provided and the rents from its buildings.
As such, GSA must encourage other agencies to acquire goods and services from the contracts GSA has awarded to help cover its operating costs. In fiscal year 2004, GSA reported revenues of approximately $20 billion to cover the costs of its operations. Appendix III: Contractors with Unpaid Taxes Table 1 in the main portion of this testimony provides data on 10 detailed case studies. Table 2 shows the remaining case studies that we audited and investigated. As with the 10 cases discussed in the body of this testimony, we also found substantial abuse or potentially criminal activity related to the federal tax system during our review of these 15 case studies. The case studies involving businesses with employees primarily involved unpaid payroll taxes. Several of the companies negotiated an installment or repayment agreement with the Internal Revenue Service (IRS) but subsequently defaulted on that agreement.
In February 2004 and again in June 2005, GAO testified that some Department of Defense (DOD) and civilian agency federal contractors abused the federal tax system with little consequence. The problems we previously identified with contractors with unpaid taxes have led to concerns over whether any interagency contractors, such as those on the General Services Administration's (GSA) federal supply schedule, failed to pay their taxes. GSA, through its federal supply schedule and other interagency contracts, arranges for federal agencies to purchase billions of dollars of goods and services directly from private vendors. GAO was asked to determine whether GSA contractors, including both contractors paid by GSA and GSA interagency contractors, have unpaid federal taxes, and if so, to (1) determine the magnitude of tax debts owed by GSA contractors; (2) identify examples of GSA contractors that have tax debts and are also engaged in potentially abusive, fraudulent, or criminal activities; and (3) determine whether GSA screens contractors for tax debts and criminal activities prior to awarding contracts and at the exercise of any government contract options. Over 3,800 GSA contractors had tax debts totaling about $1.4 billion as of June 30, 2005. This represented approximately 10 percent of GSA contractors during fiscal year 2004 and the first 9 months of fiscal year 2005. GAO investigated 25 GSA contractors engaged in abusive and potentially criminal activity. These businesses had not forwarded to IRS payroll taxes withheld from their employees, as well as other taxes. Willful failure to remit payroll taxes is a felony under U.S. law. Furthermore, some company owners diverted payroll taxes for personal gain or to fund their businesses. These contractors worked for a number of federal agencies, including the departments of Defense, Justice, and Homeland Security.
A number of owners or officers of the 25 GSA contractors have significant personal assets, including commercial properties, houses worth over $1 million, and luxury vehicles. In addition, several of the owners of these GSA contractors gambled hundreds of thousands of dollars during the same period in which they were not paying the taxes that their businesses owed. Neither federal law, as implemented by the Federal Acquisition Regulation (FAR), nor GSA policies require contracting officers to specifically consider tax debts in making contracting decisions, either at initial award or when considering options to extend. In addition, federal law generally prohibits the disclosure of taxpayer data, and consequently contracting officers have no access to tax data directly from the IRS. GSA contractors that do not pay their tax debts could gain an unfair competitive advantage because they may have lower costs than tax-compliant contractors bidding on government contracts. This is especially true for wage-based businesses that provide homogeneous types of goods and services. GAO's investigation identified instances in which contractors with tax debts won awards based on price differentials over tax-compliant competing contractors.
Background

Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position relative to the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Currently, there is one operational POES satellite and two operational DMSP satellites that are positioned so that they cross the equator in the early morning, midmorning, and early afternoon. In addition, the government relies on a European satellite, called the Meteorological Operational (MetOp) satellite, for satellite observations in the midmorning orbit.
In addition to the operational satellites, NOAA, the Air Force, and a European weather satellite organization maintain older satellites that still collect some data and are available to provide limited backup to the operational satellites should they degrade or fail. The last POES satellite was launched in February 2009. The Air Force plans to launch its two remaining DMSP satellites as needed. Figure 1 illustrates the current operational polar satellite constellation. Polar satellites gather a broad range of data that are transformed into a variety of products. Satellite sensors observe different bands of radiation wavelengths, called channels, which are used for remotely determining information about the earth’s atmosphere, land surface, oceans, and the space environment. When first received, satellite data are considered raw data. To make them usable, processing centers format the data so that they are time-sequenced and include earth-location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into channel-specific data sets, called sensor data records and temperature data records. These data records are then used to derive weather and climate products called environmental data records. These environmental data records include a wide range of atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; land surface products showing snow cover, vegetation, and land use; ocean products depicting sea surface temperatures, sea ice, and wave height; and characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. 
Figure 2 is a simplified depiction of the various stages of satellite data processing, and figure 3 depicts examples of two different weather products. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program—NPOESS—capable of satisfying both civilian and military requirements. The converged program, NPOESS, was considered critical to the nation’s ability to maintain the continuity of data required for weather forecasting and global climate monitoring. NPOESS satellites were expected to replace the POES and DMSP satellites in the morning, midmorning, and afternoon orbits when they neared the end of their expected life spans. To manage this program, DOD, NOAA, and NASA formed a tri-agency Integrated Program Office, with NOAA responsible for overall program management for the converged system and for satellite operations, the Air Force responsible for acquisition, and NASA responsible for facilitating the development and incorporation of new technologies into the converged system. When the primary NPOESS contract was awarded in August 2002, the program was estimated to cost about $7 billion through 2018. The program was to include the procurement and launch of 6 satellites over the life of the program, with each satellite hosting a subset of 13 instruments. The planned instruments included 11 environmental sensors and 2 systems supporting specific user services (see table 1). To reduce the risk involved in developing new technologies and to maintain climate data continuity, the program planned to launch the demonstration satellite in May 2006. This satellite was intended to demonstrate the functionality of selected instruments that would later be included on the NPOESS satellites. The first NPOESS satellite was to be available for launch in March 2008.
In the years after the program was initiated, NPOESS encountered significant technical challenges in sensor development, program cost growth, and schedule delays. By November 2005, we estimated that the program’s cost had grown to $10 billion, and the schedule for the first launch was delayed by almost 2 years. These issues led to a 2006 decision to restructure the program, which reduced the program’s functionality by decreasing the number of planned satellites from 6 to 4, and the number of instruments from 13 to 9. As part of the decision, officials decided to reduce the number of orbits from three (early morning, midmorning, and afternoon) to two (early morning and afternoon) and to rely solely on the European satellites for midmorning orbit data. Even after the restructuring, however, the program continued to encounter technical issues in developing two sensors, significant tri-agency management challenges, schedule delays, and further cost increases. Because the schedule delays could lead to satellite data gaps, in March 2009 agency executives decided to use S-NPP as an operational satellite. Later, in August 2009, faced with costs that were expected to reach about $15 billion and launch schedules that were delayed by over 5 years, the Executive Office of the President formed a task force, led by the Office of Science and Technology Policy, to investigate the management and acquisition options that would improve the NPOESS program. As a result of this review, in February 2010, the Director of the Office of Science and Technology Policy announced that NOAA and DOD would no longer jointly procure the NPOESS satellite system; instead each agency would plan and acquire its own satellite system. Specifically, NOAA would be responsible for the afternoon orbit and the observations planned for the first and third satellites. DOD would be responsible for the early morning orbit and the observations planned for the second and fourth satellites.
The partnership with the European satellite agencies for the midmorning orbit was to continue as planned. When this decision was announced, NOAA and NASA immediately began planning for a new satellite program in the afternoon orbit called JPSS. DOD began planning for a new satellite program in the morning orbit, called the Defense Weather Satellite System, but later decided to terminate the program and reassess its requirements, as directed by Congress.

Overview of Initial NOAA Plans for the JPSS Program

After the decision was made to disband the NPOESS program in 2010, NOAA began the JPSS satellite program. Key plans included:
- acquiring and launching two satellites, called JPSS-1 and JPSS-2, for the afternoon orbit;
- relying on NASA for system acquisition, engineering, and integration;
- completing, launching, and supporting S-NPP;
- developing and integrating five sensors on the two satellites;
- finding alternative host satellites for selected instruments that would not be accommodated on the JPSS satellites; and
- providing ground system support for S-NPP, JPSS, and the Defense Weather Satellite System; data communications for MetOp and DMSP; and data processing for NOAA’s use of microwave data from an international satellite.

In 2010, NOAA estimated that the life cycle costs of the JPSS program would be approximately $11.9 billion for a program lasting through fiscal year 2024, which included $2.9 billion in NOAA funds spent on NPOESS through fiscal year 2010. Subsequently, the agency undertook a cost estimating exercise in which it validated that the cost of the full set of JPSS functions from fiscal year 2012 through fiscal year 2028 would be $11.3 billion. After adding the agency’s sunk costs, which had increased to $3.3 billion through fiscal year 2011, the program’s life cycle cost estimate totaled $14.6 billion. This amount was $2.7 billion higher than the $11.9 billion estimate for JPSS when NPOESS was disbanded in 2010.
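The life cycle cost figures above are simple sums; a quick sketch (figures taken directly from the text, expressed in billions of dollars, with variable names of our own choosing) confirms the arithmetic:

```python
# Check of the JPSS life cycle cost arithmetic described above,
# in billions of dollars. Variable names are illustrative only.

validated_fy2012_2028 = 11.3  # validated cost of full JPSS functions, FY2012-FY2028
sunk_through_fy2011 = 3.3     # NOAA's sunk costs through FY2011
initial_2010_estimate = 11.9  # estimate when NPOESS was disbanded in 2010

life_cycle_estimate = validated_fy2012_2028 + sunk_through_fy2011
increase = life_cycle_estimate - initial_2010_estimate

print(round(life_cycle_estimate, 1))  # 14.6
print(round(increase, 1))             # 2.7
```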
In working with the Office of Management and Budget to establish the president’s fiscal year 2013 budget request, NOAA officials stated that they agreed to cap the JPSS life cycle cost at $12.9 billion through 2028, to fund JPSS at roughly $900 million per year through 2017, and to merge funding for two climate sensors into the JPSS budget. Because this cap was $1.7 billion below the expected $14.6 billion life cycle cost of the full program, NOAA decided to remove selected elements from the satellite program. Table 2 compares the planned cost, schedule, and scope of NOAA’s satellite programs at different points in time. We have issued a series of reports on the NPOESS and JPSS programs highlighting technical issues, cost growth, and key management challenges affecting the tri-agency program structure. In June 2012, we reported that while NOAA officials communicated publicly and often about the risk of a polar satellite data gap, the agency had not established plans to mitigate the gap. At the time, NOAA officials stated that the agency would continue to use existing satellites as long as they provide data and that there were no viable alternatives to the JPSS program. However, our report noted that a more comprehensive mitigation plan was essential since it is possible that other governmental, commercial, or foreign satellites could supplement the polar satellite data. Because it could take time to adapt ground systems to receive, process, and disseminate an alternative satellite’s data, we noted that any delays in establishing mitigation plans could leave the agency little time to leverage its alternatives. We recommended that NOAA establish mitigation plans for risks associated with pending satellite gaps in the afternoon orbit as well as potential gaps in the early morning and midmorning orbits. 
NOAA agreed with the report’s recommendation and noted that the National Environmental Satellite, Data, and Information Service—a NOAA component agency—had performed analyses on how to mitigate potential gaps in satellite data and planned to provide a report by August 2012. More recently, in February 2013, we added the potential gap in weather satellite data to our biennial High-Risk list. In that report, we noted that satellite data gaps in the morning or afternoon polar orbits would lead to less accurate and timely weather forecasting; as a result, advanced warning of extreme events would be affected. Such extreme events could include hurricanes, storm surges, and floods. For example, the National Weather Service performed case studies to demonstrate how its forecasts would have been affected if there were no polar satellite data in the afternoon orbit, and noted that its forecasts for the “Snowmaggedon” winter storm that hit the Mid-Atlantic coast in February 2010 would have predicted a less intense storm further east, with about half of the precipitation at 3, 4, and 5 days before the event. Specifically, the models would have under-forecasted the amount of snow by at least 10 inches. Similarly, a European weather organization recently reported that NOAA’s forecasts of Hurricane Sandy’s track could have been hundreds of miles off without polar-orbiting satellites—rather than identifying the New Jersey landfall within 30 miles 4 days before landfall, the models would have shown the storm remaining at sea. Such degradation in forecasts and warnings would place lives, property, and our nation’s critical infrastructure in danger. We reported that the length of an afternoon polar satellite data gap could span from 17 months to 3 years or more. In one scenario, S-NPP would last its full expected 5-year life (to October 2016), and JPSS-1 would launch as soon as possible (in March 2017) and undergo on-orbit checkout for a year (until March 2018). 
In that case, the data gap would extend 17 months. In another scenario, S-NPP would last only 3 years, as noted by NASA managers concerned with the workmanship of selected S-NPP sensors. Assuming that the JPSS-1 launch occurred in March 2017 and the satellite data were certified for official use by March 2018, this gap would extend for 41 months. Of course, any problems with JPSS-1 development could delay the launch date and extend the gap period. Figure 4 depicts four possible gap scenarios. We also noted that NOAA had recently established a mitigation plan for a potential 14- to 18-month gap in the afternoon orbit, which identified and prioritized options for obtaining critical observations, including alternative satellite data sources and improvements to data assimilation in models, and listed technical, programmatic, and management steps needed to implement these options. However, these plans were only a beginning. We suggested that NOAA must make difficult decisions on which steps it would implement to ensure that its mitigation plans are viable when needed, including how these plans would be integrated with the agency’s broader end-to-end plans for sustaining weather forecasting capabilities.

NOAA Has Made Progress on JPSS Development, but Continues to Face Challenges in Completing S-NPP Products, Revising the Program’s Scope, and Meeting Schedules

NOAA has made progress toward the JPSS program objectives of sustaining the continuity of NOAA’s polar-orbiting satellite capabilities through the S-NPP, JPSS-1, and JPSS-2 satellites by (1) delivering S-NPP data to weather forecasters and (2) completing significant instrument and spacecraft development for the JPSS-1 satellite. However, the program is behind schedule in validating the readiness of S-NPP products and has experienced delays on the ground system schedules for the JPSS-1 satellite. Moreover, the program is moving to revise its scope and objectives to reduce costs and prioritize NOAA’s weather mission.
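The gap scenarios described earlier reduce to simple month counting between an assumed end of S-NPP data and the date JPSS-1 data are certified for use. A minimal sketch (dates taken from the scenarios; the helper function is our own) reproduces the 17- and 41-month figures:

```python
# Back-of-envelope check of the projected afternoon-orbit data gaps.
# Dates are (year, month) pairs taken from the gap scenarios.

def months_between(start, end):
    """Whole months from a (year, month) start to a (year, month) end."""
    return (end[0] - start[0]) * 12 + (end[1] - start[1])

# Scenario 1: S-NPP lasts its full 5-year life, to October 2016;
# JPSS-1 data certified for official use by March 2018.
print(months_between((2016, 10), (2018, 3)))  # 17

# Scenario 2: S-NPP lasts only 3 years after its October 2011 launch,
# to October 2014; JPSS-1 data certified by March 2018.
print(months_between((2014, 10), (2018, 3)))  # 41
```

Any slip in the JPSS-1 launch date shifts the certification date and lengthens both gaps by the same number of months.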
Until it addresses challenges in product and ground system development, the program office may continue to experience delays in delivering actionable S-NPP data to users and in meeting program development schedules.

Weather Forecasters Are Using Selected S-NPP Products, but the JPSS Program Is Behind Schedule in Validating Products and Unaware of the Full Extent to Which They Are Being Used

In order to sustain polar-orbiting earth observation capabilities through the S-NPP satellite, over the past 18 months the JPSS program had planned to complete activation and commissioning of the S-NPP satellite, transition the satellite from interim to routine operations, and deliver 76 data products that were precise enough for use in operational weather observations and forecasts. To develop the precise data products, NOAA established a process for calibrating and validating its products. Under this process, most products (which are primarily sensor data records and environmental data records) proceed through three different levels of algorithm maturity—the beta, provisional, and validated levels. NOAA had originally planned to complete efforts to validate S-NPP products by October 2013, which was 2 years after the S-NPP satellite was launched. It is not enough, however, to simply deliver validated products. Both the Software Engineering Institute and GAO recommend tracking whether customers are receiving the expected value from products once they are deployed, and whether corrective actions are needed. Moreover, in April 2013 the Executive Office of the President’s National Science and Technology Council released a national strategy for civil earth observations that called for agencies to, among other things, track the extent to which earth observation data are actually being used, track whether the data had an impact, and provide data users a mechanism to provide feedback regarding ease of use, suspected quality issues, and other aspects of the data.
The JPSS program has made progress on S-NPP since launching the satellite in October 2011. Specifically, the program completed satellite activation and commissioning in March 2012, and transitioned from interim operations under NASA to routine operations under NOAA in February 2013. The program also made key upgrades to the ground system supporting S-NPP. For example, in November 2012 the office completed an interim backup command and control facility that could protect the health and safety of the satellite if unexpected issues occurred at the primary mission operations facility. In addition, the JPSS program office has been working to calibrate and validate S-NPP products in order to make them precise enough for use in weather-related operations. While the program office plans to have 18 products validated for operational use by September 2013, it is behind schedule for the other products. Specifically, the program expects to complete validating 35 S-NPP products by the end of September 2014 and 1 other product by the end of September 2015, almost 1 and 2 years later than originally planned. In addition, the program office reported that 15 products do not need to be validated, one product’s validation date has not been established, and 6 products do not have estimated validation dates because the program plans to remove them from its requirements. The program categorized its products by their priority, ranging from priority-1 for the highest priority products, to priority-4 for the lowest priority products. According to NOAA and NASA officials, the S-NPP products’ validation has been delayed in part because of issues initially identified on VIIRS that had to be corrected and additional time needed to validate environmental data record products that require observations of seasonal weather phenomena. 
Further, program officials stated that they rebaselined the planned product validation timelines in November 2011 and have been generally meeting the target dates of this revised plan. Table 3 illustrates program-reported data on the number of products in each priority level, examples of products, and the estimated validation date for the last product at each level. Even though S-NPP products are not at the validated stage in which products are ready for operational use, the National Weather Service (NWS) has accepted certain products for use in its operational systems. For example, the JPSS program office reported that NWS is using ATMS temperature data records in its operational forecasts, and that the Alaska Weather Forecast Offices are using VIIRS imagery in their forecasts. In addition, NWS’s National Centers for Environmental Prediction is evaluating CrIS sensor data records for use in numerical weather prediction, but has not yet used the data operationally because it is in the midst of a computer upgrade. Officials also stated that the program obtains information on the operational use of S-NPP data from other NOAA offices, including the National Ocean Service and the National Marine Fisheries Service. While NOAA is aware of these uses, it does not track the extent to which key satellite data users—including users from the Air Force, Navy, Forest Service, European weather offices, and academic institutions—have incorporated S-NPP data into their operations or whether corrective actions are needed to make the products more accurate or more effective for specific users. Program officials noted that they are not required to tailor products to meet non-NOAA user requirements, and that they do not have a tracking mechanism that would allow them to identify which entities are using the data.
They noted, however, that the program obtains informal reports from customer representatives through various working groups and forums, such as the Low-earth Orbiting Requirements Working Group and the JPSS Customer Forum. While these efforts obtain information from known customer groups, they do not meet best practices for actively tracking whether customers are using the products, receiving the expected value, or in need of product corrections. Until the program office tracks the use of S-NPP and future JPSS products, it will not have full knowledge of the extent to which products are being used, the value they provide to end users, and whether corrective actions are needed. More significantly, without information on who is using S-NPP data, NOAA will be unable to ensure that the significant investment made in this satellite is not wasted.

Development of JPSS Flight Project Is on Track, but Scheduling Issues on the Ground System Have Caused Delays

In order to sustain polar-orbiting earth observation capabilities, the program is working to complete development of the JPSS-1 systems in preparation for a March 2017 launch date. To manage this initiative, the program office organized its responsibilities into two separate projects: (1) the flight project, which includes sensors, spacecraft, and launch vehicles, and (2) the ground project, which includes ground-based data processing and command and control systems. Table 4 shows the JPSS projects and their key components. JPSS projects and components are at various stages of system development. The flight project has nearly completed instrument hardware development for the JPSS-1 satellite and has begun testing certain instruments. Also, the flight project completed a major design review for the JPSS-1 satellite’s spacecraft.
While the flight project’s development is on track, the ground project experienced delays in its planned schedule that could further delay major program milestones, including key reviews required to establish the program’s cost and schedule baseline. The flight project is generally on track with respect to planned JPSS-1 instrument and spacecraft development efforts. According to program reports of instrument development, the instruments for the JPSS-1 satellite are nearly complete. Specifically, as of July 2013, the instrument hardware ranged from 80 to 100 percent complete. Also, all of the instruments have completed or are scheduled to complete environmental testing reviews in 2013 and are to be delivered to the spacecraft by 2014. The spacecraft completed its critical design review—which evaluates whether the design is appropriately mature to continue with the final design and fabrication—in January 2013. While individual instruments have experienced delays, the key testing milestones and delivery dates for the instruments and spacecraft have generally held constant since the last key decision point in July 2012. CERES experienced a 10-month slip in its delivery date due to a technical issue with the instrument’s internal calibration monitor, and ATMS experienced an 8-month slip to its pre-environmental review due to an issue in one of the sensor’s channels, but even accounting for these slips, the instruments have a schedule reserve of 14 and 10 months, respectively. VIIRS is expected to be the last instrument to be delivered to the spacecraft and has a schedule reserve of 6 months. Also, between July 2012 and December 2012 instrument contractors’ estimated costs at completion increased by $29 million for ATMS, CrIS, and OMPS, while the cost for VIIRS decreased by $46 million. In addition, based on program reports of technical performance, the instruments and the spacecraft are generally meeting expected technical performance. 
Table 5 describes the current status of the components of the JPSS-1 flight project. The JPSS ground project has made progress in developing the ground system components, but scheduling issues have caused delays in the deployment of system upgrades. Specifically, between August 2012 and February 2013, the program office defined the ground system’s technical performance baseline, ordered and received the first increment of hardware for the next major software release, and transitioned S-NPP operational management from the JPSS program to NOAA’s office responsible for satellite operations. However, the program has delayed the delivery of key ground system upgrades needed to support JPSS-1 because the facilities needed for hardware installation, software development, and testing activities were not available when needed. The ground system upgrades, called block 1.5 and block 2.0, were originally scheduled to be delivered in January and December 2015, respectively. To address the problem in scheduling the facilities, NOAA delayed the delivery of block 1.5 and merged it with block 2.0. The program is now expecting to deliver both upgrades in December 2015. We have previously reported that compressing system development schedules introduces program risk because it implies the need to accomplish a larger number of activities in parallel and on time before the next major event can occur as planned. As a result, any complications in the merged ground system upgrades could affect the system’s readiness to support the JPSS-1 launch date.

NOAA Revised Program Scope to Focus on Weather Priorities and Reduce Costs

While NOAA is moving forward to complete product development on the S-NPP satellite and system development on the JPSS-1 satellite, the agency recently made major revisions to the program’s scope and planned capabilities and is moving to implement other scope changes as it finalizes its plans pending congressional approval.
We previously reported that, as part of its fiscal year 2013 budget process, NOAA was considering removing selected elements of the program in order to reduce total program costs from $14.6 billion to $12.9 billion. By October 2012, NOAA made the following changes in the program’s scope:
- develop two (instead of three) TSIS instruments as well as two free-flyer spacecraft and launch vehicles to accommodate the instruments;
- reduce the previously planned network of fifteen ground-based receptor stations to two receptor sites at the north pole and two sites at the south pole;
- increase the time it takes to obtain satellite data and deliver it to the end user from 30 minutes to 80 minutes on the JPSS-2 satellite;
- not install an interface data processing segment at the two Navy locations or at the Air Force Weather Agency; and
- withdraw future support for ground operations for DOD’s Defense Weather Satellite System, which was subsequently cancelled.

More recently, as proposed by the administration, NOAA began implementing additional changes in the program’s scope and objectives in order to meet the agency’s highest-priority needs for weather forecasting and reduce program costs from $12.9 billion to $11.3 billion. Specifically, NOAA has begun to:
- Transfer requirements for building the OMPS-limb and CERES follow-on climate sensors for the JPSS-2 satellite to NASA.
- Transfer the first free-flyer mission from the JPSS program to a separate NOAA program, called the Polar Free Flyer program, and cancel the second free-flyer mission. More information on the Polar Free Flyer program is provided in appendix II.
- Eliminate requirements for a legacy type of broadcast transmitter, which, according to NOAA officials, is in a spectrum range being crowded out by terrestrial users; this change is consistent with its European partners’ plans.
- Reduce science and algorithm requirements for lower-priority data products.
Reduce operations and sustainment costs based on increased efficiencies through moving from customized components to more off-the-shelf solutions. Reduce the mission life cycle by 3 years, from 2028 to 2025. While we were unable to precisely itemize the reductions in costs associated with various program changes, program officials provided rough estimates. The following table summarizes the reported cost reductions associated with key changes to the JPSS program. While there are a number of reasons for individual changes in the program, the key reason for the June 2012 changes was to meet the program’s $12.9 billion cost cap. The reasons for the more recent changes were to reduce mission costs and complexity, focus JPSS priorities on NOAA’s weather forecasting mission, and identify opportunities to reduce potential gaps between JPSS satellites, all of which an independent study on NOAA’s satellite program recommended in July 2012. While these are worthy goals, the changes NOAA implemented over the last 2 years will have an impact on those who rely on polar satellite data. Specifically, satellite data products will be delivered more slowly than anticipated because of the reduction in the number of ground stations, and military users may not obtain the variety of products, or the delivery rates, that they once anticipated because of the removal of their ground-based processing subsystems. Further, while not as obvious, other changes, including the removal of the communications downlink and the reduction of requirements for certain algorithms, could also affect specific groups of satellite data users. As NOAA moves to implement these program changes, it will be important to assess and understand the impact the changes will have on satellite data users.
JPSS Schedules Demonstrate Multiple Best Scheduling Practices, but Integration Problems and Other Weaknesses Reduce Confidence in the JPSS-1 Launch Date

The JPSS program office has established a preliminary integrated master schedule and implemented multiple scheduling best practices, but the integrated master schedule is not complete, and weaknesses in component schedules significantly reduce the program’s schedule quality as well as management’s ability to monitor, manage, and forecast satellite launch dates. The incomplete integrated master schedule and shortfalls in component schedules are due in part to the program’s plans to further refine the schedule, as well as to schedule management and reporting requirements that varied among contractors. Further, while the program is reporting a 70 percent confidence level in the JPSS-1 launch date, its analysis is likely to be overly optimistic because it was not conducted with an integrated schedule and included a component schedule with weaknesses. Until the program office completes its integrated master schedule and addresses weaknesses in component schedules, it will lack the information it needs to effectively monitor development progress, manage dependencies between schedules, and forecast the JPSS-1 satellite’s completion and launch.

The JPSS Program Has Not Yet Established a Complete Integrated Master Schedule

According to our guidance on best practices in scheduling, the success of a program depends in part on having an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others. The program schedule provides not only a road map for systematic project execution but also the means by which to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the program.
An integrated master schedule constitutes a program schedule as a network of logically linked sequences of activities that includes the entire required scope of effort, including the effort necessary from the government, contractors, and other key parties for a program’s successful execution from start to finish. Although the integrated master schedule includes all government, contractor, and external effort, the government program management office is ultimately responsible for its development and maintenance. The JPSS program office provided a preliminary integrated master schedule in June 2013, but this schedule is incomplete. The program’s June 2013 schedule is its first attempt to document a programwide integrated master schedule since it began in October 2010. The schedule contains the scope of work for key program components, such as the JPSS-1 and JPSS-2 satellites and the ground system, and cites linkages to more detailed component schedules. However, significant weaknesses exist in the program’s schedule. Specifically, about one-third of the schedule is missing logical relationships called dependencies that are needed to depict the sequence in which activities occur. Because a logic relationship dictates the effect of an on-time, delayed, or accelerated activity on subsequent activities, any missing or incorrect logic relationship is potentially damaging to the entire network. Complete network logic between all activities is essential if the schedule is to correctly forecast the start and end dates of activities within the plan. Program documentation acknowledges that this schedule is not yet complete and the program office plans to refine it over time. Until the program office completes its integrated schedule and includes logically linked sequences of activities, it will lack the information it needs to effectively monitor development progress, manage dependencies, and forecast the JPSS-1 satellite’s completion and launch. 
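The importance of complete network logic described above can be illustrated with a minimal forward-pass (critical path method) sketch. The activities, durations, and dependencies below are hypothetical, not JPSS data: the point is that the calculated finish date is only as good as the logic links, and a single missing dependency silently produces an incorrect forecast.

```python
def forecast_finish(durations, dependencies):
    """Forward-pass calculation of early start/finish dates (in working days).

    durations: {activity: duration_in_days}
    dependencies: {activity: [predecessor activities]}
    Returns {activity: (early_start, early_finish)}.
    """
    early = {}

    def visit(activity):
        if activity in early:
            return early[activity]
        predecessors = dependencies.get(activity, [])
        # An activity starts when its latest predecessor finishes.
        start = max((visit(p)[1] for p in predecessors), default=0)
        early[activity] = (start, start + durations[activity])
        return early[activity]

    for activity in durations:
        visit(activity)
    return early

# Hypothetical activities: build an instrument, test it, then integrate it.
durations = {"build": 30, "test": 10, "integrate": 15}

# With complete logic, integration cannot start until testing ends:
# the schedule correctly forecasts a finish on day 55.
linked = forecast_finish(durations, {"test": ["build"], "integrate": ["test"]})

# With the test -> integrate dependency missing, the same calculation
# wrongly forecasts integration finishing on day 15.
broken = forecast_finish(durations, {"test": ["build"]})
```

With roughly one-third of relationships missing, as in the preliminary integrated master schedule, many forecast dates are of the second kind.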
The Quality of JPSS-1 Component Schedules Is Inconsistent Our scheduling guidance identifies ten best practices that support four characteristics of a high-quality, reliable schedule—comprehensive, well-constructed, credible, and controlled. A comprehensive schedule includes all government and contractor activities, reflects resources (labor, materials, and overhead) needed to do the work, and realistically reflects how long each activity will take. A well-constructed schedule includes activities that are sequenced with the most straightforward logic possible, a critical path that represents a true model of the activities that drive the project’s earliest completion date, and total float that accurately depicts schedule flexibility. A credible schedule reflects the order of events necessary to achieve aggregated products or outcomes (horizontal traceability) and maps varying levels of the schedule to one another (vertical traceability). Also, a credible schedule includes data about risks and opportunities that are used to predict a level of confidence in meeting the project’s completion date. A controlled schedule is updated periodically by trained schedulers using actual progress and logic to realistically forecast dates for program activities and is compared against a designated baseline schedule to measure, monitor, and report the project’s progress. The JPSS program office is applying NASA’s schedule management handbook guidance to manage its schedules, which is largely consistent with our guidance on scheduling best practices. Table 7 provides more detail on the best practices and key characteristics of a reliable schedule. The quality of three selected component schedules supporting the JPSS-1 mission—VIIRS, the spacecraft, and the ground system—was inconsistent with respect to implementing the characteristics of a high-quality, reliable schedule. 
Each schedule had strengths and weaknesses with respect to sound scheduling practices, but VIIRS was a stronger schedule with fewer weaknesses compared to the ground system and spacecraft schedules. Since the reliability of an integrated schedule depends in part on the reliability of its subordinate schedules, schedule quality weaknesses in these schedules could transfer to an integrated master schedule derived from them. Table 8 identifies the quality of each of the selected JPSS-1 component schedules based on the extent to which they met ten best practices of high-quality and reliable schedules; the discussion that follows highlights observed strengths and weaknesses from each schedule. In addition, appendix III includes a more detailed assessment of each schedule against the ten best practices. Of the ten best practices, the ground system schedule minimally met two best practices, partially met two best practices, and substantially met six best practices. There were strengths in the ground schedule in that the contractor established a clear process for integrating information between the schedule and its resource management software and the contractor has performed resource leveling on the schedule. In addition, the contractor stated that people responsible for the activities estimated activity durations. Also, the contractor stated that it performs wellness checks on the quality of the schedule after each update to identify issues associated with missing logic or date constraints and provides a monthly status briefing to the JPSS program office that addresses the status of external schedule handoffs. However, there were also weaknesses in the ground schedule. For example, activities on the critical path with date constraints are preventing accurate calculations of the schedule’s total float, or flexibility. In order for the critical path to be valid, the activities on the critical path must also have reasonable total float. 
Without a critical path that accurately calculates schedule flexibility, the program office will not be able to provide reliable timeline estimates or identify when problems or changes may occur and their effect on downstream work. Moreover, while the contractor conducted a schedule risk analysis on the schedule, that analysis was for select near-term milestones rather than the readiness of the ground system for the launch of JPSS-1 and it did not include the risks most likely to delay the project. A schedule risk analysis should be conducted through the finish milestone and should include risk data to determine activities that most often end up on the critical path. Of the ten best practices, the spacecraft schedule partially met eight best practices and substantially met two best practices. There were strengths in the spacecraft schedule in that it was horizontally and vertically traceable; the contractor provided evidence of monthly progress updates to management, including status reporting of key milestones, handoffs, explanations of date changes, and an analysis of the critical and near-critical paths; the contractor conducted a schedule risk analysis; and the schedule included baseline dates of activities for comparisons of actual performance to date. However, there were also weaknesses in the spacecraft schedule. For example, the schedule had a low level of detail and included one-third of remaining activities with durations greater than 44 days, even after accounting for undefined and procurement-related activities. Activity durations should be reasonably short and meaningful and allow for discrete progress measurement. Durations longer than 2 months do not facilitate objective measurement of accomplished effort, and the milestone-to-detail-activity ratio does not allow for effective progress measurement and reporting. 
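The relationship between total float, the critical path, and date constraints can be sketched with a minimal forward- and backward-pass calculation. The three-activity network and the 10-day constraint padding below are hypothetical, not JPSS data: total float is an activity's late start minus its early start, the zero-float chain is the critical path, and an artificial finish-date constraint distorts every float value, masking which activities actually drive the finish date.

```python
def total_float(durations, deps, finish_constraint=None):
    """Compute total float per activity for a small schedule network.

    durations: {activity: duration_in_days}
    deps: {activity: [predecessor activities]}
    finish_constraint: optional hard project finish date (a date constraint).
    """
    # Forward pass: early start/finish.
    es, ef = {}, {}
    order = list(durations)  # activities assumed listed in dependency order
    for a in order:
        es[a] = max((ef[p] for p in deps.get(a, [])), default=0)
        ef[a] = es[a] + durations[a]
    project_finish = finish_constraint or max(ef.values())
    # Backward pass: late finish/start, driven by each activity's successors.
    succs = {a: [b for b in order if a in deps.get(b, [])] for a in order}
    lf, ls = {}, {}
    for a in reversed(order):
        lf[a] = min((ls[s] for s in succs[a]), default=project_finish)
        ls[a] = lf[a] - durations[a]
    # Total float: how far an activity can slip without delaying the finish.
    return {a: ls[a] - es[a] for a in order}

durations = {"design": 20, "fabricate": 40, "review": 5}
deps = {"fabricate": ["design"], "review": ["design"]}

# Unconstrained: design -> fabricate is the critical path (zero float);
# review has 35 days of float.
floats = total_float(durations, deps)

# A finish-date constraint 10 days beyond what the work requires adds 10
# days of float to every activity, so no activity shows zero float and the
# critical path can no longer be read from the float values.
padded = total_float(durations, deps, finish_constraint=70)
```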
As another example of a quality shortfall, the schedule was overly flexible with high float values that were not justified in schedule documentation. Specifically, 70 percent of remaining activities had more than 5 business weeks of float, including 67 activities that had over 1,000 days of float, meaning that these activities could slip approximately 3.5 years without affecting the project’s completion date. In order to establish reasonable total float, there should be documented justification for high float values in the schedule. Without this, it is unclear which float values are high due to factors accepted by management and which are due to incomplete logic or other issues. The VIIRS schedule partially met one best practice, substantially met seven best practices, and fully met two best practices. There were strengths in the VIIRS schedule in that the contractor established a clear process for integrating information between the schedule and resource management software, stated that durations were estimated by the people responsible for the activities based on work to be done, and justified in its schedule documentation activities with durations longer than 44 days. In addition, the contractor justified in schedule documentation the use of all date constraints, identified a valid driving path of activities for managing the program, and identified reasonable float values or justified them to the JPSS program office. Further, the contractor provided a schedule narrative accompanying each status update, which describes the status of key milestone dates (including the program finish date); explanations for changes in key dates; and a description of critical paths. However, there were also weaknesses in the VIIRS schedule. For example, the schedule had milestones that represented handoffs between contractor integrated product teams, but it did not include handoffs to the JPSS program office. 
In order to verify a schedule’s horizontal traceability, handoffs should link products and outcomes associated with other sequenced activities. Without this, there could be different expectations between management and activity owners. As another example, the contractor conducted a schedule risk analysis with a good schedule network and obtained three different duration estimates from subject matter experts. However, the duration estimates did not reflect risks from the project’s risk register and the analysis was focused only on activities on the critical path. This approach is flawed because activities that are not currently on the critical path could become critical as risks occur. The inconsistency in quality among the three schedules has multiple causes. Program and contractor officials explained that certain weaknesses have been corrected with updated schedules. In other cases, the weaknesses lacked documented explanation in part because the JPSS program office did not require contractors to provide such documentation. Based on program schedule documentation, the schedule management and reporting requirements varied across contractors without documented justification for tailored approaches, which may partially explain the inconsistency in practices among the schedules. Since the reliability of an integrated schedule depends in part on the reliability of its subordinate schedules, schedule quality weaknesses in these schedules will transfer to an integrated master schedule derived from them. Consequently, the extent to which there are quality weaknesses in JPSS-1 support schedules further constrains the program’s ability to monitor progress, manage key dependencies, and forecast completion dates. Until the program office addresses the scheduling shortfalls in its component schedules, the JPSS schedule will have lower quality and reduced reliability as a management tool for monitoring and forecasting satellite launch dates. 
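The point that activities off the planned critical path can become critical as risks occur, and that a risk analysis restricted to the planned critical path overstates confidence, can be illustrated with a simple Monte Carlo sketch. The two-path network, three-point duration estimates, and triangular distributions below are hypothetical illustrations, not the program's actual risk model.

```python
import random

def confidence_level_sketch(trials=20000, seed=7):
    """Monte Carlo schedule risk sketch for a hypothetical two-path network.

    Each path's duration is sampled from a triangular distribution built
    from three-point (low, most likely, high) estimates; the project
    finishes when the longer path does. Returns the 70th-percentile finish
    and the share of trials in which the nominally non-critical path
    drove the finish date.
    """
    rng = random.Random(seed)
    finishes = []
    other_path_drives = 0
    for _ in range(trials):
        # Path A is the planned critical path: 100 days most likely.
        a = rng.triangular(90, 115, 100)
        # Path B is shorter in the plan (95 days most likely) but carries
        # wider, risk-driven uncertainty.
        b = rng.triangular(85, 140, 95)
        if b > a:
            other_path_drives += 1
        finishes.append(max(a, b))
    finishes.sort()
    p70 = finishes[int(0.7 * trials)]  # 70th-percentile finish date
    return p70, other_path_drives / trials

p70, share = confidence_level_sketch()
```

In this sketch the nominally non-critical path ends up driving the finish date in a large share of trials, which is why an analysis that samples only planned critical-path activities, or that ignores risks from the risk register, tends to report a higher confidence level than the full network supports.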
Program Has Confidence in the JPSS-1 Schedule, but Its Assumptions Do Not Reflect Weaknesses in the Underlying Data According to our guidance on best practices in scheduling, a schedule risk analysis uses statistical techniques to predict a level of confidence in meeting a program’s completion date. This analysis focuses on key risks and how they affect the schedule’s activities. The analysis does not focus solely on the critical path because, with risk considered, any activity may potentially affect the program’s completion date. By relying on statistical simulations to randomly vary activity durations according to the probability of occurrence for certain durations and risks, the analysis seeks to develop a probability distribution of possible completion dates that reflect the program plan and enable an organization to match a date to its degree of risk tolerance. The JPSS program office has conducted a schedule risk analysis on the JPSS-1 mission schedule (and launch date) through NASA’s joint cost and schedule confidence level (JCL) process. The JCL implemented by the JPSS program office represents a best practice in schedule management for establishing a credible schedule and reflects a robust schedule risk analysis conducted on key JPSS-1 schedule components. For example, the analysis assessed the impacts of key risks from the risk register and how multiple duration estimates for activities, based on documented uncertainty distributions, could affect the schedule. Based on the results of the JCL, the program office reports that its level of confidence in the JPSS-1 schedule is 70 percent and that it has sufficient schedule reserve to maintain a launch date of no later than March 2017. However, the program office’s level of confidence in the JPSS-1 schedule may be overly optimistic for two key reasons. 
First, the model that the program office used was based on flight project activities rather than an integrated schedule consisting of flight, ground, program office, and other activities relevant to the development and launch of JPSS-1. As a result, the JPSS program office’s confidence level projections do not factor in the ongoing scheduling issues that are impacting the ground project. Had those issues been considered, the JPSS-1 confidence level would have been lower. Second, there are concerns regarding the spacecraft schedule’s quality as discussed in the previous section. Factoring in these concerns, the confidence of the JPSS-1 satellite’s schedule and projected launch date would be lower. We have previously reported that when using the JCL, NASA projects did not always include relevant cost and risk inputs. While program officials noted that they included key ground system risks in their calculations, they did not include ground system scope in the JCL because it was too difficult to allocate ground system components to individual missions. Moreover, officials stated that they do not plan to include ground project or program office activities in future JCL updates. While it may have been difficult to include ground system scope in the JCL, without this, the program’s schedule risk analysis and JCL do not reflect the full amount of work to be performed leading to JPSS-1 launch. Until the program office conducts a schedule risk analysis on an integrated schedule that includes the entire scope of effort and addresses quality shortfalls of relevant component schedules, it will have less assurance of meeting the planned March 2017 launch date for JPSS-1. NOAA Has Analyzed Alternatives for Addressing Gaps in Satellite Data, but Lacks a Comprehensive Contingency Plan While NOAA has identified multiple ways to help mitigate expected gaps in polar satellite data, it has not yet developed and implemented a comprehensive contingency plan. 
In October 2012, NOAA established a plan to address the impact of potential gaps in polar afternoon satellite data and contracted for a technical assessment that generated additional alternatives for the agency to consider. However, NOAA’s mitigation plan has shortfalls when compared to government and industry best practices. Moreover, NOAA intends to update its plan by fall 2013 by integrating alternatives generated from the contractor’s technical assessment. Until NOAA establishes a comprehensive contingency plan that addresses key shortfalls, it may not be positioned to effectively mitigate anticipated gaps in polar satellite coverage. NOAA Identified Multiple Ways to Mitigate Polar Satellite Data Gaps Polar satellites are essential to NOAA’s mission to understand and predict changes in climate, weather, oceans, and coasts. Satellite data gaps in the morning or afternoon polar orbits would lead to less accurate and timely weather forecasting; as a result, advanced warning of extreme events would be affected. In June 2012, we reported that while NOAA officials communicated publicly and often about the risk of a polar satellite data gap, the agency had not established plans to mitigate the gap. We recommended that NOAA establish mitigation plans for pending satellite gaps in the afternoon orbit as well as potential gaps in the early morning and midmorning orbits and NOAA agreed with the report’s recommendation. In October 2012, NOAA established a mitigation plan to address the impact of potential gaps in polar afternoon satellite data. This plan identifies alternatives for mitigating the risk of a 14- to 18-month gap in the afternoon orbit beginning in March 2016, between the current polar satellite and the JPSS-1 satellite. Key alternatives include utilizing different satellites as data sources and improving data assimilation in models. The plan also lists technical, programmatic, and management actions needed to implement these options. 
Table 9 provides an overview of NOAA’s polar satellite gap mitigation plan. However, NOAA did not implement the actions identified in its mitigation plan and decided to identify additional alternatives. In October 2012, at the direction of the Under Secretary of Commerce for Oceans and Atmosphere (who is also the NOAA Administrator), NOAA contracted for a detailed technical assessment of alternatives to mitigate the degradation of products caused by a gap in satellite data in the afternoon polar orbit. This assessment solicited input from experts within and outside of NOAA and resulted in the following alternatives:
- rely on DOD’s DMSP satellite;
- expand the use of radio occultation data, including funding the ground segment for a follow-on United States/Taiwan radio occultation mission;
- use atmospheric motion vectors (observed wind data);
- utilize future geostationary advanced imagery data;
- expand the use of aircraft observations;
- expand the use of targeted observations for high-impact events;
- implement a 4-dimensional hybrid data assimilation system (by adding a time dimension);
- improve data assimilation of cloud-impacted radiances;
- implement blends of global models, such as European and Canadian models;
- accelerate global model research to operations;
- sustain the use of high-latitude direct readout imagery; and
- rely on China’s future Feng Yun-3 satellite.
Moving forward, NOAA officials stated that they are currently considering the additional alternatives and that the agency intends to integrate a final set of alternatives into its existing mitigation plan by the fall of 2013. NOAA Does Not Yet Have a Comprehensive Contingency Plan Government and industry best practices call for the development of contingency plans to maintain an organization’s essential functions in the case of an adverse event. As a complement to risk mitigation, contingency planning includes strategies that attempt to reduce or control the impact of risks should they occur. 
These practices, identified by, for example, the National Institute of Standards and Technology and the Software Engineering Institute, include key elements such as defining failure scenarios, identifying and selecting strategies to address failure scenarios, developing procedures and actions to implement the selected strategies, testing the plans, and involving affected stakeholders. These elements can be grouped into categories, including (1) identifying failure scenarios and impacts, (2) developing contingency plans, and (3) validating and implementing contingency plans (see table 10). By documenting its mitigation plan and conducting a study on additional alternatives, NOAA has taken positive steps towards establishing a contingency plan for handling the potential impact of satellite data gaps in the afternoon polar orbit. However, NOAA does not yet have a comprehensive contingency plan because it has not yet selected the strategies to be implemented, or established procedures and actions to implement the selected strategies. In addition, there are shortfalls in the agency’s current plans as compared to government and industry best practices, such as not always identifying specific actions with defined roles and responsibilities, timelines, and triggers. Moreover, multiple steps remain in testing, validating, and implementing the contingency plan. The following table provides an assessment of the extent to which NOAA’s mitigation plan met contingency planning practices in three general categories. NOAA officials stated that the agency is continuing to work on refinements to its gap mitigation plan, and that they anticipate issuing an updated plan in fall 2013 that will reflect additional alternatives. While NOAA expects to update its plan, the agency does not yet have a schedule for adding key elements—such as specific actions, roles and responsibilities, timelines, and triggers—for each alternative. 
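As an illustration only, a completeness check of the kind these practices imply can be expressed as a simple audit over plan entries. The element names and plan entries below are simplified, hypothetical stand-ins for the best-practice elements (actions, roles and responsibilities, timelines, and triggers), not NOAA's actual plan.

```python
# Key elements that best practices call for in each contingency alternative.
REQUIRED_ELEMENTS = ("actions", "roles", "timeline", "trigger")

def missing_elements(alternatives):
    """Return {alternative: [missing elements]} for incomplete entries only."""
    return {
        name: [e for e in REQUIRED_ELEMENTS if not spec.get(e)]
        for name, spec in alternatives.items()
        if any(not spec.get(e) for e in REQUIRED_ELEMENTS)
    }

# Hypothetical plan: one fully specified alternative, one incomplete one.
plan = {
    "rely_on_dmsp": {
        "actions": "negotiate data access",
        "roles": "satellite operations office",
        "timeline": "before March 2016",
        "trigger": "loss of afternoon-orbit imagery",
    },
    "expand_radio_occultation": {
        "actions": "fund the follow-on ground segment",
    },
}

gaps = missing_elements(plan)
# Flags the incomplete alternative and lists its missing elements.
```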
Until NOAA establishes a comprehensive contingency plan that integrates its strategies and addresses the elements identified above to improve its plans, it may not be sufficiently prepared to mitigate potential gaps in polar satellite coverage. Conclusions While NOAA has made noteworthy progress over the past year in utilizing S-NPP data in weather forecasts and developing instrument and spacecraft components of the JPSS-1 satellite, the agency is facing challenges in its efforts to ensure sustained satellite observations. Specifically, NOAA does not expect to validate key S-NPP products until September 2014—nearly 3 years after the satellite’s launch. Also, the agency does not track the usage of its satellite products or obtain feedback on them, which limits the program’s ability to ensure that satellite products are useful. Further, the program experienced scheduling problems on its ground systems, which led to a delay in planned system upgrades. Until NOAA establishes a way to track which agencies are using its products and to obtain feedback on those products, the program office may continue to experience delays in delivering actionable S-NPP data to users. Almost 3 years after the JPSS program was established, it lacks a complete integrated master schedule. While program officials recently established a preliminary integrated master schedule, the schedule lacks proper linkage among dependent activities, which limits its ability to calculate dates and predict changes in the future. Further, the quality of component schedules varied for certain practices. These issues raise questions about the program’s 70 percent joint cost and schedule confidence level in the JPSS-1 launch date. Until the program office develops a complete integrated schedule and addresses weaknesses in component schedules, it will lack the information needed to effectively monitor development progress and ensure the planned JPSS-1 launch date. 
NOAA has taken steps to mitigate an anticipated gap in polar afternoon satellite data, but its efforts are incomplete. Specifically, the agency has not yet established a comprehensive contingency plan that identifies specific actions with defined roles and responsibilities, timelines, and triggers for contingency strategies. Moreover, the agency’s recent assessment of a larger set of alternatives has not yet been integrated with its mitigation plans. As a result, the agency faces important decisions as to whether and how the various alternatives should be carried out. While NOAA plans to add alternatives to its mitigation plan by fall 2013, it does not yet have plans to add the other key components. Until NOAA establishes a comprehensive contingency plan that addresses these shortfalls, its plan for mitigating potential gaps in the polar orbit may not be effective in avoiding significant impacts to NOAA’s weather mission. Recommendations for Executive Action Given the importance of having reliable schedules for managing JPSS satellite launch dates and the significance of polar-orbiting satellite data to weather forecasts, we recommend that the Secretary of Commerce direct the Administrator of NOAA to:
- track the extent to which key groups of satellite data users are using S-NPP and JPSS products, and obtain feedback on these products;
- establish a complete JPSS program integrated master schedule that includes a logically linked sequence of activities;
- address the shortfalls in the ground system and spacecraft component schedules outlined in this report;
- after completing the integrated master schedule and addressing shortfalls in component schedules, update the joint cost and schedule confidence level for JPSS-1, if warranted and justified; and
- establish a comprehensive contingency plan for potential satellite data gaps in the polar orbit that is consistent with contingency planning best practices identified in this report. 
The plan should include, for example, specific contingency actions with defined roles and responsibilities, timelines, and triggers; analysis of the impact of lost data from the morning orbits; and identification of opportunities to accelerate the calibration and validation phase of JPSS-1. Agency Comments We sought comments on a draft of our report from the Department of Commerce and NASA. We received written comments from Commerce transmitting NOAA’s comments. NOAA concurred with all five of our recommendations and identified steps that it is taking to implement them. It also provided technical comments, which we have incorporated into our report, as appropriate. NOAA’s comments are reprinted in appendix IV. NASA did not provide comments on the report’s findings or recommendations, but noted that it would provide any input it might have to NOAA for inclusion in NOAA’s comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We are sending copies of this report to interested congressional committees, the Secretary of Commerce, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) evaluate the National Oceanic and Atmospheric Administration’s (NOAA) progress in meeting the Joint Polar Satellite System (JPSS) program’s objectives of sustaining the continuity of NOAA’s polar-orbiting satellite system through the Suomi National Polar-orbiting Partnership (S-NPP) and JPSS satellites, (2) evaluate the quality of the JPSS program schedule, and (3) assess NOAA's plans to address potential gaps in polar satellite data. To evaluate NOAA's progress in meeting JPSS program objectives, we assessed (1) the status of activities supporting the operational S-NPP satellite, (2) progress on efforts to develop the JPSS-1 satellite, and (3) recent changes in JPSS program scope. A more detailed description of our activities in each of these areas follows. S-NPP progress: We reviewed monthly program reports to identify the status of key upgrades to the ground system supporting S-NPP and the efforts to transition operational control of the satellite to NOAA. In addition, we compared the program’s current estimated completion dates for S-NPP products to original program estimates for when the products would be available for operational use. We compared program office information on the extent to which S-NPP products were being used to best practices in evaluating the use of completed products. We also interviewed program officials about algorithm maturity and the extent to which users are using S-NPP products. JPSS-1 progress: We analyzed plans and reports on system development efforts for the JPSS-1 satellite. Specifically, we reviewed the JPSS-1 mission preliminary design review package to assess completion of work on the instruments, spacecraft, and ground system as well as cost, schedule, and technical performance for the JPSS-1 satellite. 
We also examined JPSS program office monthly status reports on system development progress to identify variances and corrective actions being taken to address the most critical issues and risks to the program. We interviewed JPSS program officials to discuss system development status. We assessed the reliability of reported milestone dates for top-level milestones by examining multiple project status reports at different points in time for consistent reporting of dates or explanations of any changes and compared reported dates to source schedule data. We determined that the milestone data were sufficiently reliable for our reporting purposes. Changes in JPSS program scope: We compared the program’s requirements as of September 2011 to the program’s updated plans and requirements as of May 2013 to identify key changes and to assess whether changes in capabilities have impacted program goals and objectives. We interviewed program officials about changes in the JPSS program’s scope. We assessed the reliability of the program’s estimated savings from program scope changes by comparing them to program documentation on prior and current cost estimates and found that the estimates were sufficient for our purposes. To evaluate the quality of NOAA's program schedule, we used an exposure draft of GAO’s Schedule Assessment Guide to assess schedule management practices and characteristics of selected contractor schedules. We selected and analyzed three component contractor schedules—the ground system, the spacecraft, and the Visible/Infrared Imager/Radiometer Suite instrument—because these schedules represented the flight critical path and the entire ground system development schedule, which were either already driving or likely to drive the JPSS-1 satellite launch date. 
We also analyzed schedule metrics as a part of that analysis to highlight potential areas of strength and weakness in, among other things, schedule logic, use of resources, task duration, float, and task completion. In order to assess each schedule against the ten best practices, we traced and verified underlying support and determined whether the program office or contractor provided no evidence, a small portion, about half, a large portion, or complete evidence that satisfied the criterion, and assigned a corresponding score depicting that the practices were not met, minimally met, partially met, substantially met, or fully met. By examining the schedules against our guidance, we conducted a reliability assessment on each of the schedules and incorporated our findings on reliability limitations in the analysis of each component schedule. We reviewed documentation on a schedule risk assessment the JPSS program office conducted on JPSS-1 flight project schedules to identify assumptions and results of its analysis and to assess the reliability of the reported JPSS joint cost and schedule confidence level. We interviewed government and contractor officials to discuss reasons for observed shortfalls in schedule management practices. We determined that the schedules were sufficiently reliable for our reporting purposes and our report notes the instances where reliability concerns affect the quality of the schedules as well as the program’s schedule risk assessment. To assess plans to address potential gaps in polar satellite data, we reviewed NOAA’s October 2012 polar satellite gap mitigation plan and a subsequent technical assessment as well as NOAA’s plans for implementing recommendations from the assessment. We compared elements of the plan and assessment against best practices developed from leading government and industry sources such as the National Institute of Standards and Technology, the Software Engineering Institute’s Capability Maturity Model® Integration, and our prior report. 
Based on that analysis, we identified shortfalls in NOAA’s current plans as well as key remaining activities for the agency to accomplish. We interviewed NOAA headquarters staff and JPSS program officials about the technical assessment and their plans. We performed our work at NASA and NOAA offices in the Washington, D.C. area. We conducted this performance audit from October 2012 through September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: NOAA Plans to Transfer Selected JPSS Program Components to the Polar Free Flyer Program
In order to reduce Joint Polar Satellite System (JPSS) program costs and increase the program’s focus on its weather mission, the National Oceanic and Atmospheric Administration (NOAA) plans to transfer key program components to a separate program, called the Polar Free Flyer program. After establishing JPSS in 2010, NOAA committed to developing three units of the Total and Spectral Solar Irradiance Sensor (TSIS) and to finding a spacecraft and launch accommodation for three instruments that would not be on the JPSS satellite: TSIS, the Advanced Data Collection System (A-DCS), and the Search and Rescue Satellite-Aided Tracking (SARSAT) system. As of June 2012, the JPSS program planned to launch two stand-alone satellites (called free flyers) to accommodate two suites of these instruments.
However, NOAA recently made several decisions that affect these commitments, and expects to finalize these plans by the end of September 2013: NOAA plans to transfer responsibility for developing TSIS and accommodating the launch of the three instruments out of the JPSS program and into a newly established Polar Free Flyer program. According to JPSS program officials, a transition plan for the new program is under review and selected staff positions have been filled. The Polar Free Flyer program will deliver a single free flyer mission instead of the two missions planned under the JPSS program. NOAA will transfer the responsibility for developing the second TSIS instrument to the National Aeronautics and Space Administration (NASA), rely on an Air Force Global Positioning System mission to continue SARSAT coverage, and find a launch vehicle to accommodate an additional A-DCS instrument. NOAA plans to use the JPSS ground system to support the Polar Free Flyer program. The JPSS program plans to award a contract in fiscal year 2014 for a spacecraft that is to accommodate the TSIS, A-DCS, and SARSAT instruments. The three instruments are in development and testing, and are expected to be delivered to the satellite by 2015. The planned launch readiness date for the free flyer mission was originally July 2016, but that date may change pending the outcome of the spacecraft contract award. Also, the program is looking to share a launch vehicle with another mission to reduce launch costs. However, the program office is not aware of any ride-sharing opportunities that could accommodate the mission’s planned launch readiness date.
Appendix III: Assessment of JPSS Component Schedules
Implementation of Best Practices in Scheduling
The following tables identify detailed assessments of the extent to which three component schedules supporting the JPSS-1 schedule met the ten best practices and four characteristics of a high-quality, reliable schedule.
Table 12 provides an assessment of the ground system contractor’s schedule, which integrates activities from seven components of the ground system; table 13 provides an assessment of the spacecraft contractor’s detailed schedule; and table 14 provides an assessment of the VIIRS contractor’s detailed schedule. The following information describes the key that we used in tables 12 through 14 to convey the results of our assessment of the schedules’ consistency with an exposure draft of GAO best practices for schedule management.
● Met: The program office or contractor provided complete evidence that satisfies the entire criterion.
◕ Substantially met: The program office or contractor provided evidence that satisfies a large portion of the criterion.
◑ Partially met: The program office or contractor provided evidence that satisfies about half of the criterion.
◔ Minimally met: The program office or contractor provided evidence that satisfies a small portion of the criterion.
○ Not met: The program office or contractor provided no evidence that satisfies any portion of the criterion.
Comprehensive (Capturing all activities): The schedule largely reflects the statement of work. However, the schedule only partially reflects the work breakdown structure and includes 40 activities that are marked as both summary activities and milestones. The contractor has established a clear process for integrating information between the schedule and the resource management software. However, resource leveling has been performed outside of the schedule, which limits the effectiveness of the process. According to the contractor, durations were estimated by the people responsible for the activities based on work to be done. Additionally, calendars were used to specify valid working times for all activities. However, over 35 percent of the activities in the schedule were of long duration, and only half of these were justified in schedule documentation.
Well-constructed (Sequencing all activities): A majority of the activities in the schedule had dependencies, and the schedule’s relationships were largely finish-to-start. However, program officials did not justify in schedule documentation the small number of activities with missing dependencies, date constraints, and lags. The critical path and driving path are not fully valid because they are not free of long activities, constraints, and lags. Moreover, considering the schedule as a whole, the schedule software may not be calculating the true critical path of the project because of the use of more than 800 constraints. These constraints may result in float values that present an unrealistic view of the critical path. According to contractor officials, float values have been assessed as part of regularly scheduled health checks, and they have determined that for certain cases float values are necessarily high. However, not all float values calculated by the schedule are reasonable, and many values do not accurately reflect true schedule flexibility. Additionally, the JPSS program office did not provide a documented assessment of total float values that appear to be excessive to show that the team agrees with the logic and that the float is consistent with the plan. The schedule is vertically traceable in all but one of the milestones that we reviewed, meaning that it allows activity owners to trace activities to higher-level milestones with intermediate and summary schedules. However, the schedule is not fully horizontally traceable—that is, although the schedule includes giver/receiver milestones that are defined in the schedule documentation, the schedule was not always affected by activities whose durations were extended by hundreds of days.
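The scheduling concepts assessed here, forward and backward passes, total float, and the critical path, can be illustrated with a short sketch. The activity network and durations below are hypothetical, not drawn from the JPSS schedules:

```python
# Minimal critical path method (CPM) sketch: the forward pass computes early
# start/finish dates, the backward pass computes late start/finish dates, and
# total float is late start minus early start. Activities with zero total
# float form the critical path. The network here is hypothetical.

durations = {"A": 5, "B": 3, "C": 8, "D": 2, "E": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
order = ["A", "B", "C", "D", "E"]  # already topologically sorted

early_start, early_finish = {}, {}
for act in order:  # forward pass
    early_start[act] = max((early_finish[p] for p in predecessors[act]), default=0)
    early_finish[act] = early_start[act] + durations[act]

project_finish = max(early_finish.values())
successors = {a: [b for b in order if a in predecessors[b]] for a in order}

late_finish, late_start = {}, {}
for act in reversed(order):  # backward pass
    late_finish[act] = min((late_start[s] for s in successors[act]), default=project_finish)
    late_start[act] = late_finish[act] - durations[act]

total_float = {a: late_start[a] - early_start[a] for a in order}
critical_path = [a for a in order if total_float[a] == 0]
print(project_finish, total_float, critical_path)
```

Date constraints, such as the more than 800 found in the ground system schedule, override this logic-driven calculation, which is one reason a heavily constrained schedule can report float values that do not reflect true schedule flexibility.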
The contractor conducted a schedule risk analysis with a schedule network that partially meets the characteristics associated with a good schedule network, as well as three-point duration estimates that were captured from control account managers. However, the analysis was conducted for select near-term milestones, not for the readiness of the ground system for the launch of JPSS-1. Additionally, the analysis did not identify the risks most likely to delay the project, the paths or activities that are most likely to delay the project, or the activities that most often ended up on the critical path. Responsibility for changing the schedule has been assigned to someone who has the proper training and experience in critical path method scheduling, and the schedule is free of clearly erroneous progress information. However, although the contractor provides a monthly program management briefing that addresses the status of external giver/receiver activities, it does not address the status of key milestone dates, changes in network logic, or critical paths. A baseline schedule exists and is compared to the current schedule to track variances from the plan. According to contractor officials, a formal change control process is used to make changes to the baseline. However, the contractor’s rolling wave reports do not satisfy all elements of a baseline schedule document. A baseline schedule document is a single document that describes, among other things, the organization of the integrated master schedule (IMS); the logic of the network; the basic approach to managing resources; the schedule’s unique features; and justification for lags, date constraints, and long activity durations. The schedule reflects the work necessary to build the spacecraft, and schedule activities are mapped to the contract data requirements list and contractor work breakdown structure numbers.
The schedule contains a low level of detail, which reflects the contractor’s role as integrator for multiple vendors in a fixed-price environment. However, with a nearly 1:1 ratio of detail activities to milestones, the schedule would benefit from increased detail into work activities. The contractor has established a clear process for integrating information between the schedule and the resource management software. However, resource leveling has been performed outside of the schedule, which limits the effectiveness of the process. The contractor has experience in developing spacecraft similar to JPSS-1, including S-NPP. Contractor officials stated that they obtained duration estimates for activities from the engineers who were responsible for them, while other engineers conducted peer reviews on those estimates. However, durations in general appear too long to facilitate objective measurement of accomplished effort. Even accounting for procurement-related activities and level-of-effort type recurring meeting activities, one-third of all remaining activities are longer than 2 business months. The schedule was partially logically sequenced. Approximately 20 percent of all remaining activities and milestones were missing predecessor links, successor links, or both. Officials stated that many of these activities were related to contract data requirements list deliveries and internal or external handoffs (called givers/receivers). We found other areas of questionable sequencing logic. For instance, about 10 percent of remaining activities in the schedule have lags and leads, including some instances of leads with start-to-finish logic—a particularly abnormal logical relationship. We also found date constraints pervasive throughout the schedule: 140 activities have soft constraints and 17 have hard constraints.
Hard constraints are useful for calculating the amount of float available in the schedule and, therefore, the realism of the required project finish date and available resources during schedule development. However, they may be abused if they force activities to occur on specific dates that are determined off-line without much regard for the realism of the assumptions necessary to achieve them. The schedule defines activities with zero total float as critical. However, partly because of logic issues, the critical path as calculated by the scheduling software was convoluted and most likely unreliable. The path includes lags, leads, long-duration activities, and activities with hard constraints, which by definition will appear as critical. Officials stated they agreed that software-calculated critical paths cannot be relied upon in a complex schedule, and said they report the longest (or driving) path to management. Ideally, the critical path and the longest path should be the same, but our analysis found the longest path to be somewhat different than the default critical path; it does not include several activities that appeared on the critical path because of their date constraints. In addition, the longest path also includes several near-term, nonprocurement-related activities with long durations, spanning between 84 and 365 days.
Ensuring reasonable total float: Officials stated that the total float values calculated by the schedule accurately reflect true schedule flexibility. However, we found that the schedule appears overly flexible due to high amounts of total float: 70 percent of remaining activities and milestones have greater than 30 days (about 5 business weeks) of total float. This includes 67 activities (8 percent of remaining) with over 1,000 days of float, meaning these activities can slip more than 3.5 business years before impacting the planned finish date of the project.
Without documented justification for high float values in the schedule, it is not clear which are explained by milestones without successors, which are due to schedule maintenance, and which are due to incomplete logic. The schedule is vertically traceable, with dates in the detail schedule mapping to higher-level management briefing charts. The schedule is generally horizontally traceable. The schedule clearly identifies givers and receivers, and negative total float calculations respond appropriately when significant delays are introduced into the network. However, negative float is calculated because key milestones are constrained. While the negative float may be an accurate assessment of potential delay, management may not be aware of potential delays when constrained dates are reported in summary-level schedules. Officials stated that they follow an internal process to perform schedule risk analyses on the schedule. Officials also stated that three-point durations are applied to activities, correlation is accounted for, and a Monte Carlo analysis is run on the schedule to derive probabilities for forecasted dates. Although the contractor has no contractual requirement to share schedule risk analysis results with the JPSS program office, it provided a summary of its risk assessment report and instructions. However, this summary information did not include supporting details such as risk data inputs and data normalization techniques, and the contractor did not incorporate correlation or perform the schedule risk analysis on a logically sound (well-constructed) schedule. Schedule progress is updated monthly and the schedule is delivered to the JPSS program office in accordance with contractual requirements.
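A schedule risk analysis of the kind described above, three-point duration estimates fed into a Monte Carlo simulation to derive probabilities for forecasted dates, can be sketched as follows. The network and estimates are hypothetical, and the sketch deliberately omits correlation between activities, one of the shortfalls noted in this assessment:

```python
# Minimal Monte Carlo schedule risk analysis sketch: each activity gets a
# three-point (optimistic, most likely, pessimistic) duration estimate,
# modeled here as a triangular distribution. Durations are sampled
# independently (no correlation), the network is evaluated each trial,
# and the distribution of finish dates yields confidence levels.
import random

random.seed(42)

# Hypothetical network: A -> (B and C in parallel) -> D
three_point = {  # (optimistic, most likely, pessimistic) in days
    "A": (4, 5, 9),
    "B": (2, 3, 6),
    "C": (6, 8, 14),
    "D": (1, 2, 4),
}

def one_trial():
    # random.triangular takes (low, high, mode)
    d = {a: random.triangular(lo, hi, ml) for a, (lo, ml, hi) in three_point.items()}
    finish_a = d["A"]
    finish_bc = finish_a + max(d["B"], d["C"])  # merge point of parallel paths
    return finish_bc + d["D"]

finishes = sorted(one_trial() for _ in range(10_000))
p70 = finishes[int(0.70 * len(finishes))]  # 70 percent confidence finish
deterministic = 5 + max(3, 8) + 2          # most-likely durations only
print(round(p70, 1), deterministic)
```

Because three-point estimates are typically right-skewed and parallel paths merge, the simulated 70 percent confidence date exceeds the finish computed from most-likely durations alone, which is why a stated confidence level is more informative than a single-point schedule date, but only as reliable as the underlying network logic.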
While a formal schedule narrative does not accompany the schedule delivery to the government, much of the narrative information—such as the status of key milestones and handoffs, explanations for changes in key dates, and an overview of critical and near-critical paths—is conveyed in monthly management meetings. However, 26 activities had start or finish dates in the past. Of these, 12 activities could be explained by obsolete scope of work. We also found 12 out-of-sequence activities, representing 13 percent of in-progress activities. Contractor officials stated that they maintain schedule baseline information in the default baseline fields in the schedule, and we found that baseline dates were set in the schedule. However, a schedule baseline document was not created for the schedule baseline. We found 104 activities in the schedule without baseline dates, 72 of which are complete or are planned to start by 2014. The majority of start variances appear reasonable, but we did find start variances ranging from -221 days (221 days ahead of schedule) to 237 days (237 days delayed). Despite the significant variances noted, it is commendable that the schedule includes baseline information that allows for analysis and monitoring of date variances. The schedule largely reflects the work breakdown structure and statement of work. However, the schedule does not reflect work to be performed by a subcontractor and includes 10 activities that are marked as both summary activities and milestones. The contractor has established a clear process for integrating information between the schedule and the resource management software. However, resource leveling has been performed outside of the schedule, which limits the effectiveness of the process.
According to the contractor, durations were estimated by the people responsible for the activities based on work to be done, realistic assumptions about available resources, productivity, normal interferences and distractions, and reliance on others. Further, the contractor justified in its schedule documentation virtually all activities with durations longer than 44 days. All but one activity in the schedule has at least one predecessor and one successor, and that activity was justified in the schedule documentation. Additionally, every schedule date constraint was justified in schedule documentation. However, the schedule has a very small number of activities with dangling logic. Further, although explanations were provided for most of the small number of lags, the explanations did not justify their use. Program office and contractor officials use the driving path to manage the program, which is preferred because it represents the activities that are driving the sequence of start dates directly affecting the estimated finish date. However, the driving path and the critical path to key milestones should be the same, and they are not. Also, the critical path is not valid because it contains level-of-effort activities. The program office has defined reasonable float values, and the values associated with the schedule largely fit that definition. For those float values that were not reasonable, the program office provided a documented assessment of those values to show that the team agrees with the logic and that the float is consistent with the plan. However, the schedule has a small number of activities that have unrealistic float values. The schedule is largely horizontally traceable. In particular, the schedule is affected by activities whose durations are extended by hundreds of days, and it includes giver/receiver milestones that represent handoffs between contractor integrated project teams.
However, the schedule does not include all givers/receivers between the contractor and the program office. Additionally, the schedule is vertically traceable. Specifically, it allows activity owners to trace activities to higher-level milestones with intermediate and summary schedules. A schedule risk analysis was conducted with a good schedule network and three-point duration estimates that were captured from subject matter experts. However, the duration estimates did not reflect risks from the project’s risk register, and the analysis was focused on only the deterministic critical path and near-critical path. Responsibility for changing or updating the schedule has been assigned to someone who has the proper training and experience in critical path method scheduling. Additionally, the schedule is free of clearly erroneous progress information. Further, the contractor provides a schedule narrative accompanying each status update, which describes the status of key milestone dates (including the program finish date); explanations for changes in key dates; and a description of the critical paths. A baseline schedule exists and is compared to the current schedule to track variances. However, the contractor did not have a baseline schedule document. A baseline schedule document is a single document that describes, among other things, the organization of the IMS; the logic of the network; the basic approach to managing resources; the schedule’s unique features; and justification for lags, date constraints, and long activity durations.
Appendix IV: Comments from the Department of Commerce
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact
David A. Powner, (202) 512-9286 or pownerd@gao.gov.
Staff Acknowledgments
In addition to the contact named above, Colleen Phillips (Assistant Director), Paula Moore (Assistant Director), Shaun Byrnes, Juaná Collymore, Lynn Espedido, Kate Feild, Nancy Glover, Franklin Jackson, Kaelin Kuhn, Jason Lee, Joshua Leiling, and Maria Stattel made key contributions to this report.
NOAA established the JPSS program in 2010 to replace aging polar satellites and provide critical environmental data used in forecasting weather and measuring variations in climate. However, program officials anticipate a gap in satellite data between the time the S-NPP satellite reaches the end of its life and the time the JPSS-1 satellite becomes operational. Given the criticality of satellite data to weather forecasts, the likelihood of a significant satellite data gap, and the potential impact of a gap on the health and safety of the U.S. population and economy, GAO added this issue to its High Risk List in 2013. GAO was asked to review the JPSS program because of the importance of polar satellite data. GAO's objectives were to (1) evaluate NOAA's progress in sustaining the continuity of NOAA's polar-orbiting satellite system through the S-NPP and JPSS satellites; (2) evaluate the quality of NOAA's program schedule; and (3) assess NOAA's plans to address potential gaps in polar satellite data. To do so, GAO analyzed program management status reports, milestone reviews, and schedule data; examined polar gap contingency plans; and interviewed agency and contractor officials. The National Oceanic and Atmospheric Administration (NOAA) has made noteworthy progress on the Joint Polar Satellite System (JPSS) program by delivering data from its first satellite--the Suomi National Polar-orbiting Partnership (S-NPP)--to weather forecasters, completing significant instrument development for the next satellite (called JPSS-1), and reducing the program's life cycle cost estimate from $12.9 billion to $11.3 billion by refocusing on weather products. However, key challenges remain. Specifically, S-NPP has not yet achieved full operational capability because the program is behind schedule in validating the readiness of satellite products. Also, the program does not track whether key users are using its products or if the products meet the users' needs.
In addition, issues with the JPSS ground system schedules have delayed the delivery of key system capabilities. Until the program addresses these challenges, it may continue to experience delays in delivering actionable S-NPP data to system users and in meeting JPSS-1 development schedules. A program's success depends in part on having an integrated master schedule that defines when and how long work will occur and how activities are related to each other; however, the JPSS program office does not yet have a complete integrated master schedule, and weaknesses exist in component schedules. Specifically, the program established an integrated master schedule in June 2013 and is reporting a 70 percent confidence level in the JPSS-1 launch date. However, about one-third of the program schedule is missing information needed to establish the sequence in which activities occur. In addition, selected component schedules supporting the JPSS-1 satellite have weaknesses, including schedule constraints that have not been justified. Until the program completes its integrated schedule and addresses weaknesses in component schedules, it will lack the information needed to effectively monitor development progress and have less assurance of meeting the planned JPSS-1 launch date. While NOAA developed a mitigation plan in October 2012 to address a potential 14- to 18-month gap in afternoon polar satellite data and subsequently identified additional alternatives for addressing potential gaps, it has not yet established a comprehensive contingency plan. Specifically, NOAA has not yet revised its mitigation plan to include the new alternatives, and the plan lacks several key elements, such as triggers for when to take key actions and detailed procedures for implementing them. Until NOAA establishes a comprehensive plan, it may not be sufficiently prepared to mitigate anticipated gaps in polar satellite coverage.
Background
USAID’s cash-based food assistance program started in 2008 under the management of its Office of Foreign Disaster Assistance. In June 2010, management of the program was transferred to FFP. In 2014, FFP provided funding for cash and voucher programs in 28 countries, as shown on the map in figure 1. EFSP projects are implemented in some countries—for example, Somalia, Syria, and Yemen—with areas considered high risk based on security risk scores that the United Nations uses to assess the overall level of threat or danger.
U.S. Cash-Based Food Assistance Has Significantly Increased Since 2010, and the Majority of Aid Went to the Syria Region
In fiscal years 2010 through 2014, USAID awarded EFSP grants totaling about $991 million for cash-based food assistance. The following observations are shown in figure 2:
Obligations for cash-based EFSP projects grew from $75.8 million in fiscal year 2010 to $409.5 million in fiscal year 2014—an increase of 440 percent over the 5-year period, the majority of which was in response to a large and sustained humanitarian crisis in Syria.
Of the $409.5 million awarded by EFSP in fiscal year 2014, $272.7 million (67 percent) was for the humanitarian crisis in Syria, including cash-based food assistance to Syrian refugees in the Syria region.
WFP was the implementing partner for $331.6 million (81 percent) of total EFSP obligations in fiscal year 2014, while NGOs and others were implementing partners for the remaining $77.8 million (19 percent).
Of the $991 million in total grant funding obligated in fiscal years 2010 to 2014, $330.6 million was for cash interventions and $660.3 million for voucher interventions. The majority of the funding—$621.7 million (or 63 percent)—was awarded to WFP, and $369.3 million (or 37 percent) was awarded to other implementing partners.
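The growth and share percentages above follow from simple arithmetic on the quoted obligations; a quick check (figures in millions of dollars, taken from the report text) reproduces them:

```python
# Percentage-change and share calculations behind the EFSP funding figures
# quoted above (dollar amounts in millions, as reported).
fy2010, fy2014 = 75.8, 409.5
growth = (fy2014 - fy2010) / fy2010 * 100   # percent increase, FY2010 to FY2014
wfp_share = 331.6 / fy2014 * 100            # WFP share of FY2014 obligations
syria_share = 272.7 / fy2014 * 100          # Syria share of FY2014 obligations
print(round(growth), round(wfp_share), round(syria_share))  # 440 81 67
```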
International Donors Have Recognized the Benefits of Cash-Based Food Assistance and Have Increasingly Supported It
Donors have recognized the potential benefits of cash-based assistance under certain conditions and have increased funding to support it. According to donor representatives, implementing partners, and academics, cash-based assistance can improve the food security of recipients in a more efficient manner than in-kind food aid. Targeted cash transfers or food vouchers can be distributed more quickly than food delivery, which requires procuring and shipping food commodities, a complex and lengthy process. The costs associated with cash-based assistance might also be less than the cost of shipping food commodities from the United States to recipient countries. Additionally, according to some donor representatives and implementing partners, cash-based assistance can have the benefit of providing recipients with flexibility and dignity to choose the type of food they want to eat (see fig. 3). Furthermore, by increasing demand for food commodities through cash or vouchers, cash-based assistance can stimulate the local economy and support local producers and merchants. WFP has seen its cash and voucher programs significantly increase from $139 million in 2010 to $1.37 billion in 2014—with the largest increases occurring between 2012 and 2014, owing primarily to the civil war in Syria. In 2014, funding for cash and voucher programs for the Syria regional emergency operation accounted for $836 million, or about 61 percent of WFP’s overall funding for cash and voucher programs (see fig. 4). However, other cash and voucher programs, excluding those for Syria, also experienced substantial increases over the same years, from $139 million in 2010 to $531 million in 2014.
USAID’s Implementing Partners Use a Range of Mechanisms to Deliver Cash-Based Food Assistance
To deliver cash-based food assistance, USAID’s implementing partners employ a variety of mechanisms ranging from direct distribution of cash in envelopes to the use of information technologies such as cell phones and smart cards to redeem electronic vouchers or access accounts established at banks or other financial institutions. These assistance delivery mechanisms can be grouped into four types—two cash transfer mechanisms that provide money to targeted households with no restrictions on how or where the money is to be used, and two voucher-based mechanisms that entitle the holder to buy goods or services up to the cash value written on the voucher, typically for the purchase of approved items from participating vendors (see fig. 5). The cash and voucher transfers can be either (1) conditional transfers, where certain requirements are imposed on beneficiaries, such as their participation in community work programs or attending training or going to school; or (2) unconditional transfers, whereby no requirements on beneficiaries are made, and the assumption is that beneficiaries will use the cash or vouchers to obtain food based on a household assessment of food access and availability. The value of cash and voucher transfers is generally based on a formula that attempts to bridge the gap between people’s food needs and their capacity to cover them.
Financial Oversight for Cash-Based Food Assistance Entails Assessing Risks and Implementing Control Activities
Financial oversight in cash-based food assistance programs includes managing program funds to ensure they are spent in accordance with grant agreements by, among other things, assessing financial risks and implementing controls to mitigate those risks, including controls to prevent theft and diversion of cash, counterfeiting of vouchers, and losses.
In recent years, for example, implementing partners have been increasingly piloting the use of technology that they deem to have the additional benefit of mitigating some potential risks by better tracking beneficiaries and their purchases (see fig. 6). Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control in federal programs. In addition, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) has issued an internal control framework that, according to COSO, has gained broad acceptance and is widely used around the world. Both frameworks include the five components of internal control: control environment, risk assessment, control activities, information and communication, and monitoring. Internal control generally serves as a first line of defense in safeguarding assets, such as cash and vouchers. In implementing internal control standards, management is responsible for developing the detailed policies, procedures, and practices to fit the entity’s operations and to ensure they are built into and are an integral part of operations.
USAID Has Developed Processes for Awarding EFSP Funds, but It Lacks Guidance for Staff on Modifying Awards and for Partners on Responding to Changing Market Conditions
USAID has developed processes for awarding cash-based food assistance grants; however, it lacks formal internal guidance for its process to approve award modifications and provides no guidance for partners on responding to changing market conditions that might warrant an award modification. USAID awards new cash-based food assistance grants through either a competitive proposal review or an expedited noncompetitive process.
We reviewed 22 proposals for new cash-based food assistance projects that were awarded and active as of June 1, 2014; we found that USAID made 13 of these awards through its competitive process, 7 through an abbreviated noncompetitive review, and 2 under authorities allowing an expedited emergency response. According to USAID officials, USAID follows a similar process in reviewing requests to modify ongoing awards, which implementing partners may propose for a variety of reasons, such as an increase in the number of beneficiaries within areas covered by an award or a delay in completing cash distributions. For our four case study countries, we reviewed 13 grant agreements made from January 2012 to June 2014 that had 41 modifications during that period, including 20 cost modifications that resulted in the total funding amount for the 13 grants increasing from about $91 million to about $626 million, an increase of about 591 percent. According to USAID, its draft internal guidance for modifying awards is under review and will be incorporated into formal guidance in the future, but it could not provide a time frame for completing that process. In the absence of formal guidance on that process, USAID cannot hold its staff and its partners accountable for taking all necessary steps to justify and document the modification of awards. Additionally, we found that although USAID requires that partners implementing cash-based food assistance monitor market conditions, USAID does not provide clear guidance about how to respond when market conditions change—for example, when and how partners might adjust levels of assistance that beneficiaries receive. Without such guidance, USAID runs the risk of beneficiaries’ benefits being eroded by price increases or, if prices decrease, of partners’ using scarce project funding inefficiently.
USAID’s Process for Awarding New Cash-Based Food Assistance Projects Was Consistent with Its Policies and Procedures USAID Program Guidance Outlines the EFSP Competitive Proposal Review Process and Noncompetitive Exceptions USAID outlines its process for reviewing and deciding to fund proposals for cash-based food assistance projects in the Annual Program Statement (APS) for International Emergency Food Assistance (see fig. 7). According to USAID, the APS functions as guidance on cash-based programming by describing design and evaluation criteria for selecting project proposals and explaining the basic steps in the proposal review process. The APS also serves as a primary source of information for prospective applicants that apply for emergency food assistance awards using EFSP resources. While Title II in-kind food aid resources represent the majority of USAID’s emergency food assistance funding, USAID’s policy is that EFSP resources may be used when one or more of the following conditions apply: 1. Local and regional procurement, cash-based food assistance, or both are deemed more appropriate than in-kind food aid because of market conditions. 2. Title II in-kind food aid cannot arrive in a sufficiently timely manner through the regular ordering process or through the use of prepositioned stocks. 3. Significantly more beneficiaries can be served through the programming of local and regional procurement or cash-based food assistance. The competitive proposal review process outlined in the APS includes at least three documented steps intended to ensure that the proposal is aligned with U.S. foreign assistance objectives and is technically sound: 1. Partners submit a brief concept paper. Partners initiate the review process by submitting a brief concept paper that describes the chosen program approach and the relevant justification.
Generally, an official at USAID headquarters and an FFP officer from the field review the concept paper to determine if the project is aligned with FFP’s objectives and if the needed resources are available. Partners may change the proposed scope, assistance delivery mechanism—cash, vouchers, or commodities—or funding level based on FFP’s feedback. 2. Partners submit a full proposal addressing APS design criteria. If FFP decides to move forward with a proposed concept paper, it will invite partners to submit a proposal that addresses the design criteria outlined in the APS. 3. FFP assembles a technical evaluation committee to review and score the proposal. The committee includes at least two USAID officials, who review and score the content of the proposal to determine whether it sufficiently addresses the design criteria in the APS. The APS describes four categories of project design and evaluation criteria: program justification; program design and description; management and logistics; and past performance. Applicants must justify their chosen delivery mechanism based on one or more of the following criteria: appropriateness, timeliness, or cost-effectiveness. The program design and description criteria cover several project design areas, including three identified as good practices in cash-based programming: (1) assessing beneficiary needs, (2) market analysis and impact, and (3) coordination with other entities. Management and logistics criteria establish the capabilities of the applicant to carry out the proposed program based on staffing, infrastructure, and logistical arrangements. Past performance criteria include accomplishments, quality of performance, and demonstrated expertise in implementing programs similar to the one proposed. After its review, the technical evaluation committee may submit an issues letter to the partner, indicating areas where the proposal must be improved to receive a recommendation for funding. 
The partner then has the opportunity to address these issues and submit additional application information. This iterative process allows for dialog between FFP and the partner about how the project design can be improved. Once the committee is satisfied that the proposal meets the design criteria, it will recommend the proposal for funding. There are exceptions to the competitive proposal review process, according to the APS and related guidance in USAID’s Automated Directives System (ADS). According to the APS, USAID reserves the right to make awards to public international organizations (PIO) under different terms and conditions than those that apply to NGOs, including different documentation requirements prior to an award or included in an award. For example, when Super Typhoon Haiyan struck the Philippines in November 2013, FFP funded a WFP emergency operation using an abbreviated process for noncompetitive PIO grants. FFP officials said that this abbreviated process allows for rapid emergency response. WFP, which is a PIO, receives a significant portion of the EFSP funding for cash and voucher programs—about 80 percent in fiscal year 2014. USAID maintains a list of PIOs and ranks major PIOs like WFP and FAO as Category One partners based on the agency’s experience with these organizations and a determination as to their level of responsibility. USAID officials stated that when the agency has predetermined the general suitability of a PIO, it can then review the PIO’s funding appeal document rather than a concept paper or proposal specifically written in response to the APS. Nevertheless, according to FFP officials, in considering funding appeals from PIOs, they review basic project aspects covered under the APS, such as planned scope, logistics, and targeting of beneficiaries.
To document this process with the PIOs, FFP officials said they generally issue an action memo justifying the decision to review the proposal outside the competitive APS process and documenting its evaluation of the PIO’s proposed project. Additionally, in accordance with the ADS and USAID policy, the Director of FFP may issue a determination of noncompetition for any NGO implementing partner on a case-by-case or disaster-by-disaster basis. This ADS policy, which applies to awards to carry out International Disaster Assistance under the Foreign Assistance Act or emergency assistance under the Food for Peace Act, states that FFP may issue such exceptions when it deems competition impracticable. In our case study selection, for example, FFP issued a determination of noncompetition covering Syria for fiscal year 2014; the determination stated that full competition would not allow FFP to respond in a timely manner and that few partners had the capacity to access certain areas in Syria. According to FFP officials, USAID staff review a partner’s program appeals documentation before deciding to fund a program noncompetitively. According to USAID officials, the exceptions to the competitive proposal review process for EFSP grant awards enable FFP to respond to some crises more quickly, particularly when it is aware of partners that possess sufficient capacity for the project. For example, PIOs like WFP may already have an appeal for funding that satisfies EFSP design criteria. According to FFP officials, they can review a multilateral appeal in as little as a few days, and the APS design criteria continue to serve as guidance when they consider appeals from PIOs.
USAID Followed Its EFSP Competitive Proposal Review Process, or Used an Exception, in All 22 Cases We Reviewed We found that FFP followed its APS process for reviewing and deciding to fund competitive cash-based EFSP project proposals, or used an allowed noncompetitive exception, in all 22 cases that we reviewed. The proposals we selected for review were all new cash-based projects that were awarded and active as of June 1, 2014. The 22 awards totaled $126.3 million; covered 14 countries in Africa, Asia, and the Middle East; and included 11 awards to NGOs and 11 awards to one PIO. Of the 22 proposals we reviewed, we determined that USAID completed each required step in the APS competitive review process for 13. For the remaining 9 proposals, we found that USAID made 7 awards through an abbreviated noncompetitive review and 2 awards under authorities allowing an expedited emergency response. Of the 13 proposals, 9 were subject to the three steps in the APS review process, and 4 were subject to each step except for the submission of a concept paper. USAID issued an amendment to the APS waiving the concept paper requirement for proposals for projects in Yemen during fiscal year 2013. USAID Lacks Formal Internal Guidance for Modifying Awards and Lacks Any Guidance to Partners on Responding to Market Conditions Modifications Can Greatly Increase Funding over Original Award Amounts, but USAID Has No Formal Guidance on Its Process for Modifying Awards According to USAID officials, two main types of modifications may be made to a grant agreement—no-cost modifications and cost modifications. For our four case study countries, we reviewed 13 EFSP grant agreements from January 2012 to June 2014 that had a total of 41 modifications. No-cost modifications. Eleven of the 13 grant agreements had a total of 21 no-cost modifications.
Some no-cost modifications extended the life of a program without additional cost, for instance when a project experienced unavoidable delays in completing cash or voucher distributions. Other no-cost modifications were largely due to administrative changes, such as revising sections of the original agreement or adding changes made to the standard provisions that USAID incorporates into grant agreements. Cost modifications. Eight of the 13 grant agreements had a total of 20 cost modifications that increased funding for the 13 awards from an initial total of about $91 million to about $626 million, approximately a 591 percent increase. Ten of these cost modifications were made to 1 award, the Syria regional award, and totaled about $441 million over the original grant of $8 million (see fig. 8). The Syria regional award modifications amounted to about 82 percent of the total increase in funding for the cost modifications we reviewed. Fourteen of the 20 cost modifications we reviewed were made to awards to PIOs, amounting to about $502 million, or 94 percent of the total increase in funding for the 20 cost modifications we reviewed. Another 6 cost modifications we reviewed were made to awards to NGOs, amounting to about $34 million. Cost modifications in the cases we reviewed were approved mainly for two reasons—to extend the duration of the project and to increase the number of beneficiaries being assisted. Fifteen of the 20 cost modifications extended the project by 2 to 12 months, while at least 6 modifications were due to an increased number of beneficiaries. One cost modification extended the project by 6 months and added about $10 million in unconditional cash transfers to a project that had been implementing distributions of locally and regionally procured food. This award is an example of a modification that changed an existing assistance delivery mechanism or added a new one.
Although the APS for International Emergency Food Assistance outlines the review process for new award proposals, neither the current 2013 APS nor the two previous versions provide clear guidance on the process for submission, review, and approval of modifications to existing awards. According to USAID officials, USAID generally follows the process for new award proposals when reviewing proposals for cost modifications, although there is no formal guidance to inform modification decisions. USAID’s procedures with regard to reviewing award modifications have been largely informal. According to USAID officials, documentation in each modification file may lack uniformity because of the different preferences of agreement officer representatives (AOR) or their teams. For instance, the title of key documents may differ, such as the program justification, which may be referred to as a concept note, proposal, program description, program narrative, or application. AORs may refer to draft procedural guidance on modifications, according to USAID officials, who also said a recently established Grants Management Services Team, housed within FFP’s Policy and Technical Division, is reviewing the draft procedures as part of a broader effort to improve guidelines for AORs. However, USAID could not provide a time frame for completing this process, and it is unclear when this enhanced guidance will be finalized. Standards for Internal Control in the Federal Government calls for policies, procedures, techniques, and mechanisms that enforce management’s directives. Without formal guidance establishing the policies and procedures for modifying awards, USAID staff and its partners lack assurance that they are taking all necessary steps to justify and adequately document the modification of awards.
Changes in Market Conditions May Warrant Adjustments to Cash-Based Food Assistance Projects, but USAID Provides No Guidance on This to Partners USAID requires implementing partners of EFSP projects to collect and report market prices of key commodities during the implementation of the assistance. This is because increases in prices of key commodities may erode beneficiaries’ benefits, whereas decreases in prices of key commodities may justify a reduction in the amount of cash distributed while maintaining the amount of food beneficiaries can purchase. We found significant changes in the price of key staple commodities in selected markets in Niger and Somalia but not in Jordan and Kenya. We also found that USAID’s implementing partners responded differently to changing market conditions, and that USAID had not provided partners with clear guidance on when and how to modify cash-based food assistance projects in response to changing market conditions. In contrast, WFP’s Cash and Vouchers Manual suggests that WFP’s country offices consider setting cutoff limits for maximum acceptable price inflation and have a contingency exit plan to respond to the situation when acceptable price inflation limits are exceeded. During Cash-Based Food Assistance Projects, Prices of Some Key Staple Commodities Changed Significantly in Selected Markets in Two of Our Four Case Study Countries We analyzed data on the prices of key staple commodities in selected markets in our case study countries from fiscal years 2010 through 2014. We found that the prices of key cereal commodities in Niger and Somalia changed significantly without corresponding adjustments to all implementing partners’ cash-based projects. We did not find similar food price changes in Jordan and Kenya. Niger. In Ouallam, we found that the prices of key cereal commodities rose an average of around 25 percent from April to September 2012 (see fig. 9), a large increase compared with historical price trends. 
We estimated that the increases in the prices of these cereals may have increased the cost of the beneficiaries’ food basket by roughly 20 percent. During this period, USAID’s implementing partner distributed 20,000 West African CFA francs per household (around $38) to beneficiaries each month from May through September 2012. The amount distributed to beneficiaries was not changed during the period of significant price increases. USAID’s implementing partner stated that its staff found that regional food stocks remained sufficient in project areas and that beneficiaries were able to purchase the food they needed. In addition, USAID’s implementing partner stated that the government of Niger and humanitarian agencies had previously agreed on the amount to be distributed to beneficiaries, and the partner did not want to unilaterally change that amount. In Birnin Gaoure, we found that the price of millet, Niger’s key staple commodity, increased 20 percent from March to September 2013, a large increase compared with historical price trends. An implementing partner’s Niger officials acknowledged the steep increases in cereal prices in 2013 and attributed them to a bad harvest and increased demand from Nigeria. In May 2013, because of the increase in prices, the implementing partner increased the monthly transfer to beneficiaries from 25,000 West African CFA francs per household (around $50) to 32,500 West African CFA francs per household (around $65). The partner also monitored both the local market price for commodities and the cost of distributing these commodities. The implementing partner then considered switching from distributing cash to in-kind food distributions when the price of commodities that beneficiaries purchased in local markets neared the cost of in-kind distributions.
The implementing partner’s Niger officials reported that they switched from cash to in-kind distributions in certain geographic areas but not in other areas where the prices of staple commodities did not reach the cost of in-kind distributions. Somalia. In Baydhaba, we found that the price of key cereal commodities increased significantly in mid-2011 (see fig. 10). By September 2011, the UN had declared famine in seven regions in Somalia, including Baydhaba. In November 2011, USAID signed a grant agreement with its implementing partner to fund a cash-for-work project. By that time, however, the prices of these key cereal commodities had declined from their peaks earlier in 2011. USAID’s implementing partner provided $72 per household each month during the period when prices of these key cereal commodities declined. The partner then increased the amount to $96 per household each month in June 2013. Our analysis also found that the price of red sorghum and white maize increased significantly in 2014 (also shown in fig. 10). From April to July 2014, the price of red sorghum increased by 77 percent and the price of white maize by 37 percent, large increases compared with historical price trends. However, the implementing partner did not adjust the transfer rate during this period. The implementing partner cautioned that adjusting the transfer rate should be done in the context of the wage rate in the labor market, the general volatility of commodity markets in Somalia, and food being only a part of the beneficiaries’ expenditure basket. The implementing partner stated that starting in late February 2015 it would be testing a new methodology intended to establish the transfer rate at the subregional level based on the cost of commodities, the wage rate in the labor market, the available budget, and an assessment of the implications of adjusting the transfer rate by examining short- and long-term trends.
Once established, the transfer rate would remain for approximately 2 to 3 months, comprising a round of distributions. Jordan and Kenya. In Jordan, we analyzed data from Jordan’s Department of Statistics and found that the price of food in Jordan did not change significantly after an implementing partner started its project in July 2012. In Kenya, the implementing partner established the transfer rate based on the value of the standard food basket; it reviewed prices every month and would change transfer amounts only in response to price fluctuations, in either direction, of more than 10 percent. In Taita Taveta, the site we visited in Kenya, the implementing partner informed us that the transfer value had not been adjusted since June 2013 because retail food prices had not changed more than 10 percent. USAID Does Not Provide Guidance to Partners on When and How to Modify Programs in Response to Changing Market Conditions According to USAID officials, USAID does not have a standard for identifying significant price changes, since the definition of significance is specific to each country and region. In addition, we did not find guidance addressing modifications in response to changing market conditions in the APS for International Emergency Food Assistance. This lack of guidance has resulted in inconsistent responses to changing market conditions among different cash and voucher projects funded by USAID. For example, an implementing partner in Kenya predetermined, as part of its project design, when adjustments to cash transfer amounts would be triggered by food price changes, while an implementing partner whose project we reviewed in Niger relied on an ad hoc response. The implementing partner in Kenya established the cash and voucher transfer rate based on the value of the standard food basket; it reviewed prices every month but would change cash and voucher transfer amounts only in response to price fluctuations, in either direction, of more than 10 percent. 
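The Kenya partner's monthly review described above amounts to a simple threshold rule: re-peg the transfer to the current food-basket cost only when prices have moved more than 10 percent in either direction. A minimal sketch of that rule follows; the function name and the shilling amounts are illustrative assumptions, not taken from any partner's actual system.

```python
# Illustrative sketch of a transfer-adjustment threshold rule (hypothetical
# function and amounts): the transfer tracks the cost of a standard food
# basket, but is changed only when the basket cost has moved more than a
# set fraction, in either direction, since the transfer was last set.

def adjusted_transfer(current_transfer, baseline_basket_cost,
                      latest_basket_cost, threshold=0.10):
    """Return the transfer amount after a monthly price review."""
    change = (latest_basket_cost - baseline_basket_cost) / baseline_basket_cost
    if abs(change) > threshold:
        # Price moved beyond tolerance: re-peg the transfer to the new cost.
        return latest_basket_cost
    # Fluctuation within tolerance: leave the transfer unchanged.
    return current_transfer

# With a baseline basket cost of 2,500 (hypothetical shillings):
print(adjusted_transfer(2500, 2500, 2675))  # 7% rise, within tolerance -> 2500
print(adjusted_transfer(2500, 2500, 2875))  # 15% rise, re-pegged -> 2875
print(adjusted_transfer(2500, 2500, 2200))  # 12% drop, reduced -> 2200
```

The symmetric threshold captures both halves of the rationale in the text: large price increases would otherwise erode beneficiaries' purchasing power, while large decreases would otherwise waste scarce project funding.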
Without clear guidance about when and how implementing partners should modify cash-based food assistance projects in response to changing market conditions, USAID runs the risk of beneficiaries’ benefits being eroded by price increases or of scarce project funding being used inefficiently when prices decrease. USAID’s Partners Had Generally Implemented Financial Controls in Projects We Reviewed; We Found Weaknesses in Risk Planning, Implementation, and Guidance USAID relies on its implementing partners to implement financial oversight of EFSP projects, but it does not require them to conduct comprehensive risk assessments or to plan financial oversight activities—two key components of an internal control framework (see sidebar)—and provides little or no guidance to partners and its own staff on these two components. For case study projects we reviewed in four countries, we found that neither USAID nor its implementing partners conducted comprehensive risk assessments that address financial vulnerabilities that may affect cash-based food assistance projects, such as counterfeiting, diversion, and losses. Lacking comprehensive risk assessments of its projects, USAID staff would be hampered in developing financial oversight plans to help ensure that partners are implementing the appropriate controls, including financial controls over cash and vouchers to mitigate fraud and misuse of EFSP funds. We also found that USAID’s partners had generally implemented financial controls over cash and voucher distributions but that the partners’ financial oversight guidance had weaknesses. Because of the limitations in their guidance, partners may neglect to implement appropriate controls in areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding. In addition, we found that USAID’s guidance to partners on financial control activities is limited. For example, USAID lacks guidance to aid implementing partners in estimating and reporting losses.
With regard to USAID’s oversight of its partners, in the projects we reviewed, we found that USAID staff were challenged by limited resources and access issues in high-risk areas. For example, USAID had two staff members in the field to oversee its Syria regional cash-based projects spread over five countries that had received approximately $450 million in EFSP funding from July 2012 through December 2014. Furthermore, USAID has provided limited guidance to its field staff for overseeing financial controls put in place by implementing partners. Neither USAID nor Its Implementing Partners Conducted Comprehensive Risk Assessments of the EFSP Projects We Reviewed Though USAID Missions Conduct Country Risk Assessments, These Do Not Address Risks Specific to Cash-Based Food Assistance USAID officials said that they conduct a risk assessment for all USAID’s programs within a country and not separate risk assessments for EFSP projects. According to USAID, its country-based risk assessments focus primarily on the risks that U.S. government funds may be used for terrorist activities and on the security threat levels that could affect aid workers and beneficiaries; these risk assessments do not address financial vulnerabilities that may affect cash-based food assistance projects, such as counterfeiting, diversion, and losses. A USAID official provided us with internal EFSP guidance to staff on the grant proposal and award process stating that an award would not be delayed if a risk-based assessment has not been conducted. For countries with ongoing conflicts and civil unrest, such as Somalia, USAID said that it performs a risk-based assessment for all of its program funding. USAID said that most of these assessments are sensitive and primarily focus on security risks.
For example, in Niger, FFP’s 2014 risk analysis of its over $75 million country portfolio, including Title II and EFSP, identified such risk factors as the distance to the project sites and security threats but contained no assessment of likely financial risks. USAID Partners Did Not Conduct Comprehensive Risk Assessments of Their EFSP Projects USAID’s 2013 APS for International Emergency Food Assistance requires EFSP implementing partners to indicate the controls in place to prevent diversion of cash, counterfeiting of food vouchers, and diversion of food voucher reimbursement funds. According to USAID officials, its partners have established records of effective performance in implementing cash and voucher projects, and they understand the context of operating in these high-risk environments. As a result, these officials told us, USAID expects that its partners will conduct comprehensive risk assessments, including financial risk assessments, and develop appropriate risk mitigation measures for their cash-based food assistance projects. However, none of the partners implementing EFSP-funded projects in our four case study countries had conducted a comprehensive risk assessment based on their guidance or widely accepted standards during the period covered by our review. USAID does not require its implementing partners to develop and submit comprehensive risk assessments with mitigation plans as part of the initial grant proposal and award process or as periodic updates, including when grants are modified. USAID officials stated that most EFSP grant proposals and agreements do not contain risk assessments and mitigation plans. In addition, the implementing partners we reviewed had not consistently prioritized the identification of financial risks or the development of mitigation plans addressing vulnerabilities such as counterfeiting, diversion, and losses. Jordan and Kenya.
In fiscal year 2012, USAID’s implementing partner for EFSP cash and voucher projects that we reviewed in Jordan and Kenya documented risk assessments and mitigation plans, but these were not comprehensive. While the 2012 risk assessment and mitigation plan for Jordan identified financial risks such as counterfeiting and diversion of vouchers and corruption, it did not address the likelihood, impact, and seriousness of these risks as required by its guidance. For example, in discussing the risk of counterfeiting of vouchers in Jordan, the implementing partner stated in its 2012 document that “Experience shows that shops are gaining significant business from the voucher program and hence not willing to jeopardize that by breaking the rules.” However, during our fieldwork in Jordan in July 2014, we learned that the implementing partner had discovered that several retail shops had been involved in fraudulent activities, and after receiving the results of an independent review, the partner decided not to renew its contracts with those shops. According to the implementing partner, monitoring of retail shops is now systematically in place to ensure that retailers respect the rules in the contract with the partner. As of December 2014, the risk registers maintained by the implementing partner for EFSP projects in Jordan and Kenya addressed key elements of the risk management process including identifying risk categories, likelihood of occurrence, impact and seriousness, mitigation actions currently in place, and mitigation actions needed. These registers described a number of potential risks, including a challenging funding environment, political interference in the project, increased levels of insecurity affecting the partner’s ability to reach the most vulnerable, and lack of specific staff expertise. 
However, as of December 2014, the risk registers for Jordan and Kenya did not identify and address financial risks, such as counterfeiting of vouchers and diversion and losses of cash and vouchers, which is contrary to the implementing partner’s financial directive for the use of cash and vouchers. The financial directive states that assessments should identify risks related to implementing food assistance programs from a multifunctional perspective, involving offices such as finance, program, logistics, and security, and that identified macro risks, such as financial, economic, political, and environmental risks, along with mitigating actions, shall be included in the country office operational risk analysis. Somalia. The USAID implementing partner for the cash-for-work project we reviewed in Somalia conducted a risk assessment and, in November 2011, developed a risk register that includes a number of potential risks, including financial risks such as collusion and diversion of funds by the money vendor and subawardees. However, it did not prepare a comprehensive risk register with clear mitigation plans in accordance with its guidance and the international standards that it had adopted. This partner’s risk management guidance states that risk assessment should provide staff and supervising management with a shared, comprehensive view of the potential risks to the achievement of objectives, together with a prioritized mitigation action plan, among other things. However, this partner’s risk register for its cash-for-work project in Somalia did not identify, for example, counterfeiting of vouchers as a potential risk. According to the partner, it has developed controls such as a management tool that generates beneficiary vouchers with unique serial codes, as well as beneficiary registration with biometrics, such as fingerprinting, to address issues identified in handling vouchers.
In addition, the November 2011 cash-for-work risk register, which we received from this partner in November 2014, had not been updated since May 2012. In January 2013, this partner developed an operational risk management framework for south and central Somalia. While the framework included a number of planned measures to help ensure greater control, it has not been updated to conform to the partner’s December 2013 guidance on risk management and to reflect a key change in its payment process for cash distribution. We determined that the implementing partner in Somalia periodically prepared a summary of its risk mitigation measures for its donors. The most recent summary we reviewed, dated November 2014, stated that the partner was regularly reviewing its activities in Somalia through a set of risk mitigation measures and was not starting any new activity without first reviewing risks and mitigation measures. However, the summary discussed the mitigation measures as stand-alone activities and did not link them to any identified risks. Moreover, neither the summary nor the risk register prioritized the mitigation measures as called for in the partner’s guidance, and the lack of detail left us uncertain as to the extent to which certain mitigation measures, such as the use of remote sensing and biometrics technology, were being implemented on the basis of risk assessment. Niger. In Niger, we found that the implementing partner of the cash transfer project we reviewed had guidance for staff on the risk management process and on developing a risk register; however, this partner had not conducted a comprehensive risk assessment that took into account the likelihood, impact, or severity of fraud, diversion, losses, or theft of cash for the cash transfer project it implemented in 2012. This implementing partner prepared a security and safety management plan, which focused on the risks to staff implementing its projects as well as to beneficiaries and project visitors in Niger.
During the implementation of the 6-month project we reviewed, this implementing partner informed us that one of its employees had stolen $330 in cash intended for project beneficiaries. The partner reported that the cash had been repaid, the employee had been fired, and the donor had been notified. According to the implementing partner, the donor confirmed its satisfaction with the partner's handling of the incident. This partner said that, as a result of this incident, it had strengthened several of the controls that were in place, including a complaint mechanism, and had conducted sensitization training for beneficiaries, communities, and staff to help ensure that such incidents did not recur. However, the partner did not subsequently conduct a risk assessment and develop a risk register before it began implementing its 2014 EFSP cash and voucher projects in Niger. Figure 11 shows photographs from our site visits to cash transfer and cash-for-work projects in Niger.

USAID Partners Generally Implemented Financial Controls for Projects We Reviewed, but Their Implementation and Related Guidance Had Certain Weaknesses

Partners Generally Implemented Controls over Cash and Paper Voucher Distributions, Though with Certain Weaknesses

We reviewed selected distribution documents for three implementing partners with projects that began around 2012 in our four case study countries (Jordan, Kenya, Niger, and Somalia). We examined the documents for conformance with the partners' financial oversight procedures, USAID's guidance, and relevant provisions of the grant agreements. Our review found that the three implementing partners had generally implemented financial controls over their cash and voucher distribution processes.
For example, in Niger, we verified that there were completed and signed beneficiary payment distribution lists with thumb prints; field cash payment reconciliation reports that were signed by the partner, the financial service provider, and the village chief; and payment reconciliation reports prepared, signed, and stamped by the financial service provider. Additionally, we determined that these three implementing partners generally had proper segregation of financial activities between their finance and program teams. Nonetheless, in Kenya, our review showed that in some instances, significant events affecting the cash distribution process were not explained in the supporting documentation. For example, we found that an implementing partner's total number of beneficiary households for a cash distribution project differed from the total number recorded by its subawardee. No explanation for this was documented in reports provided by the implementing partner, nor was an explanation entered into the partner's data system. Although the explanation of how this discrepancy was resolved, according to the implementing partner, demonstrated that its controls worked effectively, we were unable to verify this information because the data records and reports we reviewed contained no explanatory notes. Additionally, to determine whether the three implementing partners were in conformance with USAID's 2011 and 2013 APS and with grant award requirements, we reviewed key financial information in project reports from the partners, including their Federal Financial Reports, quarterly performance reports, and final program reports. We found that in most instances the implementing partners had submitted these reports within the required time frames and that these reports contained the key reporting elements required by the grant award.
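The Kenya discrepancy described above illustrates a basic reconciliation control: beneficiary counts recorded by the partner and by its subawardee should agree, and any difference should be flagged and explained in the supporting documentation. A hypothetical sketch of such a check (the record layout and figures are our own illustration, not any partner's actual data system):

```python
# Hypothetical illustration: compare beneficiary household counts
# recorded by an implementing partner and its subawardee per
# distribution period, and flag any period whose totals disagree so
# the difference must be documented and explained.
partner_counts = {"2013-01": 4200, "2013-02": 4350, "2013-03": 4350}
subawardee_counts = {"2013-01": 4200, "2013-02": 4310, "2013-03": 4350}

def reconcile(partner, subawardee):
    discrepancies = {}
    for period in sorted(set(partner) | set(subawardee)):
        p = partner.get(period, 0)
        s = subawardee.get(period, 0)
        if p != s:
            # Positive: partner recorded more households than subawardee.
            discrepancies[period] = p - s
    return discrepancies

print(reconcile(partner_counts, subawardee_counts))  # -> {'2013-02': 40}
```

The control only works if each flagged difference is accompanied by an explanatory note; as the report observes, the records reviewed in Kenya contained no such notes.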
However, in some instances, we were unable to determine whether the quarterly reports were submitted on time because USAID was unable to provide us with the dates it received these reports from the implementing partners. According to USAID officials, USAID does not have a uniform system for recording the date of receipt for quarterly progress reports and relies on FFP officers to provide this information; however, individual FFP officers have different methods for keeping track of the reports and the dates on which they were received. These varied record-keeping methods made it difficult for USAID to provide the information we needed to determine whether the implementing partners had submitted the project reports on time. Though all partners generally implemented financial controls over the cash and paper voucher distributions that we reviewed, we found that the partner in Somalia faced several challenges in implementing the project as a whole. The implementing partner put several mitigation actions in place to improve financial oversight, but we found that weaknesses in controls still existed. In October 2011, before USAID provided its initial EFSP funds, the implementing partner received allegations of fraud, which resulted in its Office of Inspector General (OIG) initiating several investigations.
During the implementation of the project, the OIG conducted multiple fraud investigations involving subawardees' and money vendors' non- or under-implementation of cash-for-work activities and money vendors' violations of contractual terms of disbursement, including claiming fees for services not rendered. Letters of agreement with the partner's subawardees and money vendors stipulated that these entities should avoid corrupt and fraudulent activities. According to the implementing partner, its OIG investigated the allegations and concluded that the financial loss to USAID was approximately $237,000; according to the partner, it recovered about $188,000, resulting in a net financial loss of about $49,000 to USAID. However, in one case, the OIG concluded that because of security restrictions, it was unable to confirm the financial loss. In addition, two new investigations that began in 2014 involving one of the money vendors were still ongoing as of December 2014, making it premature to conclude whether there was any financial loss and whether USAID funds were affected. The implementing partner in Somalia suspended the cash-for-work project in all locations from May to October 2012 because of the fraud allegations and investigative findings. During the suspension of the project, the partner said that it expanded and added other mitigation measures to help ensure greater controls, such as expanding a call center in Nairobi to verify payments to beneficiaries and conduct post-distribution surveys. However, many beneficiaries in south central Somalia, a high-risk area, had unreliable phone connectivity or did not own phones. This implementing partner also said that it provided systematic training to its subawardees, including training on fraud prevention measures and post-distribution assessment.
Prior to December 2014, post-distribution assessments were being done by the same subawardees who were overseeing the cash-for-work project, which is not considered good practice according to internal control standards. For the project we reviewed, the implementing partner reported that it would prefer to use independent third parties to conduct the post-distribution assessment where security and access made that possible but cited accessibility limitations and cost as reasons it had not done so. According to this implementing partner, as of January 2015, a third-party monitoring entity had begun a post-distribution assessment for the current phase of the project in south central Somalia, and the partner's field monitors were conducting the post-distribution assessment in northern Somalia.

Implementing Partners' Financial Oversight Guidance Had Weaknesses That Could Hinder Effective Implementation of Controls

Implementing partners in the case study countries we reviewed had developed some financial oversight guidance for their cash and voucher projects, but we found gaps in the guidance that could hinder effective implementation of financial control activities. One implementing partner developed a financial procedures directive in 2013 that requires, among other things, risk assessments, reconciliations, and disbursement controls. For example, the directive required the country office's finance officer to reconcile bank accounts used for cash and voucher transfers on a monthly basis. It also required that undistributed cash and unredeemed vouchers be reconciled, receipted, and recorded before financial closure of a project. The directive specified disbursement controls, such as requirements for a certified and approved distribution plan and two authorized signatures before payments are released. The directive also instructed country offices to assess the financial strength of the financial service provider.
However, the directive lacked guidance on how to estimate and report losses. The implementing partner told us it was in the process of developing guidance for cash and voucher losses, which it planned to complete by December 2015. A second implementing partner, which had implemented cash-based projects for USAID since November 2011 and for other donors since about 2007, had also developed policies and guidance for some key financial control procedures. For example, it developed a policy in November 2012 and guidelines in April 2013 for cash-based food assistance projects, as well as guidance on fraud control, sanctions procedures, and due diligence procedures. The guidelines included requirements for the implementing partner's service providers. For example, the service providers were required to maintain and document financial records and certification of proper use of funds. However, other guidance was lacking, including guidance on estimating and reporting losses. Furthermore, in October 2014, the implementing partner's external auditor recommended that the implementing partner formalize its policy framework on internal control and design a mechanism to monitor, assess, and report on the overall effectiveness of the internal control system to reinforce accountability and transparency. The external auditor considers such actions fundamental, meaning that they are imperative to ensure that the implementing partner is not exposed to high risks. The auditor also noted that in May 2014, it had found significant progress in the implementation of the partner's enterprise risk management, including formal adoption of a corporate risk policy and the integration of risk management into the activities of the field offices. A third implementing partner had been implementing cash and voucher projects for USAID since the EFSP program began in 2010.
It developed field financial guidance in 2013 that provides standardized policies and procedures for financial management and accounting in the partner's field offices. However, the implementing partner acknowledged that the field manual does not address financial procedures specifically for voucher projects. According to this implementing partner's staff, the country teams had each designed financial procedures for vouchers with input from headquarters. In October 2014, this implementing partner, in conjunction with a global financial services corporation, developed an E-Transfer Implementation Guide that covered various processes, tools, and checklists for assessing the capacity of e-transfer service providers and procuring e-voucher systems. Further, this implementing partner is in the process of developing an enterprise risk management framework. When implementing partners for EFSP projects have gaps in financial guidance and limitations in their oversight of cash-based food assistance projects, the partners may not put in place appropriate controls for the areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding.

Limited Staff Resources, Security Constraints, and Lack of Guidance Hindered USAID's Financial Oversight of EFSP Projects

According to USAID officials, Washington-based country backstop officers (CBO) perform desk reviews of implementing partners' financial reports and quarterly and final program reports and share this information with FFP officers in the field; in addition, both the Washington-based CBOs and FFP officers in-country conduct field visits.
However, we found that the ability of the CBOs and FFP officers to consistently perform financial oversight in the field may be constrained by limited staff resources, security-related travel restrictions and requirements, and a lack of specific guidance on conducting oversight of cash transfer and food voucher programs. Field visits are an important element of oversight and a key control to help ensure that management's objectives are carried out. They allow CBOs and FFP officers to physically verify the project's implementation, observe cash disbursements, and conduct meetings with the beneficiaries of the grant and the staff of the organizations implementing the grant to determine whether the project is being implemented in accordance with the grant award. According to the CBOs and FFP officers, the frequency of field visits for financial oversight depends on staff availability and security access. According to a USAID headquarters official, during the period covered by our review, a second FFP officer based in Turkey was responsible for oversight of EFSP Syria regional award projects in that country and Iraq; this officer had conducted field visits twice during the past 2 years, once in 2013 and once in 2014. All four FFP officers responsible for EFSP grants in our four case study countries emphasized that they would have liked additional staff because of the importance of conducting site visits to observe operations for which they had oversight responsibility. The often volatile security situations in our four case study countries and surrounding areas also limited USAID's ability to perform financial oversight in the field. FFP officers we spoke with noted that U.S. government security personnel had restricted travel in certain areas of their host countries for prolonged periods. For example, because of the 2013 elections in Kenya, travel was restricted for security reasons during several months of the pre- and post-election season.
Additionally, following the 2013 Westgate shopping mall attack in Nairobi and other terrorist incidents around the country, the coastal areas of Kenya were declared off limits for routine field visits. As a result, at the time of our visit in the fall of 2014, the FFP officer had not visited the cash-for-work program in the coastal areas of Kenya for over a year. In Somalia, travel to and within the country was highly restricted. For the project in Somalia that we reviewed, USAID relied on the implementing partner to perform oversight in-country. The implementing partner's access to certain areas of the country was sometimes restricted because of security concerns, and the degree of accessibility varied over time. For example, in September 2014, about 40 percent of the households in Somalia were inaccessible to the implementing partner, whereas in the first 2 weeks of February 2015, 27 percent of the households were inaccessible. Additionally, the implementing partner did not have an independent third-party entity to perform oversight in restricted areas of the country. However, according to the implementing partner, it uses satellite imagery and aerial photographs to verify rehabilitation work under its cash-for-work activities. Because of security concerns, FFP staff had not been able to conduct any site visits in person in Somalia as of October 2014, the time of our field visit. In addition to access restrictions, routine security requirements such as armed escorts sometimes made it difficult for USAID staff to coordinate field visits. For example, in Niger, FFP program staff could not visit project sites without armed escorts, which can comprise up to 18 soldiers. The armed escorts were required because of security concerns arising from conflicts in neighboring countries. The coordination and cost of the armed escorts, which could reach several hundred dollars per visit, made it difficult for staff to visit project sites.
Faced with travel restrictions and security requirements, several FFP officers we visited reported that they were overseeing cash and voucher projects in inaccessible areas indirectly, through communication with implementing partners and, in some instances, by hiring independent third-party oversight entities to make site visits. Because of staff limitations and security-imposed constraints, FFP officers primarily rely on implementing partners' reports from the field and regular meetings with them, either in person or by teleconference, to determine whether a project is being executed as intended. However, USAID's guidance to its FFP officers and its implementing partners on financial oversight and reporting is limited. For example, FFP staff in Niger stated that they have had insufficient guidance and training on financial oversight of cash-based food assistance projects. Furthermore, the FFP officers told us that USAID is not prescriptive in the financial oversight procedures it expects from its implementing partners. Additionally, they noted that USAID has not set a quantitative target for site visits by FFP officers. FFP officers in our four case study countries told us that they use a risk-based approach to select which sites to visit. For example, in Kenya, the FFP officer chooses project sites based on acute need, investment, and risk. In Niger, the FFP officers have a plan based on a risk assessment to visit projects periodically. Moving forward, USAID is developing or putting into place several processes to help its staff conduct financial oversight of EFSP programs. Examples include developing policy and guidance for financial oversight and reporting and hiring additional staff in Kenya to focus specifically on overseeing the Kenya cash and food distribution program. By increasing in-house staff capacity, FFP hopes to increase the number of field visits. USAID plans to have these processes in place by the end of 2015.
Also, realizing the importance of performing oversight in high-risk areas, USAID recently awarded a contract to a third party for Somalia that will provide independent information about the implementation of projects funded by FFP and the Office of Foreign Disaster Assistance, including periodic physical verification of project activities. Although USAID has taken steps to address concerns over its financial oversight of cash and voucher programs, limited staff resources, security restrictions, and lack of guidance hamper USAID's ability to identify problems with cash and voucher distributions. Without systematic financial oversight of the distribution of cash and voucher activities in the field, USAID is hampered in providing reasonable assurance that its EFSP funds are being used for their intended purposes.

Conclusions

Cash-based food assistance, including the flexible options of delivering benefits through cash transfers or vouchers, is an important addition to USAID's tools for addressing emerging food shortages. Cash and vouchers can be distributed more quickly than food and provide recipients with significant dietary diversity, among other advantages. Like its implementing partners and other major food assistance donors, USAID has significantly expanded its use of cash-based food assistance over the past 5 years. However, cash and voucher options present a new set of policy and financial control challenges that USAID needs to recognize and address. Cash-based food assistance may be the most appropriate option in certain situations, but this option requires additional layers of analysis in terms of decision making, starting with the decision as to which mechanism for delivering the assistance is best for a given situation. Cash-based assistance requires the availability of timely and reliable market information to know when and how modifications to the project may be warranted in response to changing market conditions.
In addition, cash and vouchers present a very different set of financial control challenges from overseeing the procurement and distribution of a physical commodity. The success of USAID's use of cash-based interventions depends on having appropriate mechanisms in place to ensure that grant proposals are sound, that the partners implementing projects have a good understanding of the underlying market conditions and the flexibility to make adjustments when prices change dramatically, and that proper financial controls and oversight are in place. We found that USAID followed its grant approval process. However, a significant number of the grants we reviewed were modified after their initial approval, often with large increases in the resources committed. Yet USAID does not have formal guidance for reviewing and approving these modifications and thus does not know whether USAID staff are taking all necessary steps to justify the modification of awards. Efforts to review procedures and improve formal guidance for modifying awards have lagged behind the actual implementation of projects, and USAID has no time frame for completing those efforts or finalizing enhanced guidance. Moreover, USAID lacks formal guidance clearly delineating when and how implementing partners are to modify cash-based food assistance projects in response to changing market conditions, and thus it runs the risk that beneficiaries' benefits may be eroded by significant price increases or that implementing partners may use scarce project funding inefficiently if prices decrease. While USAID relies on its implementing partners to oversee and ensure the financial integrity of cash-based assistance, the agency does not provide its partners with essential operational policy guidance on how they should conduct financial oversight, nor does it have the resources to monitor the implementing partners' efforts.
As we noted in this report, several instances of malfeasance have already surfaced in this program. It is essential that USAID learn from these circumstances and implement the necessary changes, including ensuring that comprehensive risk assessments are conducted and that implementing partners are given sufficient guidance and oversight.

Recommendations for Executive Action

To strengthen its management of cash-based food assistance projects and help ensure improved oversight of these projects, we recommend that the USAID Administrator take the following five actions:

1. Expedite USAID's efforts to establish formal guidance for staff reviewing modifications of cash-based food assistance grant awards.
2. Develop formal guidance to implementing partners for modifying cash-based food assistance projects in response to changes in market conditions.
3. Require implementing partners of cash-based food assistance projects to conduct comprehensive risk assessments and submit the results to USAID along with mitigation plans that address financial vulnerabilities such as counterfeiting, diversion, and losses.
4. Develop policy and comprehensive guidance for USAID staff and implementing partners for financial oversight of cash-based food assistance projects.
5. Require USAID staff to conduct systematic financial oversight of USAID's cash-based food assistance projects in the field.

Agency Comments

We provided a draft of this product to USAID for comment. USAID provided written comments on the draft, which are reprinted in appendix II. USAID also provided technical comments, which we incorporated throughout our report, as appropriate. In its written comments, USAID concurred with our recommendations. USAID agreed that it should formalize guidance for staff reviewing modifications of cash-based food assistance grant awards and stated that it is taking steps to do so.
USAID also agreed to develop formal guidance to implementing partners on appropriate adjustments to adapt programming of cash-based food assistance projects in response to changing market conditions. In response to our recommendation to require implementing partners to conduct comprehensive risk assessments and submit the results to USAID with mitigation plans, USAID stated that while it expected applicants to address risk and risk mitigation within its application, it will formalize this requirement. With regard to our recommendation to develop policy and comprehensive guidance for USAID staff and implementing partners for financial oversight of cash-based food assistance, USAID stated that it will work with its implementing partners to improve financial oversight of cash-based food assistance projects, both through engagement on implementing partners' policy, legal frameworks, and guidelines and through the development of guidance for USAID staff and implementing partners. Furthermore, to improve FFP officers' capacity to oversee cash-based food assistance projects, USAID stated that it is developing training materials and will continue to explore the use of third-party monitoring contracts where security and access prevent in-person monitoring. We are sending copies of this report to appropriate congressional committees, the Administrator of USAID, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) review the U.S.
Agency for International Development’s (USAID) processes for awarding and modifying cash- based food assistance projects and (2) assess the extent to which USAID and its implementing partners have implemented financial controls to help ensure appropriate oversight of such projects. To address both objectives, we analyzed Emergency Food Security Program (EFSP) data provided by USAID and its implementing partners, which include public international organizations (PIO) such as the World Food Programme (WFP), the United Nations (UN) Food and Agriculture Organization (FAO), as well as selected nongovernmental organizations (NGO). In Washington, D.C., we interviewed officials from the Department of State (State), the U.S. Department of Agriculture (USDA), and USAID. We also met with officials representing NGOs that were awarded USAID EFSP grants to serve as implementing partners in carrying out U.S. food assistance programs overseas or were the subawardees for USAID grants awarded to the PIOs. In Rome, we met with officials from the U.S. Mission to the United Nations, FAO, and WFP. We also met with the UN permanent representatives for three major donors—Canada, the European Union, and the United Kingdom. We also selected four case study countries that receive EFSP grants for review—Jordan for the Syria region, Kenya, Niger, and Somalia—and conducted fieldwork in three of these countries (Jordan, Kenya, and Niger) where we met with officials from the U.S. missions, implementing partners, vendors, financial institutions, and beneficiaries, among others. We selected these four countries on the basis of several factors including the level of USAID EFSP funding, the types of modalities and mechanisms used to transfer the assistance, implementing partners, security concerns and risks, and logistics and budget constraints. We cannot generalize our findings from these four countries to the other countries where USAID has funded cash-based food assistance projects. 
We interviewed staff from USAID and its implementing partners in Niger and Jordan, as well as staff based in Nairobi who had responsibility for oversight of the EFSP-funded operations in Kenya and Somalia. To provide context and background, we analyzed data from USAID and WFP to identify trends in U.S. funding for cash-based food assistance. These data include approved EFSP awards from USAID and cash and voucher amounts from WFP, by year. In addition, we reviewed studies, evaluations, and other documents on cash-based food assistance—its benefits and challenges—as well as various tools that USAID and its implementing partners use to facilitate their determination of the appropriate assistance delivery mechanism to address a given food insecurity situation. To address our first objective regarding USAID's processes for awarding and modifying cash-based food assistance projects, we reviewed USAID's Annual Program Statements (APS), grant proposals, and agreements; grant modifications; various directives and guidance, including guidance on concept notes; evaluation committee reviews; and scoring of proposals. Specifically, to determine whether USAID followed the process established in its guidance for reviewing and deciding to fund project proposals, we reviewed all 22 cash-based projects that were newly awarded and active as of June 1, 2014, making them subject to the requirements in the latest APS, issued in May 2013. These awards covered 14 countries in Africa, Asia, and the Middle East; 11 of them went to nongovernmental organizations, and 11 went to public international organizations. For these 22 awards, we reviewed FFP's files to determine whether it had documented the required program decision steps outlined in the APS for competitive proposals: the partner's concept paper, the partner's full proposal, and the evaluation committee's review.
For grants awarded under an expedited noncompetitive review process, we reviewed the Office of Food for Peace's (FFP) files to determine whether they contained the appropriate action memo or memo of exception to competition. During our analysis, we found two instances in which there were no action memos or memos of exception to competition but the awards were justified under authorities in USAID's main organization-wide guidance for expediting awards during emergencies, and we report these as a separate category. In addition, to determine the types of award modifications and the reasons for these modifications, we reviewed 21 EFSP grant awards that were awarded and active from January 2012 to June 2014 for the four case study countries. This selection is not generalizable to the universe of all EFSP awards. Because a significant portion of FFP resources is approved through cost modifications, we further reviewed these modifications, including numerous modifications for the Syria regional award. To assess the reliability of the cost modification data, we reviewed and analyzed funding data on USAID's modification assistance awards and found the data to be sufficiently reliable for our purposes. We also determined from a list prepared by FFP's independent management contractor the types of documents (such as the program justification, action memo, and technical evaluation report) that FFP submitted to the contractor for the cost modifications we reviewed. We also examined a version of draft guidance that USAID said it is currently reviewing as part of an effort to improve, consolidate, and streamline procedures for processing cost modifications, among other things. Furthermore, to identify periods of changing market conditions, we analyzed data on the prices of key staple commodities in five selected markets in our case study countries from fiscal years 2010 through 2014.
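One simplified way to flag significant changes in a staple commodity's price series is to compare each observation with a trailing average. The sketch below is a stand-in for illustration only (the 25 percent threshold and 12-month window are our own assumptions), not the Alert for Price Spikes methodology adapted for this review:

```python
# Simplified stand-in for price-spike detection: flag any observation
# that exceeds the trailing 12-period mean by more than a set fraction.
# The 25% threshold and 12-period window are illustrative assumptions,
# not parameters of WFP's actual Alert for Price Spikes methodology.
def flag_spikes(prices, window=12, threshold=0.25):
    flagged = []
    for i in range(window, len(prices)):
        baseline = sum(prices[i - window:i]) / window
        if prices[i] > baseline * (1 + threshold):
            flagged.append(i)  # index of the spiking observation
    return flagged

# Example: a stable monthly series with one sharp jump at index 14.
series = [100] * 14 + [140, 105, 102]
print(flag_spikes(series))  # -> [14]
```

A flagged period would then prompt the kind of follow-up described in the methodology: reviewing project documents and interviewing the implementing partner about its response, and, where data allowed, recalculating the cost of the beneficiaries' food basket.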
These markets were selected as illustrative examples of changes in the prices of key staple commodities, the effect on beneficiaries near those markets, and USAID's implementing partners' responses, if any. These markets are not meant to be representative of all other markets in our case study countries or of all other markets near areas served by USAID's implementing partners' projects. We used price data from WFP's Vulnerability Analysis and Mapping division, FAO's Food Security and Nutrition Analysis Unit, and Jordan's Department of Statistics. To assess the reliability of these data, we compared them with data provided by USAID's implementing partners, if available, and found the data to be sufficiently reliable for our purposes. We adapted WFP's Alert for Price Spikes methodology to identify significant changes in the prices of key staple commodities. We reviewed USAID's and its implementing partners' project documents and interviewed implementing partner officials to assess how USAID's implementing partners responded to any significant changes in the prices of key staple commodities. To the extent information was available, we calculated the effect of significant changes in the prices of key staple commodities on the cost of the beneficiaries' food baskets.

To address our second objective regarding the extent to which USAID and its implementing partners have implemented financial controls to help ensure appropriate oversight of cash-based food assistance projects, we obtained the grant funding data from the grant award agreements, compared them with the funding data provided by USAID, and determined that the data were sufficiently reliable for our purposes. To select the projects in our case study countries, we used a range of criteria, including the grant award date, type of delivery mechanism, and funding level (see table 1).
For each project, we selected at least one distribution date that fell within the period between when the grant was awarded and when the project was completed. For example, for Jordan, the grant was awarded in July 2012 and ended in December 2014. We selected two paper voucher distributions, one that occurred in January 2013 for a governorate and one that occurred in April 2014 for a refugee camp. The areas selected for our review, like those for our case study countries, are not generalizable to all areas in the selected countries or to the broader universe of the implementing partners’ operations. For each selected project, we then assessed distribution documentation and key grant reports (quarterly and final financial reports, quarterly performance reports, and final program reports) against the requirements listed in the relevant grant agreements; USAID Annual Program Statements for 2011 and 2013; and the implementing partners’ financial policies, procedures, and guidance in place at the time of the distributions. For example, in Kenya, we reviewed planned and actual beneficiary payment distribution lists as well as reconciliation reports prepared by the implementing partner and its financial service providers to determine whether there were proper authorizations and segregation of duties. Additionally, we assessed whether the key reports USAID requires were completed and submitted to USAID on a timely basis. We assessed USAID’s and its implementing partners’ financial controls against GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1; Washington, D.C.: November 1999) and the internal control framework of the Committee of Sponsoring Organizations of the Treadway Commission (COSO), which is used by organizations around the world. The COSO framework is recognized as a leading framework for designing, implementing, and conducting internal control and assessing the effectiveness of internal control. COSO updated its framework in May 2013 to enhance and clarify the framework’s use and application.
These internal control standards and frameworks describe the five components of internal control: control environment, risk assessment, control activities, information and communication, and monitoring. To address our objective, we focused on the control activities and risk assessment components in order to assess the entities’ financial oversight of cash-based food assistance projects. We did not assess the processes and procedures against the other internal control components. To determine the extent to which USAID and its implementing partners conducted comprehensive risk assessments, we reviewed their risk registers, if available, and other documents against their guidance and other standards, such as the international risk management standards published by the International Organization for Standardization (ISO), a worldwide federation of national standards bodies (ISO member bodies). This standard, ISO 31000: Risk Management—Principles and Guidelines, provides principles and generic guidelines on risk management. We conducted this performance audit from February 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the U.S. Agency for International Development

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Rathi Bose, Carol Bray, Ming Chen, Tina Cheng, Beryl H.
Davis, David Dayton, Martin De Alteriis, Mark Dowling, Etana Finkler, Fang He, Teresa Abruzzo Heger, Joy Labez, Dainia Lawes, Kimberly McGatlin, Diane Morris, Valerie Nowak, Barbara Shields, and Daniel Will made key contributions to this report.

Related GAO Products

International Food Aid: Better Agency Collaboration Needed to Assess and Improve Emergency Food Aid Procurement System. GAO-14-22. Washington, D.C.: March 26, 2014.

International Food Aid: Prepositioning Speeds Delivery of Emergency Aid, but Additional Monitoring of Time Frames and Costs Is Needed. GAO-14-277. Washington, D.C.: March 5, 2014.

Global Food Security: USAID Is Improving Coordination but Needs to Require Systematic Assessments of Country-Level Risks. GAO-13-809. Washington, D.C.: September 17, 2013. E-supplement: GAO-13-815SP.

International Food Assistance: Improved Targeting Would Help Enable USAID to Reach Vulnerable Groups. GAO-12-862. Washington, D.C.: September 24, 2012.

World Food Program: Stronger Controls Needed in High-Risk Areas. GAO-12-790. Washington, D.C.: September 13, 2012.

International Food Assistance: Funding Development Projects through the Purchase, Shipment, and Sale of U.S. Commodities Is Inefficient and Can Cause Adverse Market Impacts. GAO-11-636. Washington, D.C.: June 23, 2011.

International School Feeding: USDA’s Oversight of the McGovern-Dole Food for Education Program Needs Improvement. GAO-11-544. Washington, D.C.: May 19, 2011.

International Food Assistance: Better Nutrition and Quality Control Can Further Improve U.S. Food Aid. GAO-11-491. Washington, D.C.: May 12, 2011.

International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009.

International Food Assistance: USAID Is Taking Actions to Improve Monitoring and Evaluation of Nonemergency Food Aid, but Weaknesses in Planning Could Impede Efforts. GAO-09-980. Washington, D.C.: September 28, 2009.
International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009.

International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008.

Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 13, 2007.
For over 60 years, the United States has provided assistance to food-insecure countries primarily in the form of food commodities procured in the United States and transported overseas. In recent years, the United States has joined other major donors in increasingly providing food assistance in the form of cash or vouchers. In fiscal year 2014, U.S.-funded cash and voucher projects in 28 countries totaled about $410 million, the majority of which was for the Syria crisis, making the United States the largest single donor of cash-based food assistance. GAO was asked to review USAID's use of cash-based food assistance. In this report, GAO (1) reviews USAID's processes for awarding and modifying cash-based food assistance projects and (2) assesses the extent to which USAID and its implementing partners have implemented financial controls to help ensure appropriate oversight of such projects. GAO analyzed program data and documents for selected projects in Jordan, Kenya, Niger, and Somalia; interviewed relevant officials; and conducted fieldwork in Jordan, Kenya, and Niger. The U.S. Agency for International Development (USAID) awards new cash-based food assistance grants under its Emergency Food Security Program (EFSP) through a competitive proposal review or an expedited noncompetitive process; however, USAID lacks formal internal guidance for modifying awards. In its review of 22 grant awards, GAO found that USAID made 13 through its competitive process, 7 through an abbreviated noncompetitive review, and 2 under authorities allowing an expedited emergency response. According to USAID, the agency follows a similar process for cost modification requests. Partners may propose cost or no-cost modifications for a variety of reasons, such as an increase in the number of beneficiaries or changing market conditions affecting food prices. 
In a review of 13 grant awards that had been modified, GAO found that 8 had cost modifications resulting in funding for all 13 awards increasing from about $91 million to $626 million. According to USAID, draft procedures for modifying awards are under review and will be incorporated into its guidance, but it could not provide a time frame. Until USAID institutes formal guidance, it cannot hold its staff and implementing partners accountable for taking all necessary steps to justify and document the modification of awards. GAO also found that though USAID requires partners to monitor market conditions—a key factor that may trigger an award modification—it does not provide guidance on when and how to respond to changing market conditions. USAID relies on implementing partners for financial oversight of EFSP projects but does not require them to conduct comprehensive risk assessments to plan financial oversight activities, and it provides little related procedural guidance to partners and its own staff. For projects in four case study countries, GAO found that neither USAID nor its implementing partners conducted comprehensive risk assessments to identify and mitigate financial vulnerabilities. Additionally, although USAID's partners had generally implemented financial controls over cash and voucher distributions that GAO reviewed, some partners' guidance for financial oversight had weaknesses, such as a lack of information on how to estimate and report losses. In addition, GAO found that USAID had limited guidance on financial control activities and provided no information to aid partners in estimating and reporting losses. As a result, partners may neglect to implement appropriate financial controls in areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding.
Background

Financial derivatives are globally used financial products that unbundle exposure to an underlying asset and transfer risks—the exposure to financial loss caused by adverse changes in the values of assets or liabilities—from entities less able or willing to manage them to those more willing or able to do so. The values of financial derivatives are based on an underlying reference item or items, such as equities, debt, exchange rates, and interest rates. Since 2001, interest rate contracts have made up the vast majority of all financial derivative contracts, accounting for, on average, 80 percent of all derivatives in terms of notional amount outstanding, and are used to hedge against changes in the cost of capital. Parties involved in financial derivative transactions do not need to own or invest in the underlying reference items, and often do not. The most common purpose of financial derivatives is to manage the holder’s risk, which is often accomplished by constructing financial derivative contracts whose gains offset losses in the values of the holder’s assets or liabilities. Financial derivatives are sold and traded either on regulated exchanges or in private, over-the-counter markets that allow highly customized transactions specific to the needs of the parties. Financial derivatives are bilateral agreements that shift risk from one party to another but can be used to structure more complicated arrangements involving multiple transactions and parties. Simple financial derivatives act as building blocks for more complex products and can be broken down into three general categories of products, described in figure 1. Credit derivatives, depending on their structure, fall into one of these three categories but are often measured as a separate category by government agencies. Dealers participate in the financial derivatives market by quoting prices to, buying derivatives from, and selling derivatives to end users and other dealers.
They also develop customized derivative products for their clients. Commercial banks, which most often act as dealers, are typically one of the two parties involved in financial derivative transactions. In 2010, the holdings of five large commercial banks represented over 95 percent of the banking industry’s notional amounts outstanding. End users, including commercial banks, securities firms, hedge funds, insurance companies, governments, mutual funds, pension funds, individuals, commercial entities, and other dealers, often use derivatives to protect against adverse changes in the values of assets or liabilities, a practice called hedging. Hedgers try to protect themselves from risk by entering into derivatives transactions whose values are expected to change in the opposite direction from the values of their assets or liabilities. According to a 2009 survey conducted by the International Swaps and Derivatives Association, over 94 percent of the largest companies worldwide use financial derivatives to manage and hedge risks. End users can also use derivatives for speculation by taking on risk in an attempt to profit from changes in the values of derivatives or their reference items. Derivatives are attractive to speculators because they can be more cost-effective than transactions in the underlying reference item, due to reduced transaction costs and the leverage that derivatives provide. Financial derivatives transactions are generally leveraged since parties to these transactions most often initiate the transaction with little money down relative to the expected outcome of the transaction. In any financial transaction, the degree of permissible leverage is determined by the collateral required to secure the transaction. While a high degree of leverage has the potential for large gains, it also carries risks of large losses.
As we and others have reported, the risk exposures resulting from derivatives were one of many factors that contributed to the systemic risk that led to the recent financial crisis. The market for financial derivatives has grown considerably in size over the past two decades. Two common ways to measure the size of financial derivative markets overall are total notional amount and fair market value. Total notional amount represents the value of the reference items underlying financial derivative transactions, and is the amount upon which payments are computed between parties of derivatives contracts. Notional amount does not represent money exchanged, nor does it represent the risk exposure. The second measure, fair market value, can be either the gross positive fair value or the gross negative fair value. The gross positive fair value represents the sum of the fair values of the financial derivative contracts where the commercial bank is owed money by the other party in the contract and represents the maximum losses the bank could incur if all other parties in the contracts default. According to the OCC, between the first quarter of 1999 and the fourth quarter of 2010, the total notional amount outstanding used to calculate payments for derivatives contracts held by insured U.S. commercial banks and trust companies grew over six times, from $32.7 trillion to $231.2 trillion. For those same institutions, gross positive market value grew nearly seven-and-a-half times, from $0.46 trillion to $3.87 trillion (see fig. 2). The difference between these measures arises because the notional amount is used to calculate payments from the contracts, which are typically a small percentage of the notional amount. The net present value of these payments determines, in part, the gross positive market value. Because commercial banks are one of the parties involved in over 95 percent of financial derivative contracts, these measures are good indicators of the entire U.S. market.
The volatility seen in figure 2 during the latter part of 2007 and 2008 is attributed in part to turmoil in financial markets and banking consolidation. In part because different types of financial derivatives are reported differently to IRS by taxpayers and third parties, and in most cases are not clearly identified as financial derivatives, IRS told us that data are not available to estimate overall gains and losses from income earned on financial derivatives. IRS provides guidance to taxpayers in several forms, including regulations, notices, revenue rulings, revenue procedures, private letter rulings, announcements, and technical advice memorandums. Regulations, which provide taxpayers with directions on complying with new legislation or existing sections of the tax code, hold more legal weight than IRS’s other forms of guidance. Generally, regulations are first published in draft form in a Notice of Proposed Rulemaking, and final regulations are issued after public input is fully considered through written comments and potentially a public hearing. Where new or amended regulations may not be published in the immediate future, notices are often used to provide substantive interpretations of the tax code and other provisions of the law. In addition, IRS issues revenue rulings to provide official interpretations of the tax code, related statutes, tax treaties, and regulations. These interpretations are specific to how the law is applied to a particular set of facts. Revenue procedures provide return filing or other instructions concerning an IRS position. Private letter rulings are written statements issued to a single taxpayer that interpret and apply tax laws to that taxpayer’s specific set of facts. They are not officially published and may not be relied on as precedent by other taxpayers or IRS. Announcements, which generally have only immediate or short-term relevance, summarize laws and regulations without making any substantive interpretation; state what regulations will say in the future; or notify taxpayers of the existence of an approaching deadline.
Finally, technical advice memorandums are developed in response to technical or procedural questions that arise during an examination or during the processing of a case in Appeals.

Financial Derivatives Do Not Fit Neatly into the Tax System, Allowing Taxpayers to Use Them in Potentially Abusive or Improper Ways

Unique characteristics of financial derivatives make them particularly difficult for the tax code and IRS to address. The tax code’s current approach to the taxation of financial derivatives is characterized by many experts as the “cubbyhole” approach. Under this approach, the tax code establishes broad categories for financial instruments, such as debt, equity, forwards, and options, each with its own rules governing how and when gains and losses are taxed. As new instruments are developed, IRS and taxpayers attempt to fit them into existing tax categories by comparing the new instrument to the most closely analogous instruments for which tax rules exist. However, a new financial instrument could be similar to multiple tax categories, and therefore IRS and taxpayers must choose between alternatives. This could result in inconsistent tax consequences for transactions that produce the same economic results. Derivative contracts, particularly those traded over-the-counter, are highly flexible, allowing taxpayers to structure transactions to take advantage of the different tax rules for each tax category. Derivatives can also be coupled with each other and with other types of financial instruments, like more traditional debt or equity instruments, to create hybrid securities. Because hybrid securities often do not clearly fall within a single tax category, it can be challenging for IRS and taxpayers to determine which tax rules are appropriate and whether the hybrid should be treated as a single instrument or broken up into multiple instruments.
While the tax rules for each tax category represent Congress’s and Treasury’s explicit policy decisions, some of these decisions were made long before today’s complex financial derivative products were created. The cumulative effect of these decisions, combined with the fact that many financial derivatives do not fit neatly in any one tax category, can result in mistakes or opportunities for abuse by taxpayers.

A Patchwork of Rules from Different Parts of the Tax Code Governs the Taxation of Financial Derivatives

Tax rules governing financial derivatives can be broken down into rules governing the timing, character, and source of gains and losses, as described in table 1. These rules vary depending on a number of factors, including the type of financial derivative product, the underlying reference item, the transaction’s cash flows, the type of taxpayer, the taxpayer’s purpose for using the transaction, and applicable anti-abuse rules. Over time, as financial derivative products have been developed that do not fit neatly into existing tax categories, numerous tax rules have been created to address new financial products, sometimes without anticipating the relationship between those transactions and others. Therefore, tax rules for financial derivatives can vary widely from one transaction to another. While the source of gains and losses resulting from financial derivatives is generally that of the residence of the recipient, the tax rules for timing and character are more complicated. As stated above, some of these tax rules depend on the type of financial derivative product. For example, nonequity options not held to hedge a transaction are taxed under section 1256 of the Internal Revenue Code (IRC), which requires that gains and losses be marked to market at the end of the tax year and that their character be treated as 60 percent long-term capital and 40 percent short-term capital.
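The 60/40 character split under IRC section 1256 reduces to simple arithmetic. A minimal sketch, with the 60/40 split taken from the description above but the tax rates and dollar amounts purely illustrative assumptions:

```python
# Sketch of the IRC section 1256 character split described above: a
# marked-to-market gain is treated as 60 percent long-term and 40 percent
# short-term capital gain. The 15 and 35 percent rates are illustrative
# assumptions, not current-law rates.

def section_1256_tax(gain, long_term_rate=0.15, short_term_rate=0.35):
    long_term_portion = 0.60 * gain
    short_term_portion = 0.40 * gain
    return long_term_portion * long_term_rate + short_term_portion * short_term_rate

# A $1,000 marked-to-market gain: $600 taxed at 15 percent, $400 at 35 percent.
print(section_1256_tax(1000))  # 230.0
```

Under these assumed rates, the blended rate on a section 1256 gain is 23 percent, between the pure long-term and pure short-term rates, which is why the split matters to taxpayers.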
Equity options held by dealers are also taxed under section 1256. However, for equity options not held by dealers, the timing rules are that gains and losses are not realized until the contract matures. Depending on the option’s term, the character is either 100 percent short-term capital gain or loss or 100 percent long-term capital gain or loss. Some tax rules for character also depend on the underlying reference item, regardless of the transaction type. An example of this is a foreign currency contract (known as a section 988 transaction), which may be ordinary or capital depending on a variety of factors outlined in IRC section 988. The gains or losses on a section 988 transaction are ordinary to the extent they are due to changes in exchange rates. However, the taxpayer may elect capital treatment in certain instances if the contract is a capital asset in the hands of the taxpayer and not part of an offsetting position, also known as a straddle. Other timing and character rules are based on the purpose of the transaction, such as transactions used for hedging, which are generally treated as ordinary and timed according to the hedged item. Regardless of the type of transaction or reference item, if the transaction qualifies as a hedge, these rules apply. There are also timing and character rules that are based on the type of taxpayer. For example, the rules under IRC section 475 apply to dealers in securities and, if they elect, to commodities dealers and traders in securities or commodities; these taxpayers must generally mark securities or commodities to market under IRC section 475 and recognize gains and losses annually. The character of these gains and losses is ordinary. Since a securities dealer is typically one of the two parties involved in a financial derivative transaction, this often results in different tax treatments for the two sides of the transaction.
The dealer would generally mark to market annually the gains and losses from a financial derivative contract and treat the income or losses as ordinary, while the other party to the transaction would be taxed depending on the factors described in this section. Finally, the rules for the timing and character of financial derivatives can also vary for different types of payments within a single financial derivative transaction. For example, periodic payments in a notional principal contract (NPC) are treated differently from nonperiodic payments. Periodic payments are taxed as ordinary income and recognized annually on an accrual basis like interest payments. Nonperiodic payments must be amortized and recognized as ordinary income over the life of the contract. However, early termination payments in an NPC are recognized for timing purposes when they occur, and they give rise to capital gain or loss if the underlying item is capital. In contrast, nonperiodic, contingent payments do not have defined treatment; the tax rules only require taxpayers to account for the payments in a manner consistent with other tax positions. Proposed regulations issued in 2004 stated that taxpayers could use a noncontingent swap method to determine the timing and character of these payments or elect mark-to-market treatment. There are a number of anti-abuse rules that can supersede the tax rules described above, which further complicate the tax treatment of a transaction. Many sections of the tax code exist for the sole purpose of applying additional rules for certain transactions, including IRC sections 1091 (wash sales), 1092 (straddles), 1233 (short sales), 1258 (conversion transactions), 1259 (constructive sales), and 1260 (constructive ownership transactions).
For example, under IRC section 1092, for two or more transactions that are offsetting positions, known as a straddle, a realized loss on one position is currently deductible only to the extent that the loss exceeds unrecognized gains in any remaining offsetting positions. A second example involves constructive sales, or transactions that are treated as sales for tax purposes even though ownership in the property may not have legally transferred. A constructive sale occurs when a taxpayer enters into a short sale of the same or substantially identical property, an offsetting notional principal contract with respect to the same or substantially identical property, or a futures or forward contract to deliver the same or substantially identical property. Under IRC section 1259, taxpayers are considered to have sold a position at fair market value on the date of the constructive sale. The tax rules for character, timing, and source described above can be challenging for both taxpayers and IRS to apply. Where these rules overlap or contradict one another, they can create gray areas that allow the same economic outcome to be taxed differently. Even anti-abuse rules, some of which IRS and tax experts believe are largely effective, can contribute to the uncertainty because determining when to apply them can be difficult.

Financial Derivative Transactions with Economically Similar Positions Can Have Inconsistent Tax Treatments

One basic principle of taxation commonly used to evaluate the tax treatment of financial derivatives is consistency, meaning that transactions with equivalent economic outcomes are taxed the same. The tax rules for financial derivatives with equivalent economic outcomes are not always consistent, in part because of their piecemeal development over time as well as the difficulty of developing tax rules for products that are constantly changing.
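The straddle loss-deferral rule of IRC section 1092 described above reduces to simple arithmetic. A minimal sketch, with hypothetical dollar amounts:

```python
# Sketch of the straddle loss-deferral rule under IRC section 1092
# described above: a realized loss on one position is currently deductible
# only to the extent it exceeds unrecognized gain in the offsetting
# position; the remainder of the loss is deferred.

def straddle_deductible_loss(realized_loss, unrecognized_gain):
    """Return (currently deductible loss, deferred loss)."""
    deductible = max(0, realized_loss - unrecognized_gain)
    deferred = realized_loss - deductible
    return deductible, deferred

# A $100 realized loss with $80 of unrecognized gain in the offsetting
# position: only $20 is currently deductible; $80 is deferred.
print(straddle_deductible_loss(100, 80))  # (20, 80)
```

When the unrecognized gain equals or exceeds the realized loss, the entire loss is deferred, which is what prevents a taxpayer from harvesting the loss leg of an economically hedged pair.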
For some types of financial derivatives, similar transactions can fall under different tax rules, particularly if the transactions do not fit well into the tax categories of the existing tax code. While the pretax economic outcome of a taxpayer using a financial derivative and actually holding the financial asset underlying the derivative may be the same, due to the inconsistent tax treatment of derivatives, the after-tax outcome can be starkly different. The flexibility in financial derivative contracts allows them to be used to mimic different economic positions. By combining the basic building blocks of financial derivatives highlighted in table 1, together with traditional instruments like debt and equity, taxpayers can create virtually any synthetic position that provides the same economic returns as the reference item without actually owning the reference item. Financial derivatives therefore allow users, in many circumstances, to structure transactions to alter the timing, character, and source of gains and losses to produce more tax-favorable results. For example, through financial derivatives taxpayers can defer gains or accelerate losses, change ordinary income into capital gains or vice versa for losses, or alter the source of the gains to avoid paying withholding taxes. While permitting taxpayers to elect a more favorable tax treatment is not uncommon, when they have done so using financial derivatives, the result has at times been disallowed by Congress. In other cases, IRS and Treasury have successfully challenged taxpayers’ treatment of financial derivatives during audit or in litigation. In certain instances, financial derivative transactions can be used to produce equivalent economic outcomes that have different tax results because one financial derivative can fall under numerous tax rules. One prominent example of this is the credit default swap (CDS), which first appeared on the market in the early 1990s.
As is shown in figure 3, in a CDS, the buyer pays a periodic fee to the seller in return for compensation if a specified credit event occurs to a reference item. The reference item may be bonds or loans from a corporate entity, sovereign debt, an asset, or an index of these. The credit event may be default, bankruptcy, debt restructuring, or any number of events related to a significant loss in value of the underlying reference item. Although CDSs became prominent in the market in the 1990s, their tax treatment has remained uncertain. In the absence of guidance, taxpayers do not treat CDSs uniformly, instead selecting the tax position that is most favorable. Taxpayers commonly elect NPC treatment for CDS transactions. As discussed previously, different payments from an NPC have different character treatments, and CDS users can take advantage of these differences to lower their tax liability when one party in the transaction is neutral to the tax results. For example, when a taxpayer holds a CDS that has appreciated in value and the other party is a dealer, rather than hold the contract until maturity and pay ordinary rates on the income, the taxpayer can terminate the contract early. By doing so, the taxpayer receives a termination payment of the same economic value but pays the lower long-term capital gains rate. Experts we interviewed stated that the inconsistent treatment of CDSs increases compliance risk for taxpayers. When Treasury and IRS issue final guidance, they may determine that the appropriate tax treatment of CDSs does not align with the treatment a taxpayer has elected, and there is a risk that a different treatment could be imposed on transactions entered into prior to the new guidance. Financial derivatives also allow taxpayers to take advantage of inconsistencies between asset classes, such as differences in deductions between payments on debt and equity.
Taxpayers have done this with one type of financial derivative by coupling a forward contract with the issuance of debt, a structure that is one type of mandatory convertible security. Mandatory convertibles are a broad class of securities linked to equity that automatically convert to common stock on a specific date and allow the issuer to raise capital that is treated as debt in financial statements. However, interest payments on the issuance can be deducted, which would not be possible with dividend payments on a more traditional equity security. In the transaction, a corporation issues units of the security that consist of two components: a forward contract to purchase the corporation’s equity, and debt in the form of the corporation’s note. The purchaser of the unit pledges the note back to the corporation as collateral to pay the settlement price of the forward contract. If the note and the forward are treated as a single hybrid instrument for tax purposes, the single instrument resembles an equity position, and payments on such a position would not be deductible. Currently, the note and the forward can be separated for tax purposes under certain circumstances, in which case the corporation can deduct all payments on the note as interest payments on debt, despite the presence of the forward contract. Financial derivatives have also allowed taxpayers to mimic the ownership of assets such as equities while achieving a lower tax liability than direct ownership of those assets. One example of this was the disparate treatment of dividend payments on U.S. equity and dividend-equivalent payments from a total return equity swap (equity TRS) held by foreign entities. Foreign entities must pay 30 percent withholding taxes on any dividends received from U.S. sources because the dividend is considered U.S.-source income. However, until recently payments from a swap based on that U.S.
asset would not be subject to withholding taxes, because swaps are sourced to the residency of the recipient of the swap payments, which is the foreign entity in the case of equity TRSs that attempt to avoid withholding taxes. To avoid withholding taxes on dividends, a foreign entity would enter into an equity TRS, replacing the dividends with dividend-equivalent payments arising from the swap. In this transaction, a U.S. financial institution pays the foreign entity a cash flow equivalent to the dividends of a given stock plus any appreciation in the stock price, while the foreign entity pays a floating interest rate agreed upon when entering the contract, plus any depreciation in the stock. The contract results in the foreign entity mimicking stock ownership without paying withholding tax by taking advantage of differences in source rules for dividend payments and dividend-equivalent payments. The use of equity TRSs by foreign entities to avoid withholding was standard practice from the 1990s until the Hiring Incentives to Restore Employment (HIRE) Act statutorily required withholding on dividend-equivalent payments (see fig. 4). For tax years before the enactment of the HIRE Act, IRS is challenging equity TRS transactions through the examination process, arguing that they were used to improperly avoid withholding taxes. In addition, IRS issued an Industry Director Directive on January 14, 2010, to assist IRS agents in identifying and developing cases with questionable equity TRS transactions. Another example of taxpayers using the inconsistent tax treatment between financial derivatives and direct ownership of the underlying asset was the variable prepaid forward contract (VPFC) held in combination with a share-lending agreement. Taxpayers attempted to use this transaction to defer income by mimicking a sale of equity without recognizing the gains for tax purposes.
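The withholding arithmetic behind the equity TRS described above can be sketched with hypothetical figures. The 30 percent rate is the statutory withholding rate on U.S.-source dividends; the no-withholding result reflects the pre-HIRE Act sourcing rule for swap payments.

```python
# Illustrative only: a foreign holder's net receipts from U.S. dividends
# under direct ownership versus an equity TRS before the HIRE Act.
# The dividend amount is hypothetical.

WITHHOLDING_RATE = 0.30  # statutory rate on U.S.-source dividends

def direct_ownership_net(dividend):
    """Dividends on U.S. stock are U.S.-source income, so 30 percent
    is withheld before the foreign holder is paid."""
    return round(dividend * (1 - WITHHOLDING_RATE), 2)

def equity_trs_net(dividend_equivalent):
    """Pre-HIRE Act, a dividend-equivalent swap payment was sourced to
    the recipient's residence, so no U.S. withholding applied."""
    return dividend_equivalent

dividend = 1_000_000  # hypothetical annual dividends on the reference stock
print(direct_ownership_net(dividend))  # 700000.0
print(equity_trs_net(dividend))        # 1000000
```

The swap thus delivered the full dividend stream to the foreign entity, where direct ownership would have delivered only 70 percent.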
When taxpayers sell an appreciated security, they must pay short-term or long-term capital gains taxes upon sale. However, over the past decade, taxpayers have used VPFCs to monetize gains in a security's value without paying taxes at the time of the sale. In situations where VPFCs have been used for this purpose, taxpayers agree to sell a variable number of shares to the other party in the transaction, usually an investment bank, at an agreed-upon date, typically 3 to 5 years in the future. VPFCs are customized to the investor, and an option to cash-settle is usually included in the contract. The number of shares delivered (or the cash value thereof) is based on a formula involving the stock price on the contract's expiration date. The dealer typically pays the taxpayer an up-front amount, between 75 percent and 85 percent of the market value of the shares, that is not required to be repaid. By exploiting differences in timing rules, the taxpayer thus closely mimics the sale of stock, but the income is not recognized for tax purposes until the contract matures. Because of the variability in the number of deliverable shares, the transaction avoids anti-abuse rules that do not permit deferred recognition of prepaid sales (see fig. 5). While holding a VPFC, taxpayers still retain control of the underlying asset. To earn a greater return on the VPFC, as discussed above, taxpayers sometimes couple the VPFC with a share-lending agreement. Under such an agreement, the taxpayer lends the shares to the investment bank, which may sell, invest, or otherwise use them in its course of business. In this manner, the taxpayer has transferred substantially all of the attributes of owning the shares, but has argued that the shares have not been sold for tax purposes (see fig. 6).
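Contract formulas vary, but a common collar-style structure illustrates how the number of deliverable shares depends on the final stock price. All parameters below (share count, floor, cap, and the assumed 80 percent up-front payment) are hypothetical.

```python
# Illustrative only: a collar-style VPFC share-delivery formula. Actual
# contracts are customized; this is one common structure, with made-up
# parameters.

def vpfc_shares_delivered(n_shares, floor, cap, final_price):
    """Shares owed at expiration under a typical collar formula."""
    if final_price <= floor:
        return float(n_shares)                 # deliver all pledged shares
    if final_price <= cap:
        return n_shares * floor / final_price  # deliver fewer shares
    # above the cap, the taxpayer keeps gains only up to the cap
    return n_shares * (floor + (final_price - cap)) / final_price

# Hypothetical contract: 100,000 shares trading at $20, floor $20, cap $26.
# Up-front payment at an assumed 80 percent of market value, not repaid:
upfront = 100_000 * 20 * 0.80  # 1,600,000

print(vpfc_shares_delivered(100_000, 20, 26, 15))  # 100000.0
print(vpfc_shares_delivered(100_000, 20, 26, 25))  # 80000.0
print(vpfc_shares_delivered(100_000, 20, 26, 40))  # 85000.0
```

Because the delivery count varies with the final price, the taxpayer retains some exposure to the stock, which is what keeps the transaction outside the prepaid-sale anti-abuse rules described above.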
The current tax treatment is not the only possible method of taxing financial derivatives, and experts have suggested a number of alternatives that they believe would adopt a more consistent view of financial derivatives and potentially reduce abuse. For example, one common idea is to require mark-to-market treatment of all financial derivatives for all taxpayers, meaning that all gains and losses from financial derivatives would be recognized at the end of each tax year, and to treat all such income as ordinary income. While this approach would result in higher tax burdens for some, proponents cite benefits that include reduced compliance costs and a reduced potential for abuse. This report does not evaluate this approach or any other alternative approaches, which would require significant changes to the tax code. We have previously developed criteria for establishing a good tax system, which include equity; economic efficiency; and simplicity, transparency, and administrability. Consistency, the criterion used in this report, is related to simplicity, administrability, and economic efficiency. While the examples above describe issues that arise from inconsistent tax rules, any alternative approach would involve tradeoffs among these criteria. In considering the effects of alternative tax rules on economic efficiency, IRS and several experts told us that one potential effect of any alternative with less favorable tax outcomes could be that certain financial sector activity might leave the United States. Because they define tax policy and administer the tax code, Treasury and IRS are in the best position to recommend an alternative approach to the taxation of financial derivatives.
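The mark-to-market alternative described above can be sketched with hypothetical values and an assumed 35 percent ordinary rate: each year's change in value is recognized immediately rather than deferred to disposition.

```python
# Illustrative only: annual tax under the mark-to-market alternative
# discussed above. The rate and the position values are assumptions.

ORDINARY_RATE = 0.35

def mark_to_market_taxes(year_end_values, cost):
    """Tax due each year on that year's change in the position's value.
    Losses produce negative amounts (i.e., they offset other income)."""
    taxes, prev = [], cost
    for value in year_end_values:
        taxes.append(round((value - prev) * ORDINARY_RATE, 2))
        prev = value
    return taxes

# Position acquired for 100, marked at 120, 110, and 150 over three years
taxes = mark_to_market_taxes([120, 110, 150], cost=100)
print(taxes)       # [7.0, -3.5, 14.0]
print(sum(taxes))  # 17.5, i.e., the total gain of 50 taxed at 35 percent
```

The total tax equals the realized gain taxed at the ordinary rate; what changes is the timing (yearly recognition) and the elimination of character differences between ordinary income and capital gain.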
Challenges Slow the Development of Guidance on Financial Derivatives, Increasing Uncertainty and Potential for Abuse In implementing tax laws enacted by Congress, Treasury and IRS play the crucial role of translating tax laws into detailed regulations, rules, and procedures. When application of the law is complex or uncertain, as is often the case for financial derivatives, guidance is an important tool for addressing tax compliance and emerging abusive tax schemes. Particularly when financial derivative products are new, how they should be taxed under the current tax regime can be unclear. However, Treasury and IRS face a number of challenges in developing guidance for financial derivatives that may delay its completion. Although taxpayers are accustomed to exercising judgment when taking a tax position for their transactions, the lack of clarity for many derivatives can lead to heightened compliance risk and abuse. Taxpayers we interviewed said that Treasury and IRS have not issued guidance on a number of financial derivative tax issues that have a significant impact on their decision making. For example, before the passage of the HIRE Act in 2010, the last guidance IRS issued on transactions that avoid withholding taxes on dividends, similar to cross-border equity TRSs, was final regulations issued in 1997. During the past two decades, the use of equity TRSs to avoid withholding taxes grew as many taxpayers interpreted the lack of tax guidance as IRS's approval of the tax treatment of the transaction. Similarly, IRS has not issued final regulations on contingent swaps since the proposed regulations in 2004, and finalized guidance on the appropriate tax treatment of CDSs has not been issued since a notice requesting comment on their tax treatment in 2004 (see fig. 7 for a timeline). This leaves taxpayers with little clarity on how to treat gains or losses from a swap payment dependent on a contingency.
Contingent swaps are swaps that contain contingent nonperiodic payments determined by the occurrence of a specified event, such as the price movement of an underlying asset. CDSs are a special type of contingent swap, where the triggering event is a credit event, such as the default of debt issued by a third party. According to IRS, the only requirement for taxpayers is that they clearly reflect income in their method of accounting for these transactions. IRS first issued a notice soliciting comments on the tax treatment of contingent swap payments in 2001, which eventually led to a first round of proposed regulations in 2004. These proposed regulations offer two accounting methods: (1) mark-to-market treatment, or (2) annually projecting the expected value of the contingent payment and paying the appropriate tax as if it were a nonperiodic, noncontingent payment, known as the noncontingent swap method. Since issuing the proposed regulations in 2004, IRS has gone through several internal iterations of draft regulations without issuing final regulations on contingent swaps. IRS first learned of CDSs in a request for a private letter ruling from a taxpayer in 2000. However, IRS did not issue any guidance on CDSs until 2004, when it requested information from taxpayers on four alternative treatments. In the absence of finalized guidance, the 2004 notice allows taxpayers to place CDSs in one of four distinct tax categories for financial instruments. Experts and practitioners told us that the tax treatment of CDSs is unclear, that the alternatives do not necessarily arrive at the same tax liability, and that taxpayers do not uniformly use one of the alternatives. The lack of guidance has resulted in taxpayers choosing different tax treatments and, according to some taxpayers we interviewed, deferring income recognition even when they are reasonably certain of gains.
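The two accounting methods in the 2004 proposed regulations can be sketched in simplified form with hypothetical numbers. The true-up mechanics of the noncontingent swap method are more involved than shown here; this sketch only conveys the basic difference in when income is recognized.

```python
# Illustrative only: the two accounting methods in the 2004 proposed
# regulations for contingent nonperiodic swap payments, heavily
# simplified. All figures are hypothetical.

def mark_to_market_income(year_end_values, start_value=0.0):
    """Method 1: recognize each year's change in the contract's value."""
    income, prev = [], start_value
    for value in year_end_values:
        income.append(value - prev)
        prev = value
    return income

def noncontingent_swap_income(projected_payment, years, actual_payment):
    """Method 2 (simplified): accrue the projected value of the contingent
    payment ratably, as if it were noncontingent, then true up at the end
    when the actual payment is known."""
    accrual = projected_payment / years
    income = [accrual] * (years - 1)
    income.append(accrual + (actual_payment - projected_payment))
    return income

# A contingent payment projected at 90 over a 3-year swap; 120 actually paid
print(noncontingent_swap_income(90.0, 3, actual_payment=120.0))
# [30.0, 30.0, 60.0]
```

Both methods pull income forward relative to waiting for the contingency to resolve, which is why the choice between them, and whether either can be required, has been so contested.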
Taxpayers and experts we interviewed also stated that the inconsistent treatment of CDSs increases the tax compliance risk they face: Treasury and IRS may determine that the final tax treatment of CDSs does not align with how some taxpayers are treating CDSs now, and that determination may be applied to transactions entered into in prior years. The absence of guidance on contingent swaps and CDSs affects IRS's ability to assess tax liability and address potential abuse. When IRS exam teams identify a potentially abusive financial derivative used by a taxpayer, they have a number of resources to understand the tax effects and determine the appropriate tax liability. IRS has specialists in financial instruments who regularly assist revenue agents, as well as IRS attorneys who provide specialized legal advice. When an IRS exam division determines that a potential abuse is widespread enough to warrant the resources needed to address it broadly, there are multiple avenues for raising the issue beyond the particular exam. One of these avenues is the issuance of guidelines by an IRS exam division to field examiners, such as an Industry Directive, as was the case with cross-border equity TRSs. Another mechanism is a request for nonprecedential guidance from IRS's Chief Counsel in the form of a legal memorandum to IRS staff. Issues can also be developed into a series of cases for litigation. For issues that are broad enough, Chief Counsel can eventually issue published guidance, which differs from the previous options in that it typically has a broader legal application; the other alternatives are generally not legally binding on IRS or taxpayers, except with regard to the taxpayer involved. As another example, for VPFCs with share-lending agreements, IRS has issued guidance from exam divisions and Chief Counsel, and has also developed a number of cases for litigation.
IRS and Treasury issue guidance in the form of regulations, revenue rulings, revenue procedures, notices, and announcements, as well as other types of guidance. IRS Chief Counsel and Treasury's Office of Tax Policy have established a prioritization schedule for developing and issuing guidance, known as the Priority Guidance Plan (PGP). The PGP is issued each year and identifies the guidance projects that IRS and Treasury intend to complete during the 12-month period running from July 1 to June 30 of that PGP year. The PGP is available to the public on IRS's website and is updated periodically to include additional guidance project priorities and to identify which guidance projects have been completed up to that point. However, periodic updates to the PGP do not identify guidance projects that have been removed from the plan without any guidance having been issued. Not all guidance projects being worked on are on the PGP, and a number of pieces of guidance affecting derivatives were not PGP projects. For example, Notice 2002-35, which addressed tax shelters using NPCs, was not on the PGP before it was published. The PGP serves as both a public statement of the guidance taxpayers can expect to receive over a 12-month period and an internal prioritization of resources within IRS and Treasury. Given the pace at which derivative markets evolve, timely guidance that addresses tax issues is important to reduce uncertainty and opportunities for abusive tax strategies. However, Treasury and IRS face a number of challenges, discussed below, that may delay the completion of guidance. Between 1996 and 2010 IRS and Treasury Did Not Complete One-Fourth of the Priority Guidance Projects That Involved Financial Derivatives We analyzed 53 projects involving financial derivatives that IRS and Treasury have placed on the PGP since 1996 and found that one-fourth of the projects were not completed (see table 2).
Almost all of the guidance projects that were completed were published within 2 years of first appearing on the PGP. Of the 53 projects on the PGP, IRS and Treasury completed just over half (29) within their first year on the plan and removed 5 that were not completed. Of the 19 projects that remained on the plan for 2 years or longer, just under half (9) were completed and 3 were removed without being completed. Only 1 of the 7 projects that were on the PGP for 3 or more years was completed as of the end of the 2010 PGP year (June 30, 2010). Some of the PGP projects that were removed or not completed from 1996 through 2010 dealt with tax issues related to the case studies described above on contingent swaps and equity TRSs. Figure 8 presents the completion rates for projects related to financial derivatives on the plan for 1 or more years. On the basis of our analysis, we found that projects not issued within 3 years were more likely to be regulations and to relate to more complex issues. Four projects that were on the plan for 4 years or longer without being completed were regulations addressing particularly controversial and complicated issues, including (1) capitalization of interest and charges in straddles under IRC section 263(g), (2) constructive sales rules under IRC section 1259, (3) contingent payments in notional principal contracts, and (4) elective mark-to-market accounting for certain qualifying taxpayers under IRC section 475. While it is important for IRS and Treasury to finalize guidance on these projects to provide clarity to IRS and taxpayers, there are a number of challenges involved, including the patchwork structure of the relevant tax rules and other issues discussed below. These challenges can make it difficult to issue guidance on the tax treatment of financial derivatives within the 12-month time frame established in the PGP.
Challenges Specific to Financial Derivatives Slow the Guidance Process While some reasons for the delay in the issuance of guidance on financial derivatives are common to all guidance projects, financial derivatives have characteristics that present particular challenges to IRS and Treasury. Overcoming these challenges requires time and resources, which can cause significant delays in issuing guidance that addresses the concerns facing taxpayers and IRS. The growing sophistication of financial derivatives and the complex tax rules governing them have made it difficult for Treasury and IRS to resolve issues not addressed in legislation or existing guidance. First, financial derivative products can involve multiple transactions and entities, or their terms can be altered to reach different tax results. These factors impede IRS's ability to identify a product's economic outcome, business purpose, and the applicable tax rules. VPFCs illustrate this complexity: IRS concluded, and the courts agreed, that some claimed tax results of the transactions were improper, depending on the entities involved and the other contracts with which the VPFCs were coupled. Second, multiple tax rules can apply to the same financial derivative product depending on factors such as the type of taxpayer, the underlying asset, and the context in which the product is being used. Treasury and IRS often spend years working through these complexities, and at times have been unable to reach a consensus. The tax treatment of gains and losses that are contingent on a particular event is an example of an issue that Treasury, IRS, and private sector experts have identified as particularly difficult to resolve. Treasury and IRS legal counsel have devoted considerable resources (as of April 2011, IRS alone had logged nearly 7,800 staff hours over a 9-year period) to determine the appropriate treatment of contingent swap payments, but have been unable to reach a consensus.
Although contingent swaps have been on the PGP every year since proposed regulations were issued in 2004, Treasury and IRS have been unable to establish the appropriate treatment of contingent nonperiodic payments in final regulations, in large part due to the complexity of the timing and character issues, as well as other issues discussed earlier. CDSs have also been the subject of considerable analysis by Treasury, IRS, and experts regarding the appropriate tax treatment. Since issuing a notice in 2004, IRS has not issued any guidance on how CDSs should be treated for tax purposes. During this time, the structure of CDS products has diversified from products referenced to a single entity to products based on a pool of obligations, such as an index, and to others that are rolled into more complex products. IRS and Treasury have not established the appropriate treatment of a basic CDS product, a necessary first step in determining the tax treatment of more complex CDS products. The timeliness of Treasury and IRS guidance is also affected by concerns that issuing new guidance could provide new opportunities for tax abuse. This is especially true for financial derivatives, as they can easily be altered to achieve a desired tax effect. IRS told us that whether it issues further guidance depends on a careful consideration of the possible unintended effects that guidance might have, and that Treasury and IRS must carefully evaluate potential guidance changes to help ensure that, while addressing problems in one area, they do not raise issues in another. One example of guidance that had unintended consequences was IRS Notice 97-66, which dealt with withholding taxes on dividend-substitute payments. Certain payments made by a domestic entity to a foreign entity may be subject to a 30 percent withholding tax, depending on the source rules for that type of payment.
Dividend payments made from owning equity and dividend-substitute payments made from a securities loan are subject to withholding tax. Prior to the passage of legislation in 2010, some taxpayers and their representatives took the position that dividend-equivalent payments made from an equity TRS were not subject to withholding tax. IRS had begun challenging the equity TRS transaction based on judicial doctrines before the 2010 legislation was enacted. When Treasury finalized the regulations for dividend-substitute payments in 1997, tax practitioners were concerned that the regulations could result in the cascading of withholding taxes in cases where the same shares of equity were lent between two foreign parties. As seen in figure 9, in this transaction, both the actual dividend and the dividend-substitute payment would be subject to withholding tax, resulting in total withholding exceeding the 30 percent withholding rate. In response, IRS issued Notice 97-66, intended to prevent cascading withholding in instances like the one described above. However, some financial institutions took the position that a literal reading of the IRS notice meant that a dividend-substitute payment made between two foreign parties located in jurisdictions subject to the same withholding rate was not subject to any withholding tax. As seen in figure 10, in this transaction, the dividend-equivalent payment would not be subject to withholding tax because of 1991 Treasury regulations, and the dividend-substitute payment would not be subject to withholding tax based on the taxpayer's interpretation of IRS Notice 97-66. As stated above, Congress eventually disallowed the avoidance of dividend withholding through this transaction with the passage of the HIRE Act.
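The arithmetic behind the cascading concern can be shown with a hypothetical $100 dividend: if withholding applies both to the actual dividend and to each dividend-substitute payment down a chain of securities loans, total withholding on one economic dividend exceeds the intended 30 percent.

```python
# Illustrative only: cascading withholding when lent shares pass a
# dividend through a chain of foreign parties. Amounts are hypothetical.

RATE = 0.30  # withholding rate applied at each layer

def total_withheld(dividend, layers):
    """Total tax withheld if the actual dividend and each substitute
    payment across `layers` payment legs are all subject to withholding."""
    return round(dividend * RATE * layers, 2)

dividend = 100.0
# Figure 9 scenario: the actual dividend plus one substitute payment
print(total_withheld(dividend, layers=2))  # 60.0
```

Two withheld layers on one $100 dividend leave $60 withheld, double the intended 30 percent rate, which is the outcome Notice 97-66 was meant to prevent.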
This example and others have made Treasury and IRS aware of the importance of weighing the need for guidance against the potential that new guidance may provide new opportunities for taxpayers to aggressively reduce their tax liability by altering the structure of a transaction. The ability of financial market participants to react quickly to guidance means that Treasury and IRS have to consider the unintended effects that may occur when issuing guidance. As the example above regarding Notice 97-66 indicates, identifying and mitigating any tax avoidance strategies that arise from issuing guidance can take a number of years. While timeliness is an important factor in issuing guidance, taking steps to ensure that the guidance is effective and achieves the desired results is also important for IRS and Treasury to consider. The considerable growth in financial derivatives markets has increased the potential economic effects of guidance issued by Treasury and IRS. IRS officials have said that in preparing guidance they do not consider the number of taxpayers taking a certain tax position on a financial derivative product, but rather base their decisions on tax rules established in the IRC, Treasury regulations, and judicial doctrine. However, in light of the size of a product's market, officials told us that it is important to consider the economic effects of their guidance decisions. Economic consequences of concern identified by officials and experts include losing financial business overseas to countries with more business-friendly tax regimes. An example of one of the economic risks facing IRS and Treasury surfaced during the process of issuing guidance on how withholding taxes should apply to cross-border derivative payments.
When Treasury and IRS considered requiring withholding on cross-border equity TRSs in 1998, Congress, IRS, and Treasury faced numerous concerns from taxpayers that this would limit foreign investment in the United States. Withholding has also been a concern for cross-border CDSs. In terms of notional amount outstanding, the U.S. share of the global CDS market has, on average, been about one-third of the total market since the end of 2004. IRS staff and private sector experts have said that subjecting CDSs to withholding tax presents a risk that investors will move their business overseas. IRS officials said that this has been a major impediment to resolving whether withholding tax should apply to CDSs, particularly in light of the rapid growth of the credit derivatives market. As Treasury and IRS work through the many complexities of issuing guidance for financial derivatives, they must also deal with institutional factors, such as staff turnover, legal authority, and the different roles of Treasury and IRS, that can delay the issuance of guidance. Staff turnover at IRS and Treasury can bring current market knowledge from the private sector; however, it can also affect the timeliness of guidance. New staff typically have to familiarize themselves with the issues raised in ongoing guidance projects, may have a different perspective on those issues, or may believe a project should have a different priority. Determining whether Treasury and IRS have the necessary authority to take certain positions can also delay the development of guidance projects. Treasury and IRS have at times been reluctant to explore and ultimately issue guidance to resolve tax issues when there is concern about whether IRS has the legal authority to require a certain tax treatment for financial products.
For example, although many experts consider mark-to-market treatment the most appropriate resolution for contingent swap payments, IRS had concerns about whether it could require taxpayers to follow mark-to-market treatment for those payments. IRS's enforcement responsibilities can also affect the time it takes to complete guidance projects. Treasury and IRS may want to issue guidance on a certain issue, but if IRS is litigating or auditing that issue, it may be difficult to consider alternative guidance positions once IRS has already taken a position in an audit or in court. For example, one reason that Treasury and IRS did not attempt to address gaps in existing guidance on VPFCs with new guidance was that IRS was litigating the issue and did not want to publish guidance that might affect its case. Delayed Guidance Increases Compliance Risk and Costs, among Other Negative Consequences Delays in the issuance of guidance on financial derivatives have substantial negative consequences for both taxpayers and IRS, which are summarized in table 3. However, IRS and Treasury may also benefit from not issuing timely guidance: the ambiguity that results from a lack of clear guidance could make taxpayers less willing to take risky tax positions, out of concern that IRS may later determine the position is abusive. For taxpayers, one of the main tax-related consequences is the increased compliance risk associated with uncertainty. (For an example see the sidebar on Exchange-Traded Notes.) For example, if no clear guidance exists on how to treat a transaction for tax purposes, taxpayers must develop their own position, which may differ from IRS's approach and present increased compliance risk. Tax positions may also differ among taxpayers, which causes a consistency problem for both taxpayers and IRS.
In developing tax positions where no clear guidance exists, taxpayers often look to other sources of information provided by IRS and Treasury that lack the legal status of finalized guidance. Tax experts said that they prefer written guidance to informal statements made by agency officials at conferences, which do not necessarily represent IRS's official position on a transaction. In addition, taxpayers rely on nonprecedential advice that IRS issues either to individual taxpayers or to IRS exam teams. If IRS disagrees with a taxpayer's position, the taxpayer is at risk of either penalties or litigation costs if the taxpayer decides to challenge the agency. If guidance is later issued that retroactively affects positions taken by taxpayers, this could put taxpayers' current positions at risk of being noncompliant, although officials said it may be unlikely that Treasury and IRS would do this as long as the taxpayer's method was reasonable and consistently applied. The absence of guidance also results in imperfect market competition. According to market participants, because most derivatives are not tax driven, contracts may be executed even if the tax results are unclear. Taxpayers may look for other parties in a transaction who are willing to take on the additional tax risk, resulting in what one expert called a “race to the bottom” as parties vie for business by taking on riskier tax positions. In addition, all of these issues can reduce taxpayers' confidence in the fairness of the tax system. (See IRS Revenue Ruling 2008-1 and IRS Notice 2008-2.) For IRS, one negative consequence of delays in guidance on financial derivatives is an increase in time and resources spent on examinations and litigation. Without clarity on a tax issue, audit teams must often spend more resources examining the tax results of derivative transactions, which may include requesting advice from IRS Chief Counsel, often a time-consuming process.
IRS staff told us that having clear, timely guidance can significantly reduce the amount of time and uncertainty revenue agents and IRS counsel encounter in resolving tax issues during an exam. If taxpayers and revenue agents have divergent views on tax positions, a technical advice memorandum or other legal memorandum may be requested, which can increase the amount of time in exam. If IRS is unable to issue guidance on a transaction, it may pursue a litigation strategy, which itself can take years and require a great deal of resources from both IRS and the taxpayer. Another negative consequence for IRS is that in the absence of guidance taxpayers may attempt to take positions that may be abusive. (For an example see the sidebar on Variable Prepaid Forward Contracts.) For both cross-border equity TRSs and VPFCs, delays in guidance from IRS led to the transactions becoming more widespread throughout the market. This may increase the burden on exam teams, which must address a greater number of completed transactions. Delays in issuing guidance can also put IRS's reputation at risk. Tax experts and practitioners we spoke with expressed frustration at the delay in the issuance of guidance on financial derivatives and at the lack of information on the status of guidance projects, which negatively affected their perceptions of IRS. Taxpayers Are Unaware of the Status of Guidance Projects for Financial Derivatives IRS and Treasury guidance priorities may change due to a number of factors, including changes in legislation, policy, market circumstances, and management agendas. Taxpayers need to know about these changes when they affect their tax planning and business decisions. As discussed earlier, one-fourth of derivative guidance projects were not completed between 1996 and 2010, and tax experts and practitioners we spoke with were not aware of the status and prioritization of many of these guidance projects.
Tax experts and practitioners stated that information about the status of projects was not publicly available and that they often learned of a project's status only through informal statements made by IRS and Treasury officials at conferences and other meetings. IRS purposely keeps some guidance projects off the public list when the issue is legally sensitive and public announcement could negatively affect IRS's efforts in an audit or litigation. The current system for communicating the PGP does not allow IRS to effectively communicate the status of guidance projects to taxpayers. For most years since 1996, IRS has issued periodic updates on its website to the PGPs after initial plans were released, listing projects that were added or completed during the year. All PGPs, whether initial or updated, are potentially subject to change. However, because projects can be added to the PGP at any time without an accompanying change in the publicly available plans, changes in guidance prioritization are not always clearly communicated to taxpayers. In addition, PGPs do not include target completion dates, something IRS uses internally, which would give taxpayers a clearer time frame for expecting guidance. Therefore, taxpayers lack clarity as to when they can expect guidance on issues that IRS and Treasury have publicly stated are priorities. While there may be challenges and risks in communicating more detailed information and updated status, particularly when there are unanticipated setbacks in the development of guidance, other federal agencies routinely do so. Providing status information for PGP projects would require IRS to maintain reliable internal monitoring data on guidance projects. IRS Chief Counsel uses a data management system, the Counsel Automated Systems Environment – Management Information System (CASE-MIS), to track the progress of guidance projects and monitor interim milestones in project lifecycles.
CASE-MIS has been available since 1996 and was modified in 2008. The effectiveness of this system has been critiqued multiple times over the past 10 years by the Treasury Inspector General for Tax Administration, and in response IRS has made improvements to the monitoring of projects in the database. In our own review of data used by IRS to monitor guidance projects, we found a number of data reliability issues that may impede the agency’s ability to effectively monitor guidance projects in order to report status to taxpayers. Most notably, the current status and target date of projects are not consistently recorded correctly for all projects. In addition, discerning when a project moved onto the PGP, its date of publication, and when or why it was removed without publication is difficult. This information is essential for IRS and Treasury to effectively manage the guidance process and to communicate project status to taxpayers. Although status information can be collected manually, the electronic management system is intended to improve efficiency and reporting capability by avoiding time-consuming manual data collection and processing. Since 2008, IRS has been taking steps to improve the effectiveness and reliability of CASE-MIS, including the issuance of staff memorandums and closer attention to reliable data entry, with the purpose of increasing efficiency, productivity, and decision making.

Opportunities Exist for IRS to Leverage Information from SEC and CFTC on Financial Derivatives

IRS Does Not Systematically or Regularly Communicate with SEC or CFTC on Financial Derivatives

Currently, IRS does not systematically or regularly communicate with SEC or CFTC on financial derivatives. IRS’s 2009-2013 Strategic Plan lists strengthening partnerships across government agencies to gather and share additional information as key to enforcing the law in a timely manner to ensure taxpayers meet their obligations to pay taxes.
SEC’s and CFTC’s oversight roles in financial derivative markets make them key agencies for IRS to partner with on financial derivatives. Both regulatory agencies told us that opportunities may exist to share additional information on financial derivatives with IRS. However, IRS’s ability to share taxpayer information with other federal agencies is limited under IRC section 6103, which governs the confidentiality of taxpayer data. IRS officials say that the lack of reciprocal information sharing is an impediment to effective collaboration with SEC and CFTC. IRS has occasionally received information from SEC on financial derivatives that were suspected of being used for abusive tax purposes. Such information, however, is received only on an ad hoc basis, either through requests initiated by IRS or referrals from SEC. SEC officials told us that when potential tax abuses have been identified and shared with IRS, the SEC examiner involved in the case typically had some tax expertise or had worked with IRS in the past. For example, in 2008, SEC examiners discovered a strategy employed by hedge funds to structure short-term capital gains into long-term capital gains through the use of options. This information was referred to IRS because SEC staff believed that IRS might conclude that the structuring of transactions in this manner resulted in an incorrect treatment of capital gains. IRS said that this information was essential to the eventual development and issuance of related guidance. However, agency officials told us that SEC and CFTC examiners often do not have tax expertise. As a result, potential tax abuses may not be identified and shared with IRS.

Information from SEC and CFTC Could Help IRS Identify New Products, Emerging Trends, and Relevant Issues

The proliferation of financial derivatives presents a challenge for IRS in identifying potential abuses and ensuring that timely guidance addresses the full range of financial derivative products.
In recent years, new uses of financial derivative products have been introduced, and abusive uses have spread faster as technology developments have made it easier to create new products. In the past, IRS met regularly with a group of federal agency officials, including those from SEC and CFTC, academics, and other market experts to discuss financial products, including financial derivatives, and market trends. The group was established by an academic institution and met for about 10 years beginning in 1990; participants joined the group by invitation. IRS and others who were part of the group told us that academic sponsorship encouraged both federal agency and private sector experts to join the group and candidly share information on new financial derivative products and uses. According to Treasury officials, regularly participating in these meetings with officials from SEC and CFTC and the private sector helped them to (1) identify new financial derivatives, (2) improve their understanding of these new products, (3) become aware of regulatory schemes that may have tax implications, and (4) make contacts with other knowledgeable agency officials and experts in financial derivative products. IRS told us that understanding all sides of a financial derivative transaction, both tax and regulatory, helps to clarify the purpose of the transaction and reveal potential tax abuse. Since the group disbanded when its academic sponsorship ended, there has been no regular, coordinated communication process for sharing information on financial derivatives among IRS, SEC, and CFTC. According to IRS officials, such a process could help IRS ensure it is fully using all available information to identify and address compliance issues and abuses related to financial derivatives.
In addressing problems in financial markets that emerge quickly, we have found that collaboration is especially important for federal agencies to maximize performance and to identify and resolve problems faster. IRS officials told us that they typically uncover new financial derivative abuses during an audit, meaning that by the time IRS identifies a financial derivative product and issues guidance, the market for that product can be relatively large and developed. SEC may identify new products and emerging trends in financial derivatives trading before IRS because new products on exchanges must be approved by SEC before they can be traded, and others may be disclosed in financial statements. According to IRS officials, improved collaboration could help IRS more quickly identify and analyze emerging financial trends and new products in the financial derivatives market before taxpayers even file their tax returns. According to IRS officials, having a more regular way to obtain information about certain sales reported to SEC in disclosures of insider trading could have sped IRS’s identification of the use of VPFCs with share-lending agreements. While taxpayers deferred income recognition by not treating a VPFC and share-lending agreement as constituting a sale on their tax returns, some of those taxpayers reported the transaction as a sale for SEC purposes. IRS officials obtained this information, but had they been regularly and systematically communicating with other agency officials on financial derivatives, problems with these transactions might have been identified earlier. IRS officials believe that because certain information on financial derivatives may be reported for both regulatory and tax purposes, reviewing certain types of transactions collaboratively with SEC and CFTC could help IRS better identify abuse.
For example, IRS told us that certain information on financial derivatives from SEC Forms 4, which relate to insider trading, and Forms 10-K has been useful for identifying new financial derivative products and potential tax issues. Federal banking regulators, such as OCC, also have information on financial derivatives. Although the federal banking regulators do not oversee derivatives markets, their oversight of banking institutions includes evaluations of risks to bank safety and soundness from derivatives activities. For example, as we reported in 2009, their oversight captures most CDS activity because banks act as dealers in the majority of transactions and because they generally oversee CDS dealer banks as part of their ongoing examination programs. Furthermore, as OCC-regulated banks may only engage in activities deemed permissible for a national bank, the agency periodically receives requests from banks to approve new financial activities, including derivatives transactions. Information collected during these reviews may provide IRS with information on financial derivatives. As we were completing our audit work, IRS officials told us that they had recently begun developing plans to have regular meetings with SEC to discuss new products and emerging issues related to financial derivatives. In previous work, we have established best practices on interagency coordination to help maximize results and sustain collaboration. These best practices suggest that agencies should look for opportunities to enhance collaboration in order to achieve results that would not be available if they were to work separately. Federal agencies can enhance and sustain collaborative partnerships and produce more value for taxpayers by applying the following eight best practices:

1. Define and articulate a common outcome.
2. Establish mutually reinforcing or joint strategies.
3. Identify and address needs by leveraging resources.
4. Agree on roles and responsibilities.
5. Establish compatible policies, procedures, and other means to operate across agency boundaries.
6. Develop mechanisms to monitor, evaluate, and report on results.
7. Reinforce agency accountability for collaborative efforts through agency plans and reports.
8. Reinforce individual accountability for collaborative efforts through performance management systems.

These best practices would support a collaborative working relationship among IRS, SEC, and CFTC. While we generally believe that the application of as many of these practices as possible increases the likelihood of effective collaboration, we also recognize that there is a wide range of situations and circumstances in which agencies work together. Following even a few of these practices may be sufficient for effective collaboration.

Conclusions

Although financial derivatives enable companies and others to manage risks, some taxpayers have used financial derivatives to take advantage of the current tax system, sometimes in ways that courts have later deemed improper or Congress has disallowed. The tax code establishes broad categories for financial instruments, such as debt, equity, forwards, and options, each with its own tax rules governing how and when gains and losses are taxed. However, as new financial derivative products and uses are developed, they can resemble multiple tax categories, and therefore IRS and taxpayers must choose among different tax treatments. In certain instances, this has allowed economically equivalent outcomes to be taxed inconsistently. Without changes to the current approach to taxing financial derivatives, the potential for abuse continues. Experts have suggested alternative approaches that they believe would provide more comprehensive and consistent treatment. However, each alternative would present tradeoffs to IRS and taxpayers, including tradeoffs to simplicity, administrability, and economic efficiency.
This report does not address or evaluate alternatives for taxing financial derivatives. Because of their unique role in defining policy and administering the tax code, Treasury and IRS are best positioned to study and recommend an alternative approach to the taxation of financial derivatives. Outside of any comprehensive changes to the current approach to the taxation of financial derivatives, one way that Treasury and IRS address potential abuses and provide clarity on tax issues is through their taxpayer guidance. The lack of finalized guidance has negative consequences for both IRS and taxpayers, including uncertainty that inhibits IRS staff during audits and litigation and leaves taxpayers unsure whether they have appropriately determined their tax liabilities. However, challenges that IRS and Treasury face in developing guidance for financial derivatives, including the risk of adverse economic effects of guidance changes and the complexity of financial derivative products, have resulted in some PGP projects taking longer than the 12-month period established in the plan. This uncertainty is heightened because taxpayers may not be aware when projects are going to take longer than the 12-month period, and IRS does not provide public updates to the PGP as changes occur to project status, priorities, and target dates. The growth in the complexity and use of financial derivatives presents another challenge for IRS. IRS sometimes identifies new financial derivative products or new uses of existing products long after they have been introduced into the market. Consequently, IRS is not always able to quickly identify and prevent potential abuse. One way to identify new products or new uses of products in a timelier manner could be through increased information sharing with SEC and CFTC, given their oversight role over financial derivative markets and products.
Our prior work suggests that there may also be opportunities for bank regulators to share any knowledge of derivatives that they gain. This would be consistent with IRS’s goal of strengthening partnerships across government to ensure taxpayers meet their obligations to pay taxes.

Recommendations for Executive Action

To better ensure that economically similar outcomes are taxed similarly and to minimize opportunities for abuse, the Secretary of the Treasury should undertake a study that compares the current approach to alternative approaches for the taxation of financial derivatives. To determine whether changes would be beneficial, such a study should weigh the tradeoffs to IRS and taxpayers that each alternative presents, including simplicity, administrability, and economic efficiency. To provide more useful and timely information to taxpayers on the status of financial derivative guidance projects, the Secretary of the Treasury and the Commissioner of Internal Revenue should consider additional, more frequently updated reporting to the public on ongoing projects listed in the PGP, including project status, changes in priorities, and target completion dates both within and beyond the 12-month PGP period. To more quickly identify new financial derivative products and emerging tax issues, IRS should work to improve information-sharing partnerships with SEC and CFTC to better ensure that IRS is fully using all available information to identify and address compliance issues and abuses related to the latest financial derivative product innovations. IRS should also consider exploring whether such partnerships with bank regulatory agencies would be beneficial.

Agency Comments and Our Evaluation

We provided a draft of this report to the Secretary of the Treasury and the Commissioner of Internal Revenue for review and comment.
Treasury disagreed with our first recommendation to undertake a study that compares the current approach to alternative approaches for the taxation of financial derivatives. Treasury cited a body of literature written by academics, practitioners, and others that considers the subject. Treasury also mentioned that Congress has resources available, such as the Joint Committee on Taxation, that could advise it about alternative approaches to the taxation of financial derivatives. Treasury said that its resources would be better spent drafting and issuing guidance on these subjects. Treasury also noted that it is available to assist the Ways and Means Committee and the Finance Committee in any undertaking concerning alternative approaches to the taxation of financial derivatives. In our report, we describe how the current approach to the taxation of financial derivatives results in inconsistent tax consequences for transactions that produce similar economic outcomes. We cite the existing body of literature and alternatives to the current approach proposed by some tax experts and practitioners that they believe would adopt a more consistent and comprehensive view of financial derivatives and potentially reduce abuse. However, no consensus on these issues has emerged from the existing literature or from the resources available to Congress. As the tax policy-setting body for the executive branch, the Treasury Department, in consultation with IRS, is uniquely suited to weigh the alternative approaches and, along with Congress, make judgments as to which is best for the economy, tax administration, and the proper application of sound tax principles. While Treasury states it would rather focus on guidance development to address tax compliance and emerging abusive tax schemes, the current piecemeal development of guidance as well as the difficulty of developing tax rules for new products has presented challenges and opportunities for abuse.
We believe that, as the locus of tax policy expertise in the executive branch, Treasury has a responsibility to make proposals to overcome the deficiencies of the current approach to taxing financial derivatives. Toward that end, we recommended that Treasury undertake a study that compares the current approach to alternative approaches to the taxation of financial derivatives. Regardless of whether Treasury decides it needs a study to make such proposals, achieving a more comprehensive approach is the desired end. IRS and Treasury disagreed with our second recommendation to provide more timely and useful information to taxpayers on the status of financial derivative guidance projects. IRS said that while it firmly supports transparency in the regulatory process, officials do not believe that the additional reporting recommended would be worth the additional resources such reporting would require. They believe that the annual updates provide an appropriate measure of the status of projects. Treasury also said that it would be difficult to provide precise predictions of when guidance would be issued and that attempting to pinpoint the timing of when guidance might be released would not necessarily be helpful. We agree that it is important for IRS and Treasury to balance the usefulness of additional reporting on the status of priority guidance projects with any additional administrative burden. However, Treasury and IRS also need to ensure that taxpayers have sufficient information to make informed decisions. In this report, we describe that one-fourth of financial derivative guidance projects on Treasury and IRS’s PGP were not completed between 1996 and 2010. A number of the guidance projects that were not completed were on the PGP for 3 or more years, and tax experts and practitioners that we spoke with said they were not aware of the status and prioritization of many of these guidance projects.
In recommending more frequent updates to the public on priority guidance projects, we recognize the difficulty in estimating how long the development of a particular piece of guidance may take. Our recommendation did not envision pinpointing the timing of when guidance may be released, but rather being timelier in officially revising estimates when the agencies know that announced time frames are no longer realistic. When it becomes likely that a project on the PGP will not be completed in the plan year because of delays or a change in priorities, the public should be alerted. Tax experts and practitioners we interviewed said that information about the status of projects was not publicly available, and that they often only knew about a project’s status through informal statements made by IRS and Treasury officials at conferences and other meetings. Such information on the status of guidance projects should be provided to all interested taxpayers as part of formal periodic updates to the PGP. Some of this information is already available in IRS’s internal guidance tracking database, and providing it would therefore likely add little additional administrative burden for the agencies. IRS agreed with our third recommendation to improve information-sharing partnerships with SEC and CFTC. IRS said that it recognizes the benefits of systematically gathering and sharing information that would identify new financial products and the potential for abusive tax avoidance transactions. IRS’s and Treasury’s letters commenting on our report are presented in appendixes III and IV. IRS also provided technical comments, which we incorporated as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report.
At that time, we will send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. Copies are also available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V.

Appendix I: Additional Methodology Details

Criteria to Evaluate How Well the Tax System Addresses Financial Derivatives

To evaluate the tax rules for financial derivatives, our criterion was consistency, meaning that economically similar transactions are taxed similarly. We identified this criterion through interviews with tax experts and a review of research articles on the taxation of financial derivatives. This was the most commonly mentioned criterion by these sources, and also the most applicable to our objectives. We evaluated the tax effects of financial derivatives based on testimonial evidence, academic studies, and our analysis of four financial derivative case studies. These case studies included cross-border total return equity swaps, variable prepaid forward contracts, credit default swaps, and contingent swaps.
Through interviews with Department of the Treasury (Treasury) and Internal Revenue Service (IRS) staff, former Treasury staff, and other tax experts, we identified how the transactions were structured, when IRS first recognized these transactions, and all guidance issued by Treasury and IRS on these issues. We also identified the challenges of issuing timely guidance and the consequences for IRS and taxpayers due to delayed or absent guidance. Based on these case studies, we applied the criterion of consistency to highlight how the structure of these transactions was not in line with the criterion.

Criteria and Methodology to Evaluate the Issuance of Published Guidance Related to Financial Derivatives

To evaluate IRS’s and Treasury’s ability to publish timely guidance on emerging financial derivative tax issues, we analyzed guidance projects from Treasury and IRS’s Priority Guidance Plan (PGP). We also examined in depth four case studies of specific financial derivative transactions that have had delayed guidance. According to Treasury and IRS, the PGP is used each year to identify and prioritize the tax issues that should be addressed through regulations, revenue rulings, revenue procedures, notices, and other published administrative guidance. The PGP focuses resources on guidance items that are most important to taxpayers and tax administration. To measure the timeliness of guidance on financial derivative tax issues, we used the criterion established by Treasury and IRS that guidance projects on the PGP are intended to be published within the 12-month period of the PGP year. We reviewed the projects included on the PGP from 1996 to 2010, the years for which IRS had electronic records available. We submitted a data request to IRS Chief Counsel for data from its Counsel Automated Systems Environment-Management Information System (CASE-MIS), which the agency uses to track the development of guidance projects.
We searched the database for projects, focusing primarily on the units within Chief Counsel that work closest with financial derivatives. We selected projects whose description mentioned either a type of derivative in particular (future, forward, swap/notional principal contract, or option), a section of the Internal Revenue Code that directly affects financial derivatives, or a use or abuse of financial instruments that typically involves derivatives (such as hedging or straddles). In reviewing the data from CASE-MIS, we encountered some data issues, such as the same guidance project appearing more than once in the same PGP year or guidance projects with start dates after their publication dates, among other issues. These issues led us to conclude that the CASE-MIS data were unreliable for an analysis of all guidance projects, which precluded a comparison of derivative projects to nonderivative projects. However, we did determine that the CASE-MIS data were sufficiently reliable to analyze the subset of projects dealing with financial derivatives alone. This is because the small number of derivative projects allowed us to address and resolve individually each of the data issues we encountered, something not feasible for the much larger dataset of all PGP guidance projects. After identifying the financial derivative guidance projects based on the criteria above, we submitted the list to IRS Chief Counsel for verification. They identified additional projects we had not found in our prior searches, some of which did not meet our criteria for selecting projects or fell outside our scope and were not included. In total, we identified 53 guidance projects in the PGP related to financial derivatives. To analyze the timeliness of the identified projects, we calculated completion rates for projects that were completed within the 1-year criterion, and rates for projects that were completed at any point.
These calculations only included projects that were on the plan before the current PGP year. To account for the fact that guidance projects can be censored (i.e., not yet completed within the time frame of the study or dropped from the PGP before they had a chance to be completed), we estimated completion rates over time using hazard rates. Hazard rates measure the rate at which projects are completed in a period, given that they were open at the start of that period, and therefore allow us to adequately account for censored projects. In the report, we refer to hazard rates as completion rates. The small sample size does not allow us to draw conclusions on the process for issuing guidance in IRS and Treasury more generally beyond financial derivatives or the time period under study. To further examine the IRS and Treasury guidance process and evaluate the challenges that IRS and Treasury face when issuing guidance on financial derivatives, we selected four financial derivative case studies that have been on the PGP and that were highlighted in interviews with Treasury, IRS, and tax practitioners as financial derivative transactions that presented tax abuse or tax compliance concerns. The case studies that met this criterion included contingent payment swaps, credit default swaps, variable prepaid forward contracts, and cross-border total return equity swaps. For each of the four case studies, we interviewed IRS and Treasury officials and other tax experts, and analyzed research on the taxation of derivatives, to examine the identification and progression of these transactions as guidance projects, the challenges IRS and Treasury face in issuing guidance on these transactions, and the consequences IRS and taxpayers face from a lack of guidance.
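The hazard-rate approach described above can be sketched in code. The following is a minimal illustration using hypothetical project data; the function name and the sample values are ours, for illustration only, and are not drawn from the actual CASE-MIS dataset of 53 projects.

```python
# Minimal sketch of a life-table hazard-rate calculation for guidance
# projects, assuming hypothetical data. Each project is recorded as
# (years_observed, completed): a project observed for `years_observed`
# PGP years that either was completed (True) or was censored (False),
# i.e., dropped from the PGP or still open at the end of the study.

def completion_rates(projects):
    """Return (year, hazard rate, cumulative completion rate) per year.

    The hazard rate for a year is the share of projects completed that
    year among projects still open at the start of the year; censored
    projects leave the risk set without counting as completions.
    """
    max_year = max(years for years, _ in projects)
    rates = []
    still_open = 1.0  # estimated share of projects still open
    for year in range(1, max_year + 1):
        at_risk = sum(1 for years, _ in projects if years >= year)
        completed = sum(1 for years, done in projects
                        if years == year and done)
        hazard = completed / at_risk if at_risk else 0.0
        still_open *= 1.0 - hazard
        rates.append((year, hazard, 1.0 - still_open))
    return rates

# Hypothetical sample: (years on the PGP, completed?)
sample = [(1, True), (1, True), (2, True),
          (2, False), (3, True), (3, False)]
for year, hazard, cumulative in completion_rates(sample):
    print(year, round(hazard, 3), round(cumulative, 3))
```

In this sample, two of six projects are completed in year 1 (hazard of one-third), and the cumulative completion rate reaches 0.75 by year 3 despite two censored projects that a naive completion count would mishandle.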
Appendix II: Financial Derivative Priority Guidance Projects

Description of guidance projects:

Information reporting requirements for securities futures contracts
Tax shelter using options to shift tax basis
Tax shelter using foreign currency straddle
Tax shelter using foreign currency straddle
Tax shelter using foreign currency options
Tax shelter using S corporations and warrants
Tax shelter using options to toggle grantor trust status
Exchange traded notes (prepaid forward contracts)

Note: This project was completed in 2007, when it was no longer on the Priority Guidance Plan (PGP). To be designated as completed for our analysis, a project must be completed while on the PGP.

Appendix III: Comments from the Internal Revenue Service

Appendix IV: Comments from the Department of the Treasury

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, the following staff made significant contributions to this report: Jay McTigue, Assistant Director; Kevin Averyt; Timothy Bober; Tara Carter; William Cordrey; Robin Ghertner; Colin Gray; George Guttman; Alex Katz; Natalie Maddox; Matthew McDonald; Edward Nannenhorn; Jose Oyola; Andrew Stephens; Jason Vassilicos; and James White.

Glossary

Bifurcation The process of dividing a financial instrument into its component parts.

Constructive Ownership Transaction Under Internal Revenue Code (IRC) section 1260, gains from constructive ownership transactions are taxed as ordinary income rather than capital gains to the extent that such gains exceed the net underlying long-term capital gains, and accompanying interest charges are imposed. Section 1260 applies to derivatives that simulate the return of certain assets, such as a hedge fund or another pass-through entity, by offering the holder substantially all of the risk of loss and opportunity for gain from the underlying asset.
Constructive Sale A transaction in which a taxpayer attempts to obtain economic gains from the sale of an appreciated position without legally transferring ownership and triggering taxable income. IRC section 1259 contains rules that affect the treatment of gains from constructive sales.

Contingent Swap A swap contract in which a payment is contingent or otherwise conditional on some event occurring during the period of the contract.

Conversion Transaction A transaction that generally consists of two or more positions taken with regard to the same or similar investments, where substantially all of the taxpayer’s return is attributable to the time value of the taxpayer’s net investment in the transaction. IRC section 1258 contains rules for the treatment of conversion transactions.

Credit Default Swap (CDS) A bilateral contract that is sold over-the-counter and transfers credit risk from one party to another. The seller, who is offering credit protection, agrees, in return for a periodic fee, to compensate the buyer, who is buying credit protection, if a specified credit event, such as default, occurs.

Fair Value See gross positive fair value.

Forward A privately negotiated contract between two parties in which the forward buyer agrees to purchase from the forward seller a fixed quantity of the underlying reference item at a fixed price on a fixed date.

Future A forward contract that is standardized and traded on an organized futures exchange.

Gross Positive Fair Value The sum total of the fair values of contracts owed to commercial banks. Represents the maximum losses banks could incur if all other parties in the transactions default, the banks hold no collateral from the other parties, and there is no netting of the contracts.

Hedging The process whereby an entity will attempt to balance or manage its risk of doing business or investing.
Mark-to-Market
For tax purposes, under mark-to-market rules, any contract held at the end of the tax year will generally be treated as sold at its fair market value on the last day of the tax year, and the taxpayer must recognize any gain or loss that results.

Mandatory Convertible
Security linked to equity that automatically converts to common stock on a prespecified date.

Notional Principal Contract (NPC)
According to section 1.446-3(c)(1)(i) of title 26, Code of Federal Regulations, a financial instrument that provides for the payment of amounts by one party to another at specified intervals calculated by reference to a specified index upon a notional principal amount, in exchange for specified consideration or a promise to pay similar amounts.

Notional Amount
Total notional amount represents the amount of the reference items underlying financial derivative transactions and is the amount upon which payments are computed between parties of financial derivatives contracts. Notional amount generally does not represent money exchanged, nor does it represent the risk exposure.

Option
A contract that gives the holder of the option the right, but not the obligation, to buy (call option) or sell (put option) a specified amount of the underlying reference item at a predetermined price (strike price) at or before the end of the contract.

Over-the-Counter Derivatives
Privately negotiated financial derivative contracts whose market value is determined by the value of the underlying asset, reference rate, or index.

Short Sale
This type of transaction occurs when a taxpayer borrows property (often a stock) and then sells the borrowed property to a third party. If the short seller can buy that property later at a lower price to satisfy his or her obligation under the borrowing, a profit results; if the price rises, however, a loss results. IRC section 1233 contains rules that can affect the treatment of gains and losses realized on short sales.
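The call and put definitions above translate directly into payoff formulas at exercise. The snippet below is an illustrative sketch; the premium paid for the option is ignored.

```python
def call_payoff(spot: float, strike: float) -> float:
    """Value of exercising a call: buy at the strike only if spot > strike."""
    return max(spot - strike, 0.0)

def put_payoff(spot: float, strike: float) -> float:
    """Value of exercising a put: sell at the strike only if spot < strike."""
    return max(strike - spot, 0.0)

# Illustrative: with a $50 strike, a call is worth exercising when the
# underlying trades at $55 but not at $45; the put is the mirror image.
print(call_payoff(55.0, 50.0), put_payoff(55.0, 50.0))  # 5.0 0.0
print(call_payoff(45.0, 50.0), put_payoff(45.0, 50.0))  # 0.0 5.0
```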
Straddle
Offsetting positions whose values move in opposite directions, so that a loss on one position is canceled out by the gain on an offsetting position. IRC section 1092 contains rules that can affect the treatment of straddles.

Total Return Equity Swap
A contract that provides one party in the transaction with the total economic performance from a specified reference equity or group of equities, while the other party in the transaction receives a specified fixed or floating cash flow that is not related to the reference equity. A cross-border total return equity swap is a contract between a domestic and a foreign party.

Variable Prepaid Forward Contract (VPFC)
Agreement between two parties to deliver a variable number of shares at maturity (typically 3 to 5 years) in exchange for an up-front cash payment, which generally represents 75 to 85 percent of the current fair market value of the stock. The VPFC usually has a cash settlement option in lieu of shares at maturity.

Wash Sale
A wash sale occurs when a taxpayer acquires a stock or security within 30 days of selling a substantially similar stock or security; under IRC section 1091, the taxpayer is generally not permitted to claim a loss on such a sale.
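The VPFC prepayment described above is a percentage of the stock's current value. The sketch below illustrates the arithmetic; the 80 percent advance rate is an assumed midpoint of the 75 to 85 percent range the glossary cites, and the function name and sample figures are illustrative.

```python
def vpfc_prepayment(current_share_price: float, shares_pledged: int,
                    advance_rate: float = 0.80) -> float:
    """Up-front cash received under a variable prepaid forward contract.
    advance_rate of 0.80 is an assumed midpoint of the 75-85 percent
    range; actual contracts vary."""
    return current_share_price * shares_pledged * advance_rate

# Illustrative: pledging 10,000 shares trading at $100 yields roughly
# $800,000 in cash today, with share (or cash) delivery due at maturity.
print(vpfc_prepayment(100.0, 10_000))  # ~800000.0
```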
Recently, concerns have arisen about the use of certain financial derivatives to avoid or evade tax obligations. As requested, this report (1) identifies and evaluates how financial derivatives can be used to avoid or evade tax liability or achieve differing tax results in economically similar situations, (2) evaluates Internal Revenue Service (IRS) actions to address the tax effects of investments in financial derivatives through guidance, and (3) evaluates IRS actions to identify financial derivative products and trends through information from other agencies. GAO reviewed research and IRS documents and interviewed IRS and Department of the Treasury (Treasury) officials, as well as other experts. GAO analyzed the completion of financial derivative projects on the agencies' Priority Guidance Plans (PGP) from 1996 to 2010. Taxpayers have used financial derivatives to lower their tax liability in ways that the courts have found improper or that Congress has disallowed. Taxpayers do this by using the ease with which derivatives can be redesigned to take advantage of the current patchwork of relevant tax rules. As new products are developed, IRS and taxpayers attempt to fit them into existing "cubbyholes" of relevant tax rules. This sometimes leads to inconsistent tax treatment for economically similar positions, which violates a basic tax policy criterion. While the tax rules for each cubbyhole represent Congress's and Treasury's explicit policy decisions, some of these decisions were made long before today's complex financial derivative products were created. Some experts have suggested alternatives to the current approach for taxing financial derivatives. IRS and Treasury, because of their unique position to define policy and administer the tax code, are best positioned to study and recommend a new approach.
When application of tax law is complex or uncertain, as is often the case for financial derivatives, guidance to taxpayers is an important tool for IRS to address tax effects and potential abuse. However, between 1996 and 2010, Treasury and IRS did not complete 14 out of 53 guidance projects related to financial derivatives that they designated as a priority on their annual PGP. While completing guidance is important in providing certainty to taxpayers and IRS and reducing the potential for abuse, challenges like the risk of adverse economic impacts of guidance changes and the transactional complexity of financial derivatives may delay the completion of guidance. Since challenges may prevent IRS from finalizing guidance within a 12-month PGP period, taxpayers need to be aware of ongoing guidance projects' status, some of which may span a number of years. IRS sometimes identifies new financial derivative products or new uses of them long after they have been introduced and gained considerable use. This slows its ability to address potential abuses. IRS's 2009-2013 Strategic Plan lists strengthening partnerships across government agencies to gather and share information as key to identifying and addressing new products and emerging tax schemes more quickly. Through their oversight roles for financial derivative markets, the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) may have information on financial derivatives that is relevant to IRS. Similarly, bank regulators may gain relevant knowledge of derivatives' use. IRS officials said such routine communications in the early 1990s did provide relevant information. Although IRS communicates with SEC and CFTC on derivatives, it does not do so systematically or regularly. Strengthening partnerships would increase opportunities for IRS to gain information on new financial derivative products and uses. 
Studies of interagency coordination suggest that agencies should look for opportunities to enhance collaboration in order to achieve results that would not be available if they were to work separately, and a number of best practices exist to help agencies meet this goal.
Background Each state, as well as the District of Columbia, imposes an excise tax on the sale of cigarettes, at rates that vary from state to state. As of January 1, 2002, the state excise tax rates for a pack of 20 cigarettes ranged from 2.5 cents in Virginia to $1.425 in Washington (see fig. 1). The liability for these taxes generally arises once the cigarettes enter the jurisdiction of the state. Many states have increased their cigarette excise taxes in recent years with the intention of increasing tax revenue and discouraging people from smoking. As a result, many smokers are seeking less costly alternatives for purchasing cigarettes, including buying cigarettes while traveling to a neighboring state with a lower cigarette excise tax. The Internet is an alternative that offers consumers the option and convenience of buying cigarettes from vendors in low-tax states without having to physically travel there. Consumers who use the Internet to buy cigarettes from vendors in other states are liable for their own state’s cigarette excise tax and, in some cases, sales and/or use taxes. States can learn of such purchases and the taxes due when vendors comply with the Jenkins Act. Under the act, cigarette vendors who sell and ship cigarettes into another state to anyone other than a licensed distributor must report (1) the name and address of the persons to whom cigarette shipments were made, (2) the brands of cigarettes shipped, and (3) the quantities of cigarettes shipped. Reports must be filed with a state’s tobacco tax administrator no later than the 10th day of each calendar month, covering each and every cigarette shipment made to the state during the previous calendar month. The sellers must also file a statement with the state’s tobacco tax administrator listing the seller’s name, trade name (if any), and address of all business locations.
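The rate spread described above is what makes cross-border and Internet purchases attractive. The sketch below works through the arithmetic using the report's January 2002 Virginia and Washington rates; the carton size and function name are illustrative assumptions.

```python
PACKS_PER_CARTON = 10  # assumed: a standard carton holds 10 packs of 20 cigarettes

def excise_tax_due(rate_per_pack: float, cartons: int) -> float:
    """State excise tax owed on a purchase of whole cartons."""
    return rate_per_pack * cartons * PACKS_PER_CARTON

# Per-pack rates as of January 1, 2002 (from the report).
virginia = excise_tax_due(0.025, 5)    # 5 cartons at Virginia's 2.5-cent rate
washington = excise_tax_due(1.425, 5)  # the same purchase at Washington's rate
print(round(washington - virginia, 2))  # 70.0 -- the incentive to buy from a low-tax state
```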
Failure to comply with the Jenkins Act’s reporting requirements is a misdemeanor offense, and violators are to be fined not more than $1,000, or imprisoned not more than 6 months, or both. Although the Jenkins Act, enacted in 1949, clearly predates and did not anticipate cigarette sales on the Internet, vendors’ compliance with the act could result in states collecting taxes due on such sales. According to DOJ, the Jenkins Act itself does not forbid Internet sales nor does it impose any taxes. Limited Federal Involvement with the Jenkins Act and Internet Cigarette Sales The federal government has had limited involvement with the Jenkins Act concerning Internet cigarette sales. We identified three federal investigations involving such potential violations, and none of these had resulted in prosecution (one investigation was still ongoing at the time of our work). No Internet cigarette vendors had been penalized for violating the act, nor had any penalties been sought for violators. FBI has Primary Investigative Jurisdiction The Attorney General of the United States is responsible for supervising the enforcement of federal criminal laws, including the investigation and prosecution of Jenkins Act violations. The FBI has primary jurisdiction to investigate suspected violations of the Jenkins Act. However, DOJ and FBI officials were unable to identify any investigations of Internet cigarette vendors or other actions taken to enforce the act’s provisions regarding Internet cigarette sales. According to DOJ, the FBI could not provide information on actions to investigate Jenkins Act violations, either by itself or in connection with other charges, because the FBI does not have a section or office with responsibility for investigating Jenkins Act violations and does not track such investigations. Also, DOJ said it does not maintain statistical information on resources used to investigate and prosecute Jenkins Act offenses. 
In describing factors affecting the level and extent of FBI and DOJ enforcement actions with respect to the Jenkins Act and Internet cigarette sales, DOJ noted that the act creates misdemeanor penalties for failures to report information to state authorities, and appropriate referrals for suspected violations must be considered with reference to existing enforcement priorities. In this regard, we recognized that the FBI’s priorities have changed. In June 2002 congressional testimony, the Comptroller General noted that the FBI is at the front line of defending the public and our way of life from a new and lethal threat, that of terrorism against Americans. The Comptroller General testified that the FBI Director recognized the need to refocus priorities to meet the demands of a changing world and is now taking steps to realign resources to achieve his objectives. In May 2002, the FBI Director unveiled the second phase of a FBI reorganization, with proposed changes designed to build on initial reorganization actions taken in December 2001. A key element of the reorganization is to “redirect FBI’s agent workforce to ensure that all available energies and resources are focused on the highest priority threat to the nation, i.e., terrorism.” In light of the events of September 11, 2001, this shift is clearly not unexpected and is, in fact, consistent with the FBI’s 1998 Strategic Plan and the current DOJ Strategic Plan. Since September 11, unprecedented levels of FBI resources have been devoted to counterterrorism and intelligence initiatives with widespread public approval. The Comptroller General testified that enhancement of FBI resources for counterterrorism and other planned actions seem to be rational steps to building agency capacity to fight terrorism. ATF has Ancillary Enforcement Authority ATF, which enforces federal excise tax and criminal laws and regulations related to tobacco products, has ancillary authority to enforce the Jenkins Act. 
ATF special agents investigate trafficking of contraband tobacco products in violation of federal law and sections of the Internal Revenue Code. For example, ATF enforces the Contraband Cigarette Trafficking Act (CCTA), which makes it unlawful for any person to ship, transport, receive, possess, sell, distribute, or purchase more than 60,000 cigarettes that bear no evidence of state cigarette tax payment in the state in which the cigarettes are found, if such state requires a stamp or other indicia to be placed on cigarette packages to demonstrate payment of taxes (18 U.S.C. 2342). ATF is also responsible for the collection of federal excise taxes on tobacco products and the qualification of applicants for permits to manufacture tobacco products, operate export warehouses, or import tobacco products. ATF inspections verify an applicant’s qualification information, check the security of the premises, and ensure tax compliance. To enforce the CCTA, ATF investigates cigarette smuggling across state borders to evade state cigarette taxes, a felony offense. Internet cigarette vendors that violate the CCTA, either directly or by aiding and abetting others, can also be charged with violating the Jenkins Act if they failed to comply with the act’s reporting requirements. ATF can refer Jenkins Act matters uncovered while investigating CCTA violations to DOJ or the appropriate U.S. Attorney’s Office for charges to be filed. ATF officials identified three investigations since 1997 of Internet vendors for cigarette smuggling in violation of the CCTA and violating the Jenkins Act. In 1997, a special agent in ATF’s Anchorage, Alaska, field office noticed an advertisement by a Native American tribe in Washington that sold cigarettes on the Internet.
ATF determined from the Alaska Department of Revenue that the vendor was not reporting cigarette sales as required by the Jenkins Act, and its investigation with another ATF office showed that the vendor was shipping cigarettes into Alaska. After ATF discussed potential cigarette smuggling and Jenkins Act violations with the U.S. Attorney’s Office for the District of Alaska, it was determined there was no violation of the CCTA. The U.S. Attorney’s Office did not want to pursue only a Jenkins Act violation, a misdemeanor offense, and asked ATF to determine whether there was evidence that other felony offenses had been committed. Subsequently, ATF formed a temporary task force with Postal Service inspectors and state of Alaska revenue agents, which demonstrated to the satisfaction of the U.S. Attorney’s Office that the Internet cigarette vendor had committed mail fraud. The U.S. Attorney’s Office agreed to prosecute the case and sought a grand jury indictment for mail fraud, but not for violating the Jenkins Act. The grand jury denied the indictment. In a letter dated September 1998, the U.S. Attorney’s Office requested that the vendor either cease selling cigarettes in Alaska and file the required Jenkins Act reports for previous sales, or come into compliance with the act by filing all past and future Jenkins Act reports. In another letter dated December 1998, the U.S. Attorney’s Office instructed the vendor to immediately comply with all requirements of the Jenkins Act. However, an official at the Alaska Department of Revenue told us that the vendor never complied. No further action has been taken. Another investigation, carried out in 1999, involved a Native American tribe selling cigarettes on the Internet directly to consumers and other tribes. The tribe was not paying state tobacco excise taxes or notifying states of cigarette sales to other than wholesalers, as required by the Jenkins Act. 
ATF referred the case to the state of Arizona, where it was resolved with no criminal charges filed by obtaining the tribe’s agreement to comply with Jenkins Act requirements. A third ATF investigation of an Internet vendor for cigarette smuggling and Jenkins Act violations was ongoing at the time of our work. On January 31, 2002, the Commissioner of the Connecticut Department of Revenue Services sent a letter to the Director of ATF requesting assistance in addressing the growing problem of Internet and mail order cigarette sales without Jenkins Act compliance. The ATF Director responded to the Commissioner by letter dated April 5, 2002. The ATF Director expressed concern about growing Internet cigarette sales and the impact on collection of state cigarette excise taxes. The Director highlighted three initiatives ATF is planning to help address this problem. ATF will solicit the cooperation of tobacco manufacturers and determine who is selling cigarettes to Internet and mail order companies. ATF believes the tobacco manufacturers will render support and place their distributors on notice that some of their customers’ business practices may be defrauding states of tax revenues. The Director said ATF will remind the tobacco manufacturers of Jenkins Act requirements and that sales involving Native Americans are not exempt. ATF will contact shippers/couriers to determine if they have any prohibitions against the shipment of cigarettes. ATF will also inform them of the likelihood that some of their customers are selling cigarettes on the Internet and violating the Jenkins Act, as well as potentially committing mail fraud, wire fraud, and money laundering offenses. ATF will request that the common carriers be more vigilant and conscientious regarding their customers and the laws they could be violating. According to the Director, ATF will provide technical assistance to the state of Connecticut or members of the U.S. 
Congress working with Connecticut on a legislative response to address the issue of tobacco sales on the Internet. ATF officials said that because ATF does not have primary Jenkins Act jurisdiction, it has not committed resources to investigating violations of the act. However, the officials said strong consideration should be given to transferring primary jurisdiction for investigating Jenkins Act violations from the FBI to ATF. According to ATF, it is responsible for, and has committed resources to, regulating the distribution of tobacco products and investigating trafficking in contraband tobacco products. A change in Jenkins Act jurisdiction would give ATF comprehensive authority at the federal level to assist states in preventing the interstate distribution of cigarettes resulting in lost state cigarette taxes since ATF already has investigative authority over the CCTA, according to the officials. The officials also told us ATF has special agents and inspectors that obtain specialized training in enforcing tax and criminal laws related to tobacco products, and, with primary jurisdiction, ATF would have the investigative authority and would use resources to specifically conduct investigations to enforce the Jenkins Act, which should result in greater enforcement of the act than in the past. States Have Taken Action to Promote Jenkins Act Compliance by Internet Cigarette Vendors, but Results Were Limited Officials in nine states that provided us information all expressed concern about Internet cigarette vendors’ noncompliance with the Jenkins Act and the resulting loss of state tax revenues. For example, California officials estimated that the state lost approximately $13 million in tax revenue from May 1999 through September 2001, due to Internet cigarette vendors’ noncompliance with the Jenkins Act. Overall, the states’ efforts to promote compliance with the act by Internet vendors produced few results. 
Officials in the nine states said that they lack the legal authority to successfully address this problem on their own. They believe greater federal action is needed, particularly because of their concern that Internet cigarette sales will continue to increase with a growing and substantial negative effect on tax revenues. States’ Efforts Produced Limited Results Starting in 1997, seven of the nine states had made some effort to promote Jenkins Act compliance by Internet cigarette vendors. These efforts involved contacting Internet vendors and U.S. Attorneys’ Offices. Two states had not made any such efforts. Six of the seven states tried to promote Jenkins Act compliance by identifying and notifying Internet cigarette vendors that they are required to report the sale of cigarettes shipped into those states. Generally, officials in the six states learned of Internet vendors by searching the Internet, noticing or being told of vendors’ advertisements, and by state residents or others notifying them. Five states sent letters to the identified vendors concerning their Jenkins Act reporting responsibilities, and one state made telephone calls to the vendors. After contacting the Internet vendors, the states generally received reports of cigarette sales from a small portion of the vendors notified. The states then contacted the state residents identified in the reports, and they collected taxes from most of the residents contacted. When residents did not respond and pay the taxes due, the states carried out various follow-up efforts, including sending additional notices and bills, assessing penalties and interest, and deducting amounts due from income tax refunds. Generally, the efforts by the six states to promote Jenkins Act compliance were carried out periodically and required few resources. 
For example, a Massachusetts official said the state notified Internet cigarette vendors on five occasions starting in July 2000, with one employee working a total of about 3 months on the various activities involved in the effort. Table 1 summarizes the six states’ efforts to identify and notify Internet cigarette vendors about the Jenkins Act reporting requirements and shows the results that were achieved. There was little response by the Internet vendors notified. Some of the officials told us that they encountered Internet vendors that refused to comply and report cigarette sales after being contacted. For example, several officials noted that Native Americans often refused to report cigarette sales, with some Native American vendors citing their sovereign nation status as exempting them from the Jenkins Act, and others refusing to accept a state’s certified notification letters. Also, an attorney for one vendor informed the state of Washington that the vendor would not report sales because the Internet Tax Freedom Act relieved the vendor of Jenkins Act reporting requirements. Apart from the states’ efforts to identify and notify Internet cigarette vendors, state officials noted that some Internet vendors voluntarily complied with the Jenkins Act and reported cigarette sales on their own. The states subsequently contacted the residents identified in the reports to collect taxes. For example, a Rhode Island official told us there were three or four Internet vendors that voluntarily reported cigarette sales to the state. Based on these reports, Rhode Island notified about 400 residents they must pay state taxes on their cigarette purchases and billed these residents over $76,000 (the Rhode Island official that provided this information did not know the total amount collected). Similarly, Massachusetts billed 21 residents for cigarette taxes and collected $2,150 based on reports of cigarette sales voluntarily sent to the state. 
Three of the seven states that made an effort to promote Jenkins Act compliance by Internet cigarette vendors contacted U.S. Attorneys and requested assistance. The U.S. Attorneys, however, did not provide the assistance requested. The states’ requests and responses by the U.S. Attorneys’ Offices are summarized below. In March 2000, Iowa and Wisconsin officials wrote letters to three U.S. Attorneys in their states requesting assistance. The state officials asked the U.S. Attorneys to send letters to Internet vendors the states had identified, informing the vendors of the Jenkins Act and directing them to comply by reporting cigarette sales to the states. The state officials provided a draft letter and offered to handle all aspects of the mailings. The officials noted they were asking the U.S. Attorneys to send the letters over their signatures because the Jenkins Act is a federal law and a statement from a U.S. Attorney would have more impact than from a state official. However, the U.S. Attorneys did not provide the assistance requested. According to Iowa and Wisconsin officials, two U.S. Attorneys’ Offices said they were not interested in helping, and one did not respond to the state’s request. After contacting the FBI regarding an Internet vendor that refused to report cigarette sales, saying that the Internet Tax Freedom Act relieved the vendor of Jenkins Act reporting requirements, the state of Washington acted on the FBI’s recommendation and wrote a letter in April 2001 requesting that the U.S. Attorney initiate an investigation. According to a Washington official, the U.S. Attorney’s Office did not pursue this matter and noted that a civil remedy (i.e., lawsuit) should be sought by the state before seeking a criminal action. At the time of our work, the state was planning to seek a civil remedy. In July 2001, the state of Wisconsin wrote a letter referring a potential Jenkins Act violation to the U.S. Attorney for prosecution. 
According to a Wisconsin official, this case had strong evidence of Jenkins Act noncompliance—there were controlled and supervised purchases made on the Internet of a small number of cartons of cigarettes, and the vendor had not reported the sales to Wisconsin. The U.S. Attorney’s Office declined to initiate an investigation, saying that it appeared this issue would be best handled by the state “administratively.” The Wisconsin official told us, however, that Wisconsin does not have administrative remedies for Jenkins Act violations, and, in any case, the state cannot reach out across state lines to deal with a vendor in another state. States Concerned about Internet Vendors’ Noncompliance and Believe Greater Federal Action is Needed Officials in each of the nine states expressed concern about the impact that Internet cigarette vendors’ noncompliance with the Jenkins Act has on state tax revenues. The officials said that Internet cigarette sales will continue to grow in the future and are concerned that a much greater and more substantial impact on tax revenues will result. One state, California, estimated that its lost tax revenue due to noncompliance with the Jenkins Act by Internet cigarette vendors was approximately $13 million from May 1999 through September 2001. Officials in all nine states said that they are limited in what they can accomplish on their own to address this situation and successfully promote Jenkins Act compliance by Internet cigarette vendors. All of the officials pointed out that their states lack the legal authority necessary to enforce the act and penalize the vendors who violate it, particularly with the vendors residing in other states. Officials in three states told us that efforts to promote Jenkins Act compliance are not worthwhile because of such limitations, or are not a priority because of limited resources. 
Officials in all nine states said that they believe greater federal action is needed to enforce the Jenkins Act and promote compliance by Internet cigarette vendors. Four state officials also said they believe ATF should have primary jurisdiction to enforce the act. One official pointed out that his organization sometimes dealt with ATF on tobacco matters, but has never interacted with the FBI. Officials in the other five states did not express an opinion regarding which federal agency should have primary jurisdiction to enforce the act. Most Internet Cigarette Vendors Do Not Comply with the Jenkins Act, Notify Consumers of Their Responsibilities, or Provide Information on Sales Volume Through our Internet search efforts (see app. I), we identified 147 Web site addresses for Internet cigarette vendors based in the United States and reviewed each Web site linked to these addresses. Our review of the Web sites found no information suggesting that the vendors comply with the Jenkins Act. Some vendors cited reasons for not complying that we could not substantiate. A few Web sites specifically mentioned the vendors’ Jenkins Act reporting responsibilities, but these Web sites also indicated that the vendors do not comply with the act. Some Web sites provided notice to consumers of their potential state tax liability for Internet cigarette purchases. We also found that information on vendor cigarette sales volume is very limited, and few of the Web sites we reviewed posted maximum limits for online cigarette orders. Majority of Web sites Indicate that Vendors do Not Comply with the Jenkins Act None of the 147 Web sites we reviewed stated that the vendor complies with the Jenkins Act and reports cigarette sales to state tobacco tax administrators. Conversely, as shown in table 2, information posted on 114 (78 percent) of the Web sites indicated the vendors’ noncompliance with the act through a variety of statements posted on the sites. 
Thirty-three Web sites (22 percent) provided no indication about whether or not the vendors comply with the act. Reasons Cited for Noncompliance with the Jenkins Act Some Internet vendors cited specific reasons on their Web sites for not reporting cigarette sales to state tax authorities as required by the Jenkins Act. Seven of the Web sites reviewed (5 percent) posted statements asserting that customer information is protected from release to anyone, including state authorities, under privacy laws. Seventeen Web sites (12 percent) state that they are not required to report information to state tax authorities and/or are not subject to the Jenkins Act reporting requirements. Fifteen of these 17 sites are Native American, with 7 of the sites specifically indicating that they are exempt from reporting to states either because they are Native American businesses or because of their sovereign nation status. In addition, 35 Native American Web sites (40 percent of all the Native American sites we reviewed) indicate that their tobacco products are available tax-free because they are Native American businesses. To supplement our review of the Web sites, we also attempted to contact representatives of 30 Internet cigarette vendors, and we successfully interviewed representatives of 5. One of the 5 representatives said that the vendor recently started to file Jenkins Act sales reports with one state. However, the other 4 said that they do not comply with the act and provided us with additional arguments for noncompliance. Their arguments included an opinion that the act was not directed at personal use. An additional argument was that the Internet Tax Freedom Act supersedes the obligations laid out in the Jenkins Act. Our review of the applicable statutes indicates that neither the Internet Tax Freedom Act nor any privacy laws exempt Internet cigarette vendors from Jenkins Act compliance.
The Jenkins Act has not been amended since minor additions and clarifications were made to its provisions in 1953 and 1955, and neither the Internet Tax Freedom Act nor any privacy laws amended the Jenkins Act’s provisions to expressly exempt Internet cigarette vendors from compliance. With regard to the Internet Tax Freedom Act, the temporary ban that the act imposed on certain types of taxes on e-commerce did not include the collection of existing taxes, such as state excise, sales, and use taxes. Additionally, nothing in the Jenkins Act or its legislative history implies that cigarette sales for personal use, or Native American cigarette sales, are exempt. In examining a statute, such as the Jenkins Act, that is silent on its applicability to Native American Indian tribes, courts have consistently applied a three-part analysis. Under this analysis, if the act uses general terms that are broad enough to include tribes, the statute will ordinarily apply unless (1) the law touches “exclusive rights of self-governance in purely intramural matters;” (2) the application of the law to the tribe would abrogate rights guaranteed by Indian treaties; or (3) there is proof by legislative history or some other means that Congress intended the law not to apply to Indians on their reservations. Our review did not locate any case law applying this analysis to the Jenkins Act. DOJ said that it also could not locate any such case law, and DOJ generally concluded that an Indian tribe may be subject to the act’s requirements. DOJ noted, however, that considering the lack of case law on this issue, this conclusion is somewhat speculative. ATF has said that sales or shipments of cigarettes from Native American reservations are not exempt from the requirements of the Jenkins Act.
Few Web sites Provide Notice of the Vendors’ Reporting Responsibilities, but Some Provide Notice of Customer Cigarette Tax Liability

Only 8 (5 percent) of the 147 Web sites we reviewed notified customers that the Jenkins Act requires the vendor to report cigarette sales to state tax authorities, which could result in potential customer tax liability. However, in each of these cases, the Web sites that provided notices of Jenkins Act responsibilities also followed the notice with a statement challenging the applicability of the act and indicating that the vendor does not comply. Twenty-eight Web sites (19 percent) either provided notice of potential customer tax liability for Internet cigarette purchases or recommended that customers contact their state tax authorities to determine if they are liable for taxes on such purchases. Three other sites (2 percent) notified customers that they are responsible for complying with cigarette laws in their state, but did not specifically mention taxes. Of the 147 Web sites we reviewed, 108 (73 percent) did not provide notice of either the vendors’ Jenkins Act reporting responsibilities or the customers’ responsibilities, including potential tax liability, with regard to their states.

Minimal Information Available on Vendor Cigarette Sales Volume; Some Vendors Post Maximum Limits on Orders on Their Web sites

We attempted to collect average monthly sales volume data through our interviews with representatives of Internet cigarette vendors. Two of the five vendor representatives we interviewed provided us with information on average monthly sales volume. One said that he sells approximately 500 cartons a month. The other (who operates two Web sites) referred us to information in his federal Securities and Exchange Commission (SEC) filings. We reviewed a company filing from February 2001 and found that it did not contain data on monthly volume by carton.
The information did, however, indicate that the company’s revenues from cigarette sales from both Web sites averaged just over $196,000 a month in 2000. The remaining three vendor representatives we interviewed declined to answer specific questions on sales volume. Several of the representatives we spoke with said that the majority of vendors process a low number of cartons each month and that only a small number of companies sell any significant volume. Twenty-four (16 percent) of the Web sites we reviewed posted a maximum limit on the number of cigarette cartons that can be ordered through the sites. These limits ranged from a maximum of two cartons per person per order to a maximum of 300 cartons per order. Two of the 24 Web sites specified that the limits were per day and not per order (i.e., maximum purchases of 49 and 149 cartons per day). Three of the vendor representatives we interviewed, including one that does not post a maximum limit on orders, said that they monitor the size of orders and flag any order over a certain amount for manual review and processing. Three vendor representatives said that the reason they have maximum limits and/or monitoring procedures in place is to ensure that their cigarettes are sold for personal use only and not for resale. One representative told us that he believes the CCTA limits the amount of cigarettes he can sell to 300 cartons per day.

Conclusions

States are hampered in attempting to promote Jenkins Act compliance because they lack authority to enforce the act. In addition, violation of the act is a misdemeanor, and U.S. Attorneys’ reluctance to pursue misdemeanor violations could be contributing to limited enforcement. Transferring primary investigative jurisdiction from the FBI to ATF would give ATF comprehensive authority at the federal level to enforce the Jenkins Act and should result in more enforcement.
ATF’s ability to couple Jenkins Act and CCTA enforcement may increase the likelihood it will detect and investigate violators and that U.S. Attorneys will prosecute them. This could lead to improved reporting of interstate cigarette sales, thereby helping to prevent the loss of state cigarette tax revenues. Transferring primary investigative jurisdiction is also appropriate at this time because of the FBI’s new challenges and priorities related to the threat of terrorism and the FBI’s increased counterterrorism efforts.

Matters for Congressional Consideration

To improve the federal government’s efforts in enforcing the Jenkins Act and promoting compliance with the act by Internet cigarette vendors, which may lead to increased state tax revenues from cigarette sales, the Congress should consider providing ATF with primary jurisdiction to investigate violations of the Jenkins Act (15 U.S.C. §375-378).

Agency Comments

DOJ and ATF provided written comments on a draft of this report. The agencies’ comments are shown in appendixes III and IV, respectively. Both DOJ and ATF suggested that if violations of the Jenkins Act were felonies instead of misdemeanors, U.S. Attorneys’ Offices might be less reluctant to prosecute violations. ATF further noted that individuals might be deterred from committing Jenkins Act violations if they were felonies. ATF also suggested that other legislative changes might assist states in the collection of excise taxes on cigarettes sold over the Internet: (1) amend the Jenkins Act to give states the authority to seek injunctions in federal court to prevent businesses violating the act from shipping cigarettes to their residents, similar to a recent amendment to the Webb-Kenyon Act, 27 U.S.C. 122, giving states this authority for alcohol shipments; (2) amend 18 U.S.C. 1716(f) to prohibit the mailing of cigarettes and other tobacco products through the U.S.
Postal Service as this law now does for alcoholic beverage products; and (3) enact federal law establishing requirements for the delivery of cigarettes by common carriers such as Federal Express and UPS (e.g., notify states of shipments, require proof of age before delivery) modeled after 18 U.S.C. Chapter 59 (Sections 1261 et seq.), which restricts how common carriers may ship alcohol. Although we are not in a position to offer our judgment on whether violations of the Jenkins Act should be misdemeanors or felonies, or whether states would benefit from the legislative changes suggested by ATF, we believe this report provides information to help Congress make those decisions. DOJ also provided technical comments on the draft report, which we have incorporated into the report. We are sending copies of this report to the Chairman, House Committee on the Judiciary; the Attorney General; the Secretary of the Treasury; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-8777 or Darryl W. Dutton at (213) 830-1000. Other key contributors to this report were Ronald G. Viereck, Sarah M. Prehoda, Shirley A. Jones, and Evan B. Gilman.

Appendix I: Scope and Methodology

To determine actions taken by the Department of Justice (DOJ) and the Bureau of Alcohol, Tobacco and Firearms (ATF) to enforce the Jenkins Act with regard to Internet cigarette sales and factors that may have affected the level and extent of such actions, we provided written questions to DOJ and ATF headquarters requesting the needed information. We interviewed ATF officials and obtained documentation to clarify responses to some of our written questions and acquire additional information.
To determine efforts taken by selected states to promote compliance with the Jenkins Act by Internet cigarette vendors, we contacted tobacco tax authorities in 11 states (Alaska, California, Hawaii, Iowa, Maine, Massachusetts, New Jersey, New York, Rhode Island, Washington, and Wisconsin) to obtain information. We selected the 10 states with the highest cigarette excise tax rates on January 1, 2002, based on the presumption these states would be among those most interested in promoting Jenkins Act compliance to collect cigarette taxes, and we selected one additional state (Iowa) that appeared, based on our Internet research and information from state officials we interviewed while planning our work, to have taken action to promote Jenkins Act compliance by Internet cigarette vendors. Using an ATF circular listing state tobacco tax contacts’ telephone numbers for questions regarding state cigarette taxes and reporting requirements, we contacted officials at the Tax Division, Alaska Department of Revenue; Excise Taxes Division, California State Board of Equalization; Department of Taxation, State of Hawaii; Compliance Division, Iowa Department of Revenue and Finance; Sales and Special Tax Division, Maine Revenue Services; Excise Tax Unit (within the Processing Division) and Legal Division, Massachusetts Department of Revenue; Office of Criminal Investigation, New Jersey Division of Taxation; Transaction and Transfer Tax Bureau, New York State Department of Taxation and Finance; Excise Tax Section, Rhode Island Division of Taxation; Special Programs Division and Legislation and Policy Division, Washington Department of Revenue; and Alcohol and Tobacco Enforcement Section, Income, Sales and Excise Tax Division, Wisconsin Department of Revenue. 
After contacting these state agencies, we collected information from 9 of the 11 states (New Jersey and New York did not provide the information we requested in time for it to be included in the report) by interviewing officials and obtaining documentation. We collected data on the states’ efforts to contact Internet cigarette vendors, including how they identified vendors and notified them of their Jenkins Act responsibilities, and the results of these efforts in terms of the level of response by vendors and the resulting collection of cigarette excise taxes from consumers. We collected information on contacts the states had with DOJ and ATF in carrying out efforts to promote Jenkins Act compliance by Internet cigarette vendors and reporting potential vendor noncompliance. We asked the states to identify impediments to their efforts to promote compliance with the act by Internet cigarette vendors. We also asked the states whether greater federal action is needed to promote greater compliance by Internet cigarette vendors. In addition, we asked for any estimates made by these states of the impact on state tax revenues of noncompliance with the Jenkins Act by Internet cigarette vendors. We did not independently verify the accuracy and reliability of the data provided to us by officials in the 9 states. We also collected information regarding states from two other sources. From the Federation of Tax Administrators (FTA) Internet Web site, we obtained each state’s cigarette excise tax rate that was in effect on January 1, 2002. FTA is a national organization with a mission to improve the quality of state tax administration by providing services to state tax authorities and administrators. The principal tax collection agencies of the 50 states, the District of Columbia, and New York City are the members of FTA. 
We also contacted Forrester Research, Inc., a private research firm, and obtained a copy of a research brief discussing Internet tobacco sales (“Online Tobacco Sales Grow, States Lose;” April 27, 2001). This brief forecasts Internet tobacco sales in the United States for each year from 2001 through 2005 and estimates the total lost state tax revenue from such sales for each of those years. To determine readily identifiable Internet cigarette vendors, including their Web site addresses and other contact information, we developed a list of Web site addresses by conducting searches using two major Internet search engines (Brint and Google). To conduct the searches, we used the key words “discount cigarettes,” “cheap cigarettes,” and “online cigarette sales” as if we were consumers. We used the results of the two searches to compile a universe of 229 Web site addresses for Internet cigarette vendors. We reviewed each of the 229 Web sites using a data collection instrument (DCI) we developed, and we collected contact information such as vendor or company names, addresses, and telephone numbers. Upon completing this review, we eliminated 82 Web sites from our universe: 35 Web sites that either did not sell cigarettes or would not open and 47 Web sites that were either located outside of the United States or represented companies, warehouses, or ordering desks located outside the United States. The remaining 147 Web site addresses make up our universe of readily identifiable Internet cigarette vendors. This universe does not necessarily represent all Internet cigarette vendors operating in the United States. Other researchers, state officials, and industry representatives have used various methodologies and inclusion criteria to identify Internet cigarette vendors and have produced estimates ranging from 88 to about 400 vendors.
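The winnowing of the initial search results into the final universe is simple arithmetic; the sketch below restates the counts reported above (the counts are from this report, but the variable names are our own):

```python
# Sketch of the Web-site universe reduction described in the methodology.
# Counts come from the report; variable names are illustrative.
initial_sites = 229       # addresses compiled from the two search-engine queries
not_selling_or_dead = 35  # did not sell cigarettes or would not open
outside_us = 47           # located, or operated from, outside the United States

eliminated = not_selling_or_dead + outside_us
universe = initial_sites - eliminated
print(eliminated)  # 82 sites eliminated
print(universe)    # 147 readily identifiable U.S. Internet cigarette vendors
```

The same universe of 147 sites is the denominator for the percentages quoted throughout the report (e.g., 114 of 147 is about 78 percent).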
To determine whether the 147 readily identifiable Internet cigarette vendor Web sites (1) indicate that the vendors comply with the Jenkins Act; (2) accurately notify potential customers of the vendors’ reporting responsibilities under the Jenkins Act and the customers’ potential tax liability; and (3) place a maximum limit on cigarette orders, we reviewed each of the 147 Web sites using our DCI. We reviewed all Web site statements and notices regarding matters such as vendor policies, practices, privacy concerns, government requirements, vendor responsibilities, vendor compliance with the act, customer responsibilities, potential customer tax liability, as well as any limits on cigarette orders. In doing so, we examined all the pages on each of the Web sites, including the ordering screens, and proceeded as far as possible in the ordering process without inputting any requested personal information. We analyzed the DCIs to derive descriptive statistics regarding the Web sites’ statements and notices, and we summarized reasons cited on the Web sites for vendors not complying with the Jenkins Act. To determine (1) whether readily identifiable Internet cigarette vendors can provide evidence of compliance with the Jenkins Act, (2) the average monthly volume of Internet cigarette sales reported by vendors, and (3) whether vendors place a maximum limit on orders to prevent large-scale tax evasion by purchasers who plan to resell cigarettes, we attempted to conduct structured interviews on the telephone with representatives of 30 of the 147 Internet cigarette vendors. We judgmentally selected 13 of these vendors based on, and to ensure diversity among, geographic location and whether or not the vendors were owned by Native Americans or located on Native American lands.
We used information from our DCIs to randomly select another 17 vendors from three categories: (1) those with Web sites silent on whether or not they comply with the Jenkins Act, (2) those who placed maximum limits on cigarette orders on their Web sites, and (3) all remaining Web sites. Table 3 provides the results of our attempts to interview representatives of the 30 vendors on the telephone. We conducted our work between December 2001 and May 2002 in accordance with generally accepted government auditing standards.

Appendix II: List of GAO-Identified Internet Cigarette Vendors’ Web site Addresses and Other Contact Information

Appendix III: Comments from the Department of Justice

Appendix IV: Comments from the Bureau of Alcohol, Tobacco and Firearms
State and federal officials are concerned that as Internet cigarette sales continue to grow and as states' cigarette taxes increase, so will the amount of lost state tax revenue due to noncompliance with the Jenkins Act. The act requires any person who sells and ships cigarettes across a state line to a buyer, other than a licensed distributor, to report the sale to the buyer's state tobacco tax administrator. The Department of Justice (DOJ) is responsible for enforcing the Jenkins Act, and the Federal Bureau of Investigation (FBI) is the primary investigative authority. However, GAO found that DOJ and FBI headquarters officials did not identify any actions taken to enforce the Jenkins Act with respect to Internet cigarette sales. Since 1997, the Bureau of Alcohol, Tobacco and Firearms (ATF) has begun three investigations of Internet cigarette vendors for cigarette smuggling that included the investigation of potential Jenkins Act violations. Overall, seven of nine selected states have made some effort to promote Jenkins Act compliance by Internet cigarette vendors by contacting Internet vendors and U.S. Attorneys' Offices, but they produced few results. GAO's Internet search efforts identified 147 website addresses for Internet cigarette vendors based in the United States. None of the websites posted information indicating the vendors' compliance with the Jenkins Act. Conversely, information posted on 78 percent of the websites indicated the vendors do not comply with the act.
Background

Interior Has Diverse Missions and IT Investments

The Department of the Interior, created by Congress in 1849, is a multitiered organization that currently employs approximately 70,000 people in about 2,400 locations throughout the United States. The Secretary of the Interior heads the agency, which comprises approximately 30 offices and committees and eight bureaus. Five Assistant Secretaries support the Secretary of the Interior at the department level. One of these is responsible for Policy, Management and Budget. The others are responsible for mission-related matters including Land and Minerals Management, Indian Affairs, Fish and Wildlife and Parks, and Water and Science. At the next level of the organization, eight bureaus, aligned with these Assistant Secretaries, are responsible for achieving Interior’s diverse missions. Interior’s missions include managing approximately 500 million acres of land—about one-fifth of the total U.S. land mass—and about 1.8 billion acres of the Outer Continental Shelf; fulfilling the government’s trust responsibility to American Indians and Alaska Natives; conserving and protecting fish and wildlife; offering recreational opportunities; managing the National Park System; providing stewardship of energy and mineral resources; fostering the sound use of land and water resources; helping with the management of the National Fire Plan; ensuring the reclamation and restoration of surface mining sites; and providing scientific information on resource, natural hazard, and earth science issues. Figure 1 shows how Interior is organized. Information technology (IT) investments play a vital role in Interior’s ability to fulfill its missions. Given the diversity of these missions and operating environments, the character of these investments also varies substantially. For example, the department uses a land mobile radio infrastructure to support geographically dispersed public safety and protection missions.
These missions include law enforcement on federal and tribal lands, urban and wildland firefighting, seismic monitoring, wildlife tracking, management of national parks, and water reclamation activities. In contrast, Interior’s Minerals Management Service owns systems that track oil and gas production on public lands and maintains records on royalties that are due to the federal government and to American Indian tribes. Interior’s bureaus and associated program offices propose, fund, and manage these kinds of investments, while certain departmental offices—such as the Offices of Financial Management and Personnel Policy—propose and manage other systems that support administrative functions. Interior’s National Business Center is responsible for managing and operating departmental information systems on a fee-for-service basis and for providing other kinds of administrative support, such as facilities management. In fiscal year 2003, Interior invested over $850 million in IT—about 6 percent of its total budget. While the Secretary of the Interior has the ultimate responsibility for managing these investments—including overseeing and guiding the development, management, and use of information resources and information technology throughout the department—Interior’s CIO is responsible for providing leadership and oversight for IT investment management processes throughout the agency. To that end, Interior’s CIO serves as the chair of the department’s IT Management Council, which oversees “major” investments in IT. About 2,255 of Interior’s staff of about 70,000 are classified as IT professionals. Thirty-four staff provided direct support to the CIO in the department’s Office of the CIO during fiscal year 2003. Appendix I provides additional information about each bureau’s missions, functions, staffing, and total expenditures on IT for fiscal year 2003.
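The budget and staffing figures above imply some rough proportions, made explicit in the sketch below. This is illustrative arithmetic only, using the rounded figures quoted in this report, not an official calculation:

```python
# Rough proportions implied by the figures above (values are rounded,
# so the results are approximations, not official GAO numbers).
it_spend = 850e6   # Interior's fiscal year 2003 IT investment, in dollars
it_share = 0.06    # IT's approximate share of the total budget

implied_total_budget = it_spend / it_share
print(round(implied_total_budget / 1e9, 1))  # ~14.2 (billion dollars)

it_staff, total_staff = 2255, 70000
print(round(100 * it_staff / total_staff, 1))  # IT professionals: ~3.2 percent
```

A total budget on the order of $14 billion is consistent with "over $850 million" being "about 6 percent" of the total.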
Reviews Identified Need for Improving IT Investment Management

Prior reviews of IT projects performed at Interior over the past decade—by GAO and the Office of Management and Budget (OMB) as well as Interior’s Office of the Inspector General (OIG)—have revealed significant weaknesses in IT investment management practices at both the department and the bureau levels. Over the last several years, we have issued a series of reports on Interior’s major IT investments and associated management practices. In April and July of 1999, we reported that Interior had not followed sound management practices in the early stages of its effort to acquire the Trust Asset and Accounting Management System, a system designed to manage Indian assets and land records. We also reported that, as a result of poor planning, Interior could not ensure that the system would meet financial management needs cost effectively or mitigate system development risks adequately. In September 2000, we reported that Interior still needed to address significant remaining risks. Among other things, we recommended that Interior take steps to strengthen its software development and acquisition processes and that it regularly assess the progress being made in implementing this system. Between 1995 and 2001, we reported on Interior’s efforts to acquire a land and mineral case processing system called the Automated Land and Mineral Record System (ALMRS)/Modernization and raised concerns about the Bureau of Land Management’s (BLM) and the prime contractor’s abilities to complete, integrate, and test the new software system and meet the current schedule. Among other things, we recommended that BLM take steps to strengthen its IT investment management processes and systems acquisition capabilities. ALMRS was terminated in 1999, but many of the management weaknesses we had identified remained.
In 2000 and 2001, we reported that BLM had been working to implement our recommendations, and we further recommended that BLM develop a plan to integrate all of the corrective actions necessary to implement our recommendations and establish a schedule for completing them. In August 2002, Interior’s OIG reported that the department did not have a process to ensure that IT capital investments or projects focused on departmental mission objectives or federal government goals and initiatives—principally because of its decentralized approach to IT investment management. The OIG further stated that only 20 investment projects—representing over 24 percent of the total—were subject to departmental review and approval in fiscal years 2002 and 2003 through submission of capital asset plans. Therefore, about $1 billion in Interior IT investment projects were not subject to department-level review and approval during those 2 years. Consistent with these reports, OMB reported in the President’s fiscal year 2003 budget that Interior was putting large sums of public funds at high risk for failure and that it had not complied with applicable legislative requirements that were established in the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996. OMB also reported that the department had not been able to adequately identify major projects within its IT portfolio or to demonstrate through adequate business cases the need for all of the major projects that it did identify. In addition, out of the 23 federal agencies included in the fiscal year 2003 budget supplemental document entitled Performance Information for Major IT Investments, the Department of the Interior was one of only two agencies that were unable to provide information on the actual performance of their IT investments.
In the President’s fiscal year 2004 budget, OMB reported that Interior had made significant strides toward more fully identifying its IT investments and strengthening the business cases that it developed for major IT projects, although 20 of its 35 initial submissions remained on OMB’s at-risk list.

Information Technology Investment Management Maturity Framework

Our IT Investment Management (ITIM) maturity framework, issued in May 2000, is a useful tool that can help Interior to improve its IT investment management capabilities. The ITIM framework can be used to determine both the status of an agency’s current IT investment management capabilities and what additional steps need to be taken to put more effective processes in place. The ITIM framework establishes a hierarchical set of five maturity stages. Each stage builds upon the lower stages and represents increased capabilities toward achieving both stable and effective (and thus mature) IT investment management processes. Except for the first stage—which largely reflects ad hoc, undefined, and undisciplined decision and oversight processes—each maturity stage is composed of critical processes that are essential to satisfying the requirements of that stage. These critical processes are defined by key practices that include organizational commitments (e.g., policies and procedures), prerequisites (e.g., resource allocation), and activities (e.g., implementing procedures). Key practices are the specific conditions that must be in place and tasks that must be performed for an organization to effectively implement the necessary critical processes. Figure 2 shows the five ITIM stages and a brief description of each stage. While the ITIM framework defines critical processes and key practices in general terms, our work at multitiered organizations, such as the Postal Service and the Department of Justice, showed that specific roles and responsibilities may vary by organizational tier.
For example, in such organizations, department-level management has overall responsibility for a process, while component-level management is responsible for ensuring that applicable requirements defined by the department are met and that operational units such as program offices take primary responsibility for performing the day-to-day activities that are described by ITIM, in accordance with management expectations. In such an environment, the presence of well-established and managed processes at lower levels of the organization can provide a level of assurance to the department concerning the quality and reliability of proposals for new investments, information reported on the actual performance of projects, and budget requests. In an agency like Interior, in which organizations at different levels execute various aspects of IT investment management, it is essential that top agency management establish and oversee processes throughout the agency to ensure that effective investment management practices are being adhered to. Over the past decade, Congress has enacted a series of laws that require centralized management and performance reporting to ensure that agencies can demonstrate that they are making the best funding decisions to support their mission needs. The Clinger-Cohen Act of 1996 specifically requires that the head of each agency designate a CIO to implement a process that maximizes the value and assesses and manages the risk of IT investments. Under the Clinger-Cohen Act, the Department of the Interior’s CIO has the ultimate responsibility for ensuring the cost- effectiveness of decisions made by program managers to expend funds on IT in support of the agency’s mission needs. Therefore, even though individual bureaus have CIOs or similar officers, the department’s CIO must monitor and evaluate the performance of its IT investment portfolio as a whole and report to the Secretary on compliance with applicable laws and policies. 
Scope and Methodology

To determine the department’s capabilities for managing its information technology (IT) investments, including its ability to effectively oversee bureau processes, we used several different criteria. To evaluate the underlying investment management processes, we used our Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity, Exposure Draft (ITIM Framework). We applied the framework as it is described in the exposure draft, except that we used a revised version of the IT Asset Inventory critical process, called IT Project and System Identification, after discussion with departmental officials at the beginning of this engagement. This revised critical process has been used in our evaluations since June 2001. At the start of our evaluation, we requested that the department conduct a self-assessment using the ITIM as criteria. Using this self-assessment and the supporting documentation as a starting point, we worked with Interior officials to further support their conclusions. Based on the department’s acknowledgement that it had only executed two of the key practices in Stage 3, we did not independently assess the capabilities at this stage or at Stages 4 and 5 of the framework. In our evaluation, an ITIM key practice was rated as “executed” only when we found sufficient evidence that the practice was already in place at the time of the review. We rated all other key practices as “not executed.” To gain additional insight into the department’s ability to oversee its components’ IT investment management processes, we reviewed documentation and conducted interviews on the department’s efforts to put the necessary management structures in place, whether the department had clearly defined what was expected of the bureaus, and whether it held the bureaus accountable to the necessary standards.
In order to evaluate the success of the department’s oversight activities, we also assessed the capabilities of Interior’s components. To determine the capabilities of the components, we collected documentation describing bureau CPIC and investment management processes and spoke with responsible officials at eight bureaus (the Bureau of Indian Affairs, the Bureau of Land Management, the Bureau of Reclamation, the Minerals Management Service, the National Park Service, the Office of Surface Mining Reclamation and Enforcement, the U.S. Fish and Wildlife Service, and the U.S. Geological Survey) and the National Business Center. To assess Interior’s plans for improving its IT investment management processes—including oversight of bureau processes—and to identify potential barriers to their implementation, we obtained and evaluated documents showing what management actions had been taken and what initiatives had been planned by the department. In addition, we interviewed officials in the Offices of Acquisition and Property Management, Budget, and the Chief Information Officer. We conducted our work at Interior’s headquarters offices in Washington, D.C.; bureau headquarters offices in Arlington, Virginia; Reston, Virginia; and Lakewood, Colorado; and at the National Business Center in Englewood, Colorado, from November 2002 through July 2003, in accordance with generally accepted government auditing standards. Interior’s Capacity to Effectively Manage IT Investments Is Limited In order to have the capabilities to effectively manage IT investments, a department should (1) have basic, project-level control and selection practices in place and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the department’s strategic goals, objectives, and mission. 
These practices may be executed at various organizational levels of the agency—including the bureau level—although overall responsibility for their success remains at the department level. The Department of the Interior is executing only 7 of the 38 key practices that are required by the ITIM framework to establish a foundation for IT investment management and only 2 of the 38 key practices required to manage investments as a portfolio. In addition, the department’s ability to oversee the successful implementation and execution of the required practices is limited, although a number of initiatives have been undertaken to address this issue. However, efforts to implement the reform initiatives have not moved forward as specified in implementing memoranda. Until Interior successfully implements stable investment management practices throughout the department, it will lack essential management controls over its IT investments, and it will be unable to ensure that the mix of investments it is pursuing is the best to meet the department’s strategic goals, objectives, and mission. Department Demonstrates Few Capabilities for IT Investment Management At the ITIM framework’s Stage 2 level of maturity, an organization has attained repeatable, successful investment control processes and basic selection processes at the project level. Through these processes, the organization can identify expectation gaps early and take appropriate steps to address them. According to the ITIM framework, critical processes at Stage 2 include (1) defining investment board operations, (2) collecting information about existing investments, (3) developing project-level investment control processes, (4) identifying the business needs for each IT project, and (5) developing a basic process for selecting new IT proposals. Table 1 describes the purpose for each of the Stage 2 critical processes. 
In a multitiered organization like Interior, the department is responsible for providing leadership and oversight for foundational critical processes by ensuring that written policies and procedures are established, repositories of information are created that support IT investment decision making, resources are allocated, responsibilities are assigned, and all of the activities are properly carried out where they may be most effectively executed. In such an organization, the CIO is specifically responsible for ensuring that the organization is effectively managing its IT investments at every level. If Interior’s bureaus do not have investment management processes in place that adequately support the department’s investment management process, its CIO must take action to ensure that the department is expending funds on IT investments that will fulfill its mission needs. The department is executing 7 of the 38 key practices associated with Stage 2 critical processes (or about 18 percent), primarily as a result of issuing the IT and Construction Capital Planning and Investment Control (CPIC) Guide in December 2002 and assigning responsibility for IT investment management functions to three oversight boards. Among other things, the CPIC Guide clearly describes the structure of the department’s IT investment review boards and how authority is to be aligned among bureau- and department-level boards; it assigns responsibility to the boards for its proposal selection process. However, the department has not executed most of the crucial key practices at the Stage 2 level. For example, information about the expected and actual cost and schedule for Interior’s IT projects, which could form the basis for selection decisions, is not being provided to the investment review boards. In addition, the department has few capabilities for overseeing IT projects and ensuring that business needs are adequately identified. 
Finally, in July 2003 Interior had not yet implemented most of the investment management processes that it describes in its CPIC Guide, and thus the members of its boards lacked direct experience in the execution of ITIM critical processes. Table 2 summarizes the status of the department’s Stage 2 critical processes, showing how many associated key practices the agency has executed. The department’s actions toward implementing each of the critical processes are discussed in the sections that follow. To help ensure executive management accountability and adequate oversight for IT capital planning and investment decisions, an organization should establish a governing board or boards with responsibility for selecting, controlling, and evaluating IT investments. According to the ITIM framework, effective operation of an IT investment board requires, among other things, that (1) board members have both IT and business knowledge, (2) board members understand the investment board’s policies and procedures and exhibit core competencies in using the agency’s IT investment policies and procedures, (3) the organization’s executives and line managers support and carry out board decisions, (4) the organization develop organization-specific process guidance that includes policies and procedures to direct the board’s operations, and (5) the investment board operates according to written policies and procedures. (The full list of key practices is provided later in table 3.) The department is executing two of the six key practices needed for its IT investment boards to operate effectively, as specified in the ITIM framework. Interior’s new CPIC Guide provides a conceptual framework for the operation of IT investment boards and a description of a five-phase investment process. 
It also specifies the membership of Interior’s IT investment boards in a way that should ensure the integration of technical and business knowledge as well as the appointment of senior-level executives to the boards. In its new CPIC Guide, Interior provides a conceptual overview of the department- and bureau-level review boards that are now responsible for overseeing IT investments. At the department level, these boards and their decision thresholds include the following: the Management Excellence Council, which is responsible for validating recommendations made to it by the Management Initiatives Team on IT investments; the Management Initiatives Team, which is responsible for reviewing, evaluating, and approving investments that are expected to cost $35 million or more, and other investments that are otherwise considered to be major; and the IT Management Council, which is responsible for reviewing, evaluating, and approving IT investments that are expected to cost between $5 million and $35 million. The Management Excellence Council, chaired by the Secretary of the Interior and comprising Assistant Secretaries and bureau heads, was created to provide leadership, direction, and accountability in meeting the administration’s goals and to provide overall direction for and oversight of the department’s management reform activities. Its IT investment management activities include validating the Management Initiatives Team’s recommendations and recommending strategic investments for the Secretary’s approval. The Management Initiatives Team, chaired by the Assistant Secretary for Policy, Management and Budget and comprising Deputy Assistant Secretaries and Deputy Bureau Directors, was established to support the Management Excellence Council in its broad activities. 
In the context of IT investment management, the Management Initiatives Team’s responsibilities include articulating investment strategy, validating scoring by the IT Management Council, and resolving duplication of effort. The IT Management Council, chartered in the CPIC Guide, is cochaired by the department CIO and a rotating cochair who is elected annually by the IT Management Council; it is composed of the bureau CIOs and representatives from several departmental offices. The IT Management Council is responsible for scoring potential investments against a predetermined set of criteria, maintaining the planning process and the investment portfolio, and identifying duplication of effort. The department has taken steps to ensure that investment boards are established at the bureau level as well. For example, Interior’s CPIC Guide requires that investment review boards be established by each of its bureaus to provide oversight for IT investments that are funded by Interior. This multilayered review of investments is designed to increase the likelihood that Interior’s IT investments will meet mission needs. However, at the time that we concluded our work in July 2003, the department could not assert that board members exhibited core competencies in using the IT investment approach because department level boards had very limited experience with IT proposal selection processes. Until the department implements an effective IT investment board process that is well established and understood throughout the agency, executives cannot be adequately assured that decisions made by the boards are being well supported and carried out by its executives and line managers or that each board is operating according to established policies and procedures. Table 3 summarizes our ratings for each key practice and the specific evidence that supports the ratings. 
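The dollar thresholds that divide review responsibility among the department-level boards can be expressed as a simple routing rule. The sketch below is illustrative only; the board names and thresholds come from the CPIC Guide as described above, while the function itself and its treatment of edge cases are assumptions.

```python
# Hypothetical routing sketch of the CPIC Guide's review thresholds described
# above. Board names and dollar figures are from the report; the function and
# its edge-case handling are illustrative assumptions.

def assign_review_board(expected_cost_millions: float,
                        is_major: bool = False) -> str:
    """Route a proposed IT investment to the review tier suggested by the
    CPIC Guide's dollar thresholds."""
    if expected_cost_millions >= 35 or is_major:
        # Investments of $35 million or more, or otherwise considered major,
        # go to the Management Initiatives Team, whose recommendations the
        # Management Excellence Council then validates.
        return "Management Initiatives Team"
    if expected_cost_millions >= 5:
        # Investments between $5 million and $35 million are reviewed by
        # the IT Management Council.
        return "IT Management Council"
    # Smaller investments would fall to bureau-level review boards.
    return "Bureau-level investment review board"

print(assign_review_board(40))   # Management Initiatives Team
print(assign_review_board(12))   # IT Management Council
```

A rule of this kind only has value if the boards actually receive proposals and exercise their authority, which is the gap the review identified.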
Agency boards, managers, and staff at all levels who are responsible for decisions about IT investment management must have at their disposal information about existing investments as well as new ones that are being proposed. Besides the fundamental business justification for each of the individual investments, decision makers must also consider the interaction of each continuing or proposed project with other projects that comprise the agency’s overall IT environment. In addition, opportunities to consolidate projects or systems and avoid redundant investments may be found when proposals are evaluated in this context. The information that could be used in this analysis includes current and planned system functions, physical location, organizational owners, and how funds are being expended toward acquiring, maintaining, and deploying these assets. A project and system inventory can take many forms and does not have to be centrally located or consolidated. The guiding principles for developing the inventory are that the information maintained should be both accessible—located where it is of the most value to investment decision makers—and relevant to the management processes and decisions that are being made. In multitiered organizations, information from an IT project and system inventory should be accessible and relevant to the decision processes of boards at all levels of the organization that are responsible for ITIM activities. An IT project and system inventory is also essential to successfully implementing certain other critical processes, including IT Project Oversight and Proposal Selection, and developing a comprehensive IT investment portfolio. According to the ITIM framework, organizations at the Stage 2 level of maturity allocate adequate resources for tracking IT projects and systems, designate responsibility for managing the project and system identification process, and develop related written policies and procedures. 
Resources required for this purpose typically include managerial attention to the process; staff; supporting tools, such as an inventory database; inventory reporting, updating, and query tools; and a method for communicating inventory changes to affected parties. Stage 2 organizations also maintain information on their IT projects and systems in one or more inventories according to written procedures, recording changes in data as required, and maintaining historical records. Access to this information is provided on demand to decision makers and other affected parties. (The full list of key practices is provided in table 4.) However, the department is not executing any of the seven key practices in this critical process. It does not have any written standards or existing repositories of information on Interior’s IT investments that meet ITIM standards, and it has not assigned responsibility or allocated resources for this purpose. In April 2003, departmental officials indicated that they are planning to use the Exhibit 53 report they prepared for OMB as their IT project and system inventory. However, according to the same officials, the current Exhibit 53 report for Interior does not constitute a comprehensive list of its IT investments. Moreover, this report does not include information on actual project cost and schedule or other information needed to support IT investment decisions. Developing an adequate project and system inventory has only recently become a priority at Interior. As a result, Interior’s IT investment boards do not currently have the information they need to make well-informed decisions regarding selecting, controlling, and evaluating investment decisions. Without information from such an inventory, the department- and bureau-level boards cannot ensure that duplication among existing and proposed IT investments is eliminated. 
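The kind of record such an inventory might hold can be sketched as follows. This is an illustrative schema only, not Interior's actual inventory design; the field names, the sample bureaus, and the duplication check are all assumptions.

```python
# Illustrative sketch of the fields an IT project and system inventory record
# might carry to support the decision processes described above. This is not
# Interior's actual schema; all names and fields are assumptions.

from dataclasses import dataclass, field

@dataclass
class InventoryRecord:
    name: str
    organizational_owner: str            # e.g., a bureau or departmental office
    physical_location: str
    current_functions: list = field(default_factory=list)
    planned_functions: list = field(default_factory=list)
    expected_cost: float = 0.0           # baseline estimate, in dollars
    actual_cost: float = 0.0             # expenditures to date, in dollars

    def overlaps(self, other: "InventoryRecord") -> bool:
        """Flag potential duplication: any shared current or planned function."""
        mine = set(self.current_functions) | set(self.planned_functions)
        theirs = set(other.current_functions) | set(other.planned_functions)
        return bool(mine & theirs)

# Two hypothetical records with a shared function would be flagged for review:
a = InventoryRecord("System A", "Bureau X", "Denver, CO",
                    current_functions=["payroll"])
b = InventoryRecord("System B", "Bureau Y", "Reston, VA",
                    planned_functions=["payroll", "benefits"])
print(a.overlaps(b))  # True
```

Keeping actual and expected cost in the same record is what lets the same inventory serve both duplication analysis and the project oversight comparisons discussed next.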
In addition, the boards cannot compare actual project performance with expectations and determine whether corrective actions should be taken. Table 4 summarizes our ratings for each key practice. According to the ITIM framework, effective project oversight requires, among other things, (1) having written policies and procedures for project management; (2) developing and maintaining an approved management plan for each IT project; (3) having written policies and procedures for oversight of IT projects; (4) making up-to-date cost and schedule data for each project available to the oversight boards; (5) reviewing each project’s performance by regularly comparing actual cost and schedule data to expectations; (6) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved; and (7) using information from the IT projects and systems inventory. (The complete list of key practices is provided in table 5.) For all IT projects, performance reviews should be conducted at least at each major life cycle milestone. In an organization such as Interior, it is essential that the department provide leadership and oversight of IT project management even though the day-to-day management of IT investments may be handled by bureau-level staff and the National Business Center. The department is executing 1 of the 11 key practices in this critical process by operating department-level IT investment boards. However, the other 10 key practices are not being executed, such as those requiring the development of written policies and procedures for project management or management oversight of IT projects. Moreover, the department currently has no consistent way of knowing the extent to which project management plans are being developed, approved, maintained, and reviewed. 
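The comparison at the heart of oversight practice (5), checking each project's actual cost and schedule against its baseline, can be sketched as follows. The figures and the 10 percent variance threshold are invented for the illustration; they are not Interior's criteria.

```python
# Illustrative sketch of the oversight check in key practice (5): compare a
# project's actual cost and schedule to expectations and flag candidates for
# corrective action. The 10% threshold and all figures are assumptions.

def needs_corrective_action(expected_cost: float, actual_cost: float,
                            expected_months: float, actual_months: float,
                            threshold: float = 0.10) -> bool:
    """Flag a project whose cost or schedule overrun exceeds the threshold."""
    cost_variance = (actual_cost - expected_cost) / expected_cost
    schedule_variance = (actual_months - expected_months) / expected_months
    return cost_variance > threshold or schedule_variance > threshold

# A project 20% over budget is flagged; one on baseline is not.
print(needs_corrective_action(1_000_000, 1_200_000, 12, 12))  # True
print(needs_corrective_action(1_000_000, 1_000_000, 12, 12))  # False
```

Without up-to-date actuals flowing to the boards, even a check this simple cannot be performed consistently.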
As a result, the department has no mechanisms for ensuring that up-to-date information on actual costs and schedule is being provided to the IT investment boards. Finally, Interior lacks an IT projects and systems inventory to capture performance information that can be used by its boards in the investment decision process. According to Interior officials, the department is not executing many of the key practices for Stage 2 IT project oversight because it currently relies on the bureaus to perform these management functions. However, since the department has not developed policies and procedures for the bureaus to follow in conducting IT project oversight, Interior is running the risk that underperforming projects will not be reported to the appropriate IT investment board. In the absence of effective board oversight, Interior executives do not have adequate assurance that projects are being developed on schedule and within budget. Table 5 summarizes our ratings for each key practice and the evidence that supports the ratings. Defining business needs for each project helps to ensure that projects support the organization’s mission goals and meet users’ needs. This critical process creates the link between the organization’s business objectives and its IT management strategy. According to our ITIM framework, effectively identifying business needs requires, among other things, (1) defining the organization’s business needs or stated mission goals, (2) identifying users for each project who will participate in the project’s development and implementation, (3) training IT staff adequately in identifying business needs, and (4) defining business needs for each project. (The complete list of key practices is provided in table 6.) 
The department is responsible for providing leadership and oversight for the identification and documentation of business needs for IT investments by issuing written guidance for this critical process and executing the associated key practices. However, given that knowledge of the actual business needs of Interior’s departmental offices and programs resides in the sponsors of IT investments, much of the work of identifying business needs must necessarily be performed at those levels of the organization. The department is executing two of the eight key practices for this critical process by defining mission goals in strategic planning documents and by ensuring that appropriately trained individuals identify the needs for its IT projects. However, the department is not executing the remaining key practices, such as those that involve ensuring that adequate resources are being provided and identifying all of its IT projects and systems in an inventory. As a result, the department could not identify specific users and business needs for all of Interior’s IT investments at the time of our review. In April 2003, the department provided training in linking projects to Interior’s IT strategic plan, but written policies and procedures for business needs identification have not been formalized. Also, the Exhibit 300 reports on IT investments that the department produces for OMB in support of the President’s budget—and which it identifies as the mechanism for capturing business needs—are not required for nonmajor IT investments. Because nonmajor projects comprised approximately 67 percent of Interior’s projects and 45 percent of its total IT expenditures in fiscal year 2003, business needs were not captured for many of Interior’s projects. The department was also unable to demonstrate that identified users participated in project management throughout a project’s life cycle. 
Office of Chief Information Officer (OCIO) officials explained that the department has not provided oversight of the process of identifying business needs, because it has historically relied on its IT investment sponsors to determine the business needs of the investments. However, until the department provides adequate leadership and oversight for this critical process that is well established and understood throughout the agency, executives cannot be adequately assured that sponsors of IT investments are consistently and objectively identifying user needs and linking investment proposals to the agency’s strategic goals. Table 6 summarizes our ratings for each key practice and the evidence that supports the ratings. Selecting new IT proposals requires an established and structured process to ensure informed decision making and management accountability. According to our ITIM framework, this critical process requires, among other things, (1) making funding decisions for new IT proposals according to an established process, (2) providing adequate resources for proposal selection activities, (3) using an established proposal selection process, (4) analyzing and ranking new IT proposals according to established selection criteria—including cost and schedule criteria—and (5) designating an official to manage the proposal selection process. While initial selection decisions may be made at the bureau level, the department should have in place clear, established criteria for selection and guidance regarding the structure and content of IT proposals. (The complete list of key practices is provided in table 7.) The department is executing two of the six key practices for this critical process by identifying the IT Management Council cochairs as the responsible authorities for the proposal selection process and by using the CPIC Guide’s funding process to make decisions on IT proposals. 
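The analyze-and-rank practice noted above, scoring each proposal against established criteria and prioritizing accordingly, might look like the following. The criteria names, weights, and proposal data are invented for the illustration; they are not the CPIC Guide's actual selection criteria.

```python
# Hypothetical illustration of scoring proposals against predetermined
# criteria and ranking them, as the proposal selection process describes.
# Criteria names, weights, and scores are invented for the sketch.

def score_proposal(proposal: dict, weights: dict) -> float:
    """Weighted score across established selection criteria."""
    return sum(weights[c] * proposal["scores"][c] for c in weights)

weights = {"mission_alignment": 0.4, "cost": 0.3, "schedule_risk": 0.3}
proposals = [
    {"name": "A", "scores": {"mission_alignment": 8, "cost": 6, "schedule_risk": 7}},
    {"name": "B", "scores": {"mission_alignment": 5, "cost": 9, "schedule_risk": 6}},
]

# Rank highest-scoring proposals first for funding consideration.
ranked = sorted(proposals, key=lambda p: score_proposal(p, weights),
                reverse=True)
print([p["name"] for p in ranked])
```

The value of such a scheme lies less in the arithmetic than in the discipline of applying the same published criteria to every proposal, which is precisely the step the review found had not been implemented.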
These achievements notwithstanding, the department has yet to implement most key practices—such as using established criteria to analyze each investment and prioritizing these investments accordingly. The CPIC Guide does contain requirements that address several of the objectives of the critical process for proposal selection, such as establishing a consistent approach to assessing the costs and benefits of proposed investments and developing clear performance expectations with quantifiable performance measures. If implemented, the CPIC Guide would satisfy many of the requirements of the key practices in this critical process. Until now, the department has focused on other aspects of its IT investment management process, such as the review of OMB Exhibit 300s for each major project, without using the selection criteria that are defined in the CPIC Guide. Moreover, while fundamental processes for proposal selection are described in the CPIC Guide, these had not been fully implemented at the time of our review. In the meantime, Interior’s bureaus have retained responsibility for selecting IT investments—without benefit of departmental review. Until the department implements the key practices described in the ITIM framework, and they are well established and understood throughout the agency, Interior cannot be adequately assured that it is consistently and objectively developing and selecting proposals that best meet the needs and priorities of the agency. Table 7 summarizes our ratings for the proposal selection critical process. Department Is Not Managing Interior IT Investments as a Portfolio An IT investment portfolio is an integrated, agencywide collection of investments that are assessed and managed collectively based on common criteria. 
Managing investments within the context of such a portfolio is a conscious, continuous, and proactive approach to expending limited resources on an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an agencywide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. Managing IT investments with a portfolio approach also allows an organization to determine priorities and make decisions about which projects to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. According to the ITIM framework, Stage 3 maturity includes (1) defining portfolio selection criteria, (2) engaging in project-level investment analysis, (3) developing a complete portfolio based on the investment analysis, (4) maintaining oversight over the investment performance of the portfolio, and (5) aligning the authority of the IT investment boards. Table 8 summarizes the purposes of each of the critical processes in Stage 3. The department provided evidence that it is executing 2 of the 38 key practices for Stage 3 by establishing and maintaining in its CPIC Guide written policies and procedures and associated criteria for aligning the decision-making authority of its IT investment review boards. In its self- assessment, Interior did not claim to be fully executing any other Stage 3 key practices. At the time of our review, the department’s efforts to implement ITIM were in the initial stages, since the CPIC Guide had been issued in December 2002. Moreover, OCIO efforts at IT management reform had to compete for resources with other ongoing priorities. Until now, Interior has focused its improvement activities in the preselect and select phases described by its CPIC Guide. 
Until the department fully implements the foundational critical processes in Stage 2 and then the critical processes for portfolio management in Stage 3, it will lack the capability to consider Interior’s investments in a comprehensive manner and determine whether it has the mix of IT investments that best meet the agency’s mission needs and priorities. Table 9 summarizes the status of the department’s Stage 3 critical processes, showing how many associated key practices the agency has executed. Department Has Limited Ability to Oversee IT Investments in the Bureaus The ability of a department-level CIO to effectively oversee IT investment management processes throughout the agency depends on the existence of appropriate management structures with adequate authorities and sufficient guidance. To its credit, Interior has taken several crucial initial steps to make this possible; it conducted a study of existing organizational structures, issued a secretarial order providing broad authorities to its CIOs, and issued a capital planning and investment control guide that provided a conceptual framework for improvements to the IT investment management process. However, Interior’s CIO has taken limited action to ensure that the secretarial order was implemented and that other required improvements to the process were made. The department had envisioned a certification process through which it would hold bureaus accountable for improving their investment management capabilities, but it has yet to implement this concept. Until sound management structures and a certification process are in place, the department’s ability to oversee the bureaus’ practices for investment management will be limited. Department and Bureau CIOs Are Not Positioned to Provide Leadership for IT Investment Processes Under the Clinger-Cohen Act of 1996, the CIO of each agency is responsible for effectively managing all of the agency’s IT resources. 
To comply with the act, Interior’s CIO is responsible for ensuring that the bureaus are implementing effective investment management processes that are appropriately aligned with the department’s processes. Our report on Maximizing the Success of Chief Information Officers describes the principles of successful CIO management in leading organizations. In such organizations, the CIO has been positioned for success, having been assigned clearly defined roles, responsibilities, and accountabilities. Because Interior has multiple levels of IT investment management authority, it is especially critical that the roles, responsibilities, and accountabilities of all the CIOs be clearly defined. In 2002, Interior contracted with Science Applications International Corporation (SAIC) to study the department and bureau CIO organizations and determine whether it was in compliance with the requirements of the Clinger-Cohen Act. SAIC concluded that, in the current environment, Interior’s CIO did not have adequate power—or the leverage of a formal structure with clear lines of authority and control of resources—to carry out its responsibilities under the act. The study pointed to a general lack of authority and resource control at the bureau level as well, which further inhibited the CIO’s ability to function. According to SAIC, in most of the bureaus, the CIOs lacked the authority to effect change among their subordinate IT staff and decision areas because they cannot allocate or withdraw funds and do not control hiring, training, or performance appraisals. On the basis of these findings, SAIC recommended that Interior establish formal lines of authority from the department’s CIO to the bureau CIOs and to IT staff at lower levels. 
On the basis of the SAIC study, and because of its desire to comply with the Clinger-Cohen Act, Interior issued Secretarial Order 3244, which acknowledged that authority and control over management of IT resources had not been fully established or coordinated in the department, resulting in significant variability among bureaus and offices in implementing IT functions and setting funding priorities. To rectify this situation, the order provides broad authorities to all of Interior’s CIOs. Among other things, the order requires all bureaus to standardize their IT functional areas to achieve continuity of responsibility and accountability throughout the department. Specifically, the order calls for establishing a function described as technology management, which encompasses IT investment management. The order assigns approval authority and management responsibility for all IT assets to bureau CIOs. On the basis of the order, every Interior organization with 5,000 or more employees must have a separate CIO position at the Senior Executive Service level. The individual in this position must be a fully participating member of the executive leadership/management teams and must report to the Deputy Director or Director of the bureau. For any office that reports directly to the Secretary or the Deputy Secretary of the Interior, the department’s CIO will serve as the CIO if those offices have not designated one. Consistent with the Clinger-Cohen Act, the order states that the department’s CIO is responsible for approving all IT expenditures. Interior’s CIO issued specific direction to the bureaus in November 2002 and in January 2003, indicating how to implement Secretarial Order 3244 and establishing a process for monthly status reporting, which was to begin on January 31, 2003. However, at the time of our review, only two bureaus had provided the required monthly status reports, and none of the bureaus had fully implemented the order. 
This lack of responsiveness is consistent with concerns described in the SAIC report that Interior’s CIO currently lacks adequate support from bureau CIOs to ensure that departmental efforts at improving IT investment management will be effectively implemented. Department Does Not Follow Through with Certification of the Bureaus’ IT Investment Management Processes According to the Clinger-Cohen Act and Interior’s own CPIC Guide, the department should take steps to ensure that Interior's bureaus implement effective capital planning and investment control processes. To execute this responsibility according to project management best practices, the department should clearly define its expectations for these processes and then hold the bureaus accountable to the standards it has established. At the time of our review, the department had specified initial expectations for the bureaus’ processes. On January 15, 2003, the department CIO issued a memorandum that called for the bureaus to immediately begin implementing more formal IT processes, using the CPIC Guide. The department held training sessions in which bureaus were informed that the Exhibit 300s they provide to the department for review as part of the annual budget formulation process must first be reviewed by their own IT investment review boards. The department emphasized during these sessions that the bureaus should work on making their Exhibit 53 reports on IT investments more complete and reliable. Although the Exhibit 53 reports do not include adequate information for IT investment management purposes—according to the ITIM framework—improving the reports will bring Interior one step closer to identifying and tracking IT projects and systems. This is a critical aspect of the investment management process that will provide better visibility of all IT projects to the department. 
Despite this initial instruction on its expectations, the department has yet to fully implement a certification process through which it can hold bureaus accountable for their IT investment management processes. With the issuance of its CPIC Guide in December 2002, the department began to define some criteria for certification of these processes. The guide states that, at a minimum, a bureau’s investment review board must maintain a documented description or charter outlining the bureau’s CPIC process and the roles and responsibilities of the board, the bureau offices, and any other entities that are involved in CPIC. In addition, the guide outlines other departmental expectations—such as six steps that need to be accomplished in the short term, along with establishing a bureau-level investment review board—but it does not explicitly state whether these are required for certification. During our interviews with staff from Interior’s IT Portfolio Management Division, officials confirmed that the certification process is still only a concept at Interior and that it has not been well defined. More specifically, the department has not established a date for the certification to begin or specified what corrective action will be taken if a bureau fails to be certified. Implementation of an effective certification process will provide the department with a mechanism for ensuring that the bureaus are operating in a manner that is consistent with the policies and procedures it establishes for ITIM key practices. Departmental officials confirmed that at the time of our review, OCIO efforts were concentrated on providing training for the preparation of bureau Exhibit 300 reports, discussed above, rather than on implementing the CPIC Guide’s provisions for a certification process. 
Until the department focuses resources on defining and enforcing standards for certifying bureau processes, the risk is high that bureaus may implement IT investment management processes that do not sufficiently support the departmental investment management process. Only by institutionalizing effective processes at both the department and the bureau levels can Interior ensure that it is optimizing its investments in IT and effectively assessing and managing the risks of these investments.

Department’s Efforts to Improve Investment Management Processes and Oversight Are Fragmented and Inadequate

Achieving successful reform of IT management requires an organization to develop a complete and well-prioritized plan for systematically correcting weaknesses in its existing capabilities. To properly focus and target this plan, an organization should first fully identify and assess current strengths and weaknesses (i.e., create an investment management capability baseline). As we have previously reported, this plan should, at a minimum, (1) specify measurable goals, objectives, milestones, and needed resources and (2) clearly assign responsibility and accountability for accomplishing well-defined tasks. The plan should also be documented and approved by agency leadership. In implementing such a plan, it is important that the organization measure and report progress against planned commitments and take appropriate corrective action to address deviations. In order to develop a focus for its reform efforts, Interior has made several attempts to document existing conditions and identify weaknesses in its organization. Between 2001 and 2003, OCIO hired three different contractors to perform studies of existing IT projects and systems, organizational reporting relationships and functions, and IT investment management practices. The META Group performed the first study, after which the SAIC study, described earlier, was completed to assess the earlier results.
G&B Solutions was then contracted to further elaborate and validate the earlier work, focusing on technical solutions and CIO authorities. In a separate effort in 2002, the department directed the bureaus to rate themselves in a number of areas that correspond to areas evaluated by OMB in the budget process. Further, on January 15, 2003, OCIO issued a memorandum that required bureaus to submit descriptions of their capital planning and investment control processes and IT investment board charters and to perform self-assessments of their IT investment management capabilities. However, the effectiveness of this particular effort was limited because no specific instructions were given on how to perform the self-assessments, making it difficult to compare results across bureaus. The Department of the Interior has indicated that it intends to create a comprehensive reform plan with target goals and measurement criteria, but this plan has not been fully developed. In November 2002, the department created a Program Management Office to implement IT management reforms by pulling together various improvement efforts and prioritizing them. However, as of July 2003, the Program Management Office did not have a formal charter or a budget, and its manager did not have a clearly defined role. In addition, this individual’s attention was being diverted away from issues of IT investment management to address other concerns, such as Interior’s court-ordered efforts to resolve issues with the Indian Trust Fund and related information security problems. The lack of clear accountability and responsibility for improvement efforts that an office such as this would have provided has resulted in initiatives that are not well integrated and do not support a unified plan. For example, no steps have been taken to integrate the requirements of Secretarial Order 3244 for CIO organizations with the bureau certification process established in the CPIC Guide.
In addition, the multiple efforts to develop an understanding of current conditions and identify weaknesses in the existing organization, described above, have not yielded a coherent view, despite the expenditure of considerable resources. Without committing to a plan that allows it to systematically prioritize, sequence, and evaluate improvement efforts, Interior jeopardizes its ability to establish mature investment processes, which include selection and control capabilities that would result in greater certainty about the outcomes of future IT investments.

Conclusions

The Department of the Interior lacks most of the fundamental IT investment management practices necessary to effectively and efficiently manage its IT resources. Only by effectively and efficiently managing these resources can the department gain opportunities to further leverage its IT investments and make better allocation decisions among many investment alternatives. Recent moves by senior executives to define an IT investment management approach—and to align the IT investment decision review process with the CIOs at both the department and bureau levels— demonstrate Interior’s realization that reform is necessary. Nonetheless, the department still finds itself without many of the capabilities it needs to ensure that Interior’s mix of IT investments best meets the agency’s mission and business priorities. Interior’s ability to guide and oversee investment practices throughout the agency is limited by its lack of mature investment management processes. The department has recognized that it needs to oversee bureau activities, and it has begun to establish the authority of bureau CIOs to manage IT investments and to implement certification of standard investment processes in the bureaus. However, until the department is able to ensure mature investment management capabilities at all levels, its ability to wisely select and effectively manage IT investments will be limited.
Interior’s success in resolving the weaknesses described in this report will depend on the department’s ability to plan and execute the implementation of robust investment management and related practices throughout the agency. However, the department’s efforts have suffered from a lack of unified planning, clear implementation guidance, supporting resources, and follow-up on requirements that have been established by the CIO. Until the department develops a comprehensive plan, supported by top management, that delineates performance expectations for process improvements, Interior’s prospects will remain limited for successfully developing the management capabilities that are necessary to make prudent decisions that maximize the benefits and minimize the risks of its IT investments.

Recommendations

To strengthen Interior’s capabilities for IT investment management and address the weaknesses discussed in this report, we recommend that the Secretary of the Interior direct Interior’s CIO to do the following: Develop a unified, comprehensive plan for implementing departmentwide improvements to the IT investment management process that are based on the Stage 2 and Stage 3 critical processes of our ITIM framework. Ensure that the plan focuses first on the weaknesses that this report identifies in the Stage 2 critical processes, before addressing those associated with higher stages of ITIM maturity, because Stage 2 processes collectively provide the foundation for building a mature IT investment management process. Specifically: Establish a timetable for the IT Management Council, Management Initiatives Team, and Management Excellence Council to begin operating according to the guidance described in the CPIC Guide.
Develop and issue policies and procedures to guide the IT project oversight as described by our ITIM framework, including the review of actual performance information against expected performance by the investment boards and the implementation of corrective actions when performance falls below acceptable levels. Implement these policies and procedures to accomplish the purpose of project oversight. Develop and issue policies and procedures to guide the project and system identification processes as described by the ITIM framework, including the specification of information required by the investment management process, the sources of such information, and the methods for collecting and retaining this information. Implement these policies and procedures to accomplish the purpose of IT project and system identification. Develop and issue policies and procedures to guide the identification of business needs as described by the ITIM framework, including the identification of business needs for all projects and the inclusion of users in project management throughout a project’s life cycle. Implement these policies and procedures to accomplish the purpose of identifying business needs. Establish a timetable for implementing IT proposal selection as described by Interior’s CPIC Guide. Ensure that the plan next focuses on Stage 3 critical processes, which are necessary for portfolio management, because, along with the Stage 2 foundational processes, these processes are necessary for effective management of IT investments. 
To further strengthen the department’s ability to oversee bureau investment management processes so that it may ensure that investment management is effectively carried out throughout the organization, the plan should also establish a timetable and specific implementation milestones for Secretarial Order 3244, and describe acceptable criteria for certification of bureau CPIC processes and establish a time frame for the certification of these processes at all bureaus. Ensure that the plan establishes a baseline of the agency’s capabilities, specifies measurable goals and time frames, and establishes review milestones. Establish a well-defined management structure for directing and controlling the unified plan with clear authority and responsibility. Ensure that the Management Excellence Council, which holds responsibility for department management reform activities, approves the plan. Implement the approved plan and report on progress made against the plan’s goals and time frames to the Secretary of the Interior every 6 months.

Agency Comments and Our Evaluation

The Department of the Interior’s Assistant Secretary for Policy, Management and Budget provided written comments on a draft of this report (reprinted in appendix II). In these comments, the Department of the Interior concurred with our recommendations and identified actions that it plans to take to improve IT investment management processes throughout the department. Specifically, it intends to leverage lessons learned in BLM’s implementation of the ITIM framework to accelerate the maturing of department practices. It also intends to develop and implement a comprehensive plan, approved by the Management Excellence Council, to address specific weaknesses that we identified in its foundational investment management practices and to move to full implementation of Secretarial Order 3244. In response to the department’s comments, we removed all descriptions of national critical infrastructure or Trust.
In its comments the department also provided us with additional information that reflects the ongoing progress it is making in implementing more mature investment management practices. As we have described in this report, Interior’s progress has been evident and is ongoing. In particular, the establishment of the ITMC and the release of the CPIC Guide have provided an organizational point of focus and a set of procedures to guide IT investment management. This has enabled the department to begin to implement new practices with a departmentwide scope. The information the department provided to us in its comments on the completed evaluation reflects the continuing implementation of plans described in this report. We strongly support this ongoing progress, and we will reflect the successful execution of key practices in following up on our recommendations. We are sending copies of this report to interested committees of Congress, to the Secretary of the Department of the Interior, and to the Chief Information Officer of the Department of the Interior. Copies will be made available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at 202-512-6240 or at koontzl@gao.gov. Additional GAO contact and staff acknowledgments are listed in appendix III.

Bureau Missions, Functions, and IT Investments

BIA provides federal services to approximately 1.4 million American Indians and Alaska Natives who are members of 562 federally recognized tribes in the 48 contiguous United States and in Alaska. The bureau administers 43,450,267 acres of tribally owned land, 11,000,000 acres of individually owned land, and 443,000 acres of federally owned land held in trust status. The bureau’s mission is to promote and support tribes on their future path through self-determination and to reduce administration by the bureau in nontrust areas.
BLM administers over 264 million surface acres of public land, about one-eighth of the land in the U.S., and approximately 700 million acres of federal subsurface mineral estate. Most of these lands are in the West and Alaska, and they are dominated by extensive grasslands, forests, high mountains, arctic tundra, and deserts. BLM is responsible for the management and use of a variety of resources on these lands, including energy and minerals, timber, forage, wild horse and burro populations, fish and wildlife habitat, recreation sites, wilderness areas, and archeological and historical sites. BLM balances the goals of providing opportunities for environmentally responsible recreation and commercial activities; preserving natural and cultural heritage resources; reducing threats to public health, safety, and property; providing land, resource, and title information; providing economic and technical assistance to Indian tribes and island communities; understanding and planning for the condition and use of the public lands; and restoring at-risk resources and maintaining functioning systems. USFWS is the primary federal agency responsible for the protection, conservation, and renewal of fish, wildlife, plants, and their habitats. It manages migratory bird populations, restores interjurisdictional fisheries, conserves and restores wildlife habitat, administers the Endangered Species Act, and assists foreign governments with their conservation efforts. USFWS oversees the Federal Aid in Fish and Wildlife Restoration Programs, which distribute hundreds of millions of dollars earned from excise taxes on fishing and hunting equipment to state fish and wildlife agencies. USFWS is the steward for nearly 93 million acres of public lands, including 529 refuges of the National Wildlife Refuge System, and it manages 67 national fish hatcheries for the restoration of the nation's fishery resources. 
USFWS also works closely with partners to support voluntary habitat development and foster aquatic conservation for fish and wildlife on nonfederal lands. MMS manages the nation's natural gas, oil, and other mineral resources on the Outer Continental Shelf. The agency also collects, accounts for, and disburses more than $5 billion per year in revenues from federal offshore mineral leases and from onshore mineral leases on federal and Indian lands. MMS includes two major programs, Offshore Minerals Management and Minerals Revenue Management. Offshore Minerals Management manages the mineral resources on the Outer Continental Shelf and has three regions: Alaska, the Gulf of Mexico, and the Pacific. Minerals Revenue Management collects, accounts for, and distributes revenues associated with mineral production from leased federal and Indian lands. NPS manages 379 parks and various historic preservation, conservation and recreation programs, and hosts 287 million visitors annually. The National Park System encompasses approximately 83.6 million acres in over three hundred areas, of which more than 4.3 million acres remain in private ownership. Park units are classified into three principal categories: natural areas, historical areas, and recreational areas. NPS's four goal categories are to preserve park resources; to provide for the public enjoyment and visitors’ experience of parks; to strengthen and preserve natural and cultural resources and enhance recreational opportunities managed by partners; and to ensure organizational effectiveness in supporting NPS's mission. OSM is the lead federal agency for carrying out the mandates of the Surface Mining Control and Reclamation Act, whose goal is to protect society and the environment from the adverse effects of surface coal mining operations.
OSM’s mission goal of Environmental Restoration addresses mining that occurred prior to the passage of Surface Mining Control and Reclamation Act in 1977, while its goal of Environmental Protection addresses mining since 1977. Environmental Restoration is accomplished through the Abandoned Mine Land Program, whose main purpose is to restore a safe and clean environment. As part of this, the Appalachian Clean Streams Initiative supports local efforts to eliminate environmental and economic impacts of acid mine drainage from abandoned coal mines. Environmental Protection focuses on current coal mining and is accomplished with the Surface Mining Program, which oversees 4.4 million acres of surface coal mines in 26 states and on the lands of three Indian tribes. The principal means of delivering environmental protection is through 24 primacy states that receive federal grant funding. USBR has developed and manages a limited natural water supply in the 17 western states. USBR works to meet the increasing water demands while protecting the environment and the public's investment. USBR has 348 reservoirs with a total storage capacity of 245 million acre- feet of water, 58 hydroelectric power plants, and over 300 recreation sites. USBR is the nation’s second largest producer of hydroelectric power in the western United States, generating more than 40 billion kilowatt hours of energy annually. USBR is the nation's largest water wholesaler; its water usage includes irrigation for one out of every five western farmers (140,000)—about 10 million acres of irrigated land; 10 trillion gallons of municipal, rural, and industrial water for over 31 million people; habitat support for wildlife refuges, migratory waterfowl, fish, and threatened and endangered species; and irrigation projects and potable water supplies for Indian tribes. 
USBR provides flood control benefits and drought contingency planning and assistance, and it provides water-based recreation activities for about 90 million visitors a year. USGS is the nation’s principal natural science and information agency. USGS conducts research, monitoring, and assessments to contribute to understanding the natural world—lands, water, and biological resources. USGS provides reliable, impartial information in the form of maps, data, and reports containing analyses and interpretations of water, energy, mineral and biological resources, land surfaces, marine environments, geologic structures, natural hazards, and dynamic processes of the Earth; this information is used to understand, respond to, and plan for changes in the environment. USGS describes, documents, and gains understanding of natural hazards and their risks through the study of earthquakes, volcanoes, landslides, geomagnetic field changes, floods, droughts, coastal erosion, tsunamis, wild land fire, and wildlife disease. Environmental and natural resources activities deal with physical, chemical, biological, and geological processes in nature and the impact of human actions on natural systems through studies including data collection, long-term assessments, ecosystems analysis, and the forecasting of future changes.

Comments from the Department of the Interior

GAO Contact and Staff Acknowledgments

In addition to the individual named above, William G. Barrick, Joanne Fiorino, Peggy A. Hegg, Alison Jacobs, Mary Beth McClanahan, and Nik Rapelje made key contributions to this report.

GAO’s Mission

The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
The Department of the Interior is responsible for diverse and complex missions ranging from managing America's public lands, mineral and water resources, and wildlife to providing satellite data to the military and scientific communities. To fulfill these responsibilities, Interior invests over $850 million annually--about 6 percent of its total annual budget--in communications and computing projects and systems. Interior's Office of the Secretary and its Chief Information Officer (CIO) are responsible for overseeing processes for managing these investments to ensure that funds are expended in the most cost-effective way in support of the agency's mission needs. GAO was asked to evaluate (1) departmental capabilities for managing the agency's information technology (IT) investments and (2) the department's actions and plans to improve these capabilities. The Department of the Interior has limited capability to manage its IT investments. Based on GAO's IT Investment Management (ITIM) Framework, which measures the maturity of an organization's investment management processes, the department is carrying out few of the activities that support critical foundational processes. As an initial step to improve its investment management capability, the department has issued a Capital Planning and Investment Control Guide, which describes its approach to IT investment management. However, it has thus far implemented few of the processes described in its own guide. In addition, it has yet to develop an adequate approach to identify existing projects and systems. In order to ensure strong investment management at all levels, the department has also specified a requirement for certifying bureau-level investment processes, but certification has not yet begun. 
Finally, in order to strengthen the CIO's ability to manage IT investments at all levels, the Secretary of the Interior has issued an order establishing the authority of the bureau-level CIOs; however, the order has not been fully implemented. In order to improve investment management processes, an organization needs to develop and implement a coherent plan, supported by senior management, which defines and prioritizes enhancements to its investment processes. While Interior has undertaken a number of initiatives designed to improve its investment management processes, the department has not yet developed a unified, comprehensive plan to achieve its objective of establishing effective investment management processes, nor has it committed the resources to successfully implement the necessary reforms. Without a well-defined process improvement plan and controls for implementing it, Interior will continue to be challenged in its ability to make informed and prudent investment decisions.
Background

The Trust Fund was established by the Airport and Airway Revenue Act of 1970 (P.L. 91-258) to help fund the development of a nationwide airport and airway system and to fund FAA investments in air traffic control facilities. It provides all of the funding for the Airport Improvement Program (AIP), which provides grants for construction and safety projects at airports; the Facilities and Equipment (F&E) account that funds technological improvements to the air traffic control system; and a Research, Engineering, and Development (RED) account. In fiscal year 2002, the Trust Fund provided 79 percent of the funding for FAA Operations, which represented almost 50 percent of Trust Fund expenditures. The Trust Fund is supported by 10 dedicated excise taxes. One of the major taxes is referred to as the passenger ticket tax, which includes the following three taxes: 7.5 percent tax on the price of domestic airline tickets, 7.5 percent tax on the value of awards of free or reduced-rate air fares (frequent flyer awards tax), and 7.5 percent tax on the price of domestic airline tickets to “qualified rural airports” (flight segment fees do not apply if this tax is levied). The remaining seven excise taxes that finance the Trust Fund include the following: $3 on each flight segment, indexed to inflation starting in 2002; 6.25 percent tax on the price charged for transporting cargo by air; $0.043 per gallon tax on commercial aviation jet fuel; $0.193 per gallon tax on general aviation gasoline; $0.218 per gallon tax on general aviation jet fuel; $13.40 tax on international arrivals, indexed to inflation; and $13.40 tax on international departures, indexed to inflation. In fiscal year 2002, the Trust Fund received about $10 billion in revenue from these taxes and interest.
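To make the rate structure above concrete, the sketch below applies the two passenger-side levies (the 7.5 percent ticket tax and the $3 flight segment fee) to a single hypothetical ticket. The fare, segment count, and function name are illustrative assumptions, not figures or code from this report.

```python
# Illustrative sketch only: applies the 7.5 percent ticket tax and the $3
# flight segment fee described above to a hypothetical domestic ticket.
# The fare and segment count are made-up inputs, not report data.

TICKET_TAX_RATE = 0.075  # 7.5 percent of the domestic ticket price
SEGMENT_FEE = 3.00       # $3 per flight segment (before inflation indexing)

def trust_fund_taxes(fare: float, segments: int, rural: bool = False) -> float:
    """Excise taxes accruing to the Trust Fund for one domestic ticket.

    The 7.5 percent ticket tax always applies; per the rules above, the
    per-segment fee is waived when the qualified-rural-airport rate is levied.
    """
    ticket_tax = TICKET_TAX_RATE * fare
    segment_fees = 0.0 if rural else SEGMENT_FEE * segments
    return ticket_tax + segment_fees

# A $300 itinerary flown as two segments: 7.5% of $300 plus two $3 fees,
# or about $28.50 in Trust Fund taxes.
print(round(trust_fund_taxes(300.00, 2), 2))
```

The same structure extends to the cargo and fuel taxes, which apply their rates to shipment price and gallons consumed rather than to passenger fares.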
As shown in figure 1, the passenger ticket tax was the largest single source of Trust Fund revenue, totaling about 47 percent of all receipts, followed by the flight segment tax at 15 percent of total receipts, and the international departure/arrival tax at about 13 percent of total receipts. Trust Fund expenditures totaled almost $12 billion in fiscal year 2002. As shown in figure 2, FAA Operations accounted for nearly half of Trust Fund expenditures, followed by AIP grant funding at 24 percent, F&E at 23 percent, and RED at almost 2 percent. FAA’s current authorization expires on September 30, 2003, and Congress is considering three proposals that would reauthorize funding for FAA. In the May 2, 2003, version of S. 824, the Senate proposes to authorize $43.4 billion from 2004 through 2006 for FAA programs, of which $34.4 billion would be funded from the Trust Fund, with the balance of $9.1 billion covered by the General Fund. In the May 15, 2003, version of H.R. 2115, the House Subcommittee on Aviation proposes to authorize $60 billion from 2004 through 2007 for FAA programs, of which $47.2 billion would be funded from the Trust Fund, with the balance of $12.8 billion covered by the General Fund. The President’s proposal authorizes $57.3 billion from 2004 through 2007 for FAA programs, of which $50.8 billion would be funded from the Trust Fund and the remaining $6.6 billion would be funded from the General Fund. Table 2 breaks down the distribution of the funding among FAA programs for each of the three expenditure scenarios through 2006. Under each proposal, the Trust Fund provides all of the funding for the AIP, F&E, and RED programs and funds between 58 and 79 percent of FAA Operations. The balance of FAA Operations is funded through the General Fund and not reflected in table 2. 
Projected Financial Outlook for the Trust Fund Is Positive but Depends on Realization of Forecasted Passenger Traffic Levels and Airfares

Over the next 3 years, the Trust Fund is projected to have sufficient revenue to fund authorized spending and end each year with an uncommitted balance under each of the three expenditure proposals. This positive financial outlook depends on the realization of FAA’s forecasted passenger traffic levels and airfares. As shown in figure 3, under the Senate’s and House’s proposals, the Trust Fund’s year-end uncommitted balance is projected to be over $4.4 billion over the next 3 years. Under the President’s proposal, the Trust Fund’s year-end uncommitted balance is projected to range between $2.9 billion in 2004 and $1 billion in 2006. The primary reason that the Trust Fund’s uncommitted balance would be higher under the Senate’s and House’s proposals is that they use the formula created in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21) to determine how much funding for FAA Operations should come from the Trust Fund, and the President’s proposal does not. Under AIR-21, the formula sets the amount of Trust Fund revenue that will be authorized for FAA Operations and RED in a given year equal to projected Trust Fund revenues (as specified in the President’s budget) minus the authorizations for the capital accounts (AIP and F&E) in that year. Thus, under the Senate’s proposal, the Trust Fund is projected to support $14.1 billion, or 61 percent of FAA Operations from 2004 through 2006. Under the House’s proposal, the Trust Fund is projected to support $13.3 billion, or 58 percent of FAA Operations from 2004 through 2006. In contrast, the President’s proposal specifies a set amount of Trust Fund revenue to be used for FAA Operations.
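The AIR-21 formula described above reduces to a single subtraction, which the sketch below restates in code. The dollar figures are hypothetical, assumed for illustration only, not amounts from this report or the President's budget.

```python
# Hedged sketch of the AIR-21 formula described above: the Trust Fund
# authorization for FAA Operations and RED in a given year equals projected
# Trust Fund revenues minus the authorizations for the capital accounts
# (AIP and F&E) in that year. Inputs and outputs are in billions of dollars.

def air21_operations_red(projected_revenue: float, aip: float, fe: float) -> float:
    """Trust Fund amount available for FAA Operations and RED under AIR-21."""
    return projected_revenue - (aip + fe)

# Hypothetical year (all figures in billions, assumed for illustration):
# $10.0B projected revenue less $3.4B AIP and $2.9B F&E leaves about
# $3.7B of Trust Fund money for Operations and RED.
print(round(air21_operations_red(10.0, 3.4, 2.9), 1))
```

Because the formula ties the Operations and RED authorization to projected revenues, lower revenues automatically shrink the Trust Fund's contribution to Operations rather than draining the uncommitted balance, which is why the AIR-21-based proposals absorb shortfalls better than a fixed-amount approach.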
Therefore, if Congress enacts the President’s proposal, the Trust Fund would provide $18.3 billion for FAA Operations from 2004 through 2006, or about 79 percent of its total estimated costs for Operations. Although the Trust Fund is projected to have a surplus over the next several years under each of the expenditure proposals, this projection depends to a significant extent on the realization of forecasted commercial passenger traffic levels and airfares. If passenger traffic or yields fall below the levels that FAA projected in November 2002, the Trust Fund may not have sufficient revenues to fund projected expenditures. For example, table 3 presents the projected Trust Fund balances under each expenditure proposal and shows the impact if revenues were 5 percent or 10 percent less than currently projected. The Trust Fund could absorb these revenue shortfalls while retaining a positive balance under the Senate’s and House’s proposals because the AIR-21 formula would limit appropriations from the Trust Fund for FAA Operations. In contrast, if revenues were 5 percent lower than projected, the uncommitted balance of the Trust Fund would reach zero during 2006 under the President’s proposal; if revenues were 10 percent lower than projected, the uncommitted balance would reach zero in 2005.

Suspending Some or All Taxes Accruing to the Trust Fund Would Reduce or Eliminate the Trust Fund’s Uncommitted Balance

Billions of Trust Fund revenue would be forgone if all taxes accruing to the Trust Fund were suspended for 1 year. As shown in figure 4, suspending all taxes would result in almost $10 billion in forgone Trust Fund revenues. The amount of Trust Fund revenues forgone under the other tax holiday scenarios would range from approximately $447 million if the cargo tax were suspended to nearly $5.2 billion if the passenger ticket taxes were suspended.
Under an all tax holiday, the Trust Fund’s uncommitted balance would reach zero by October 2003, no matter which expenditure proposal were adopted, as shown in figures 5 through 7. However, the other four tax holiday scenarios would affect the Trust Fund’s uncommitted balance in different ways under each of the three expenditure proposals. Figure 5 shows the effects of several tax holidays under the Senate’s proposal. Although the Trust Fund’s uncommitted balance would decrease under the other four tax holiday scenarios, it would not reach zero. For example, a passenger ticket tax holiday would decrease the Trust Fund’s uncommitted balance from $4.8 billion in 2002 to $2 billion in 2003 and to $2.1 billion in 2004, while a fuel tax holiday would reduce it to $4.1 billion in 2003 and to $4.2 billion in 2004. Similarly, as shown in figure 6, under the House’s proposal, the Trust Fund’s uncommitted balance would also decrease under the other four tax holiday scenarios, but it would not reach zero. For example, a flight segment tax holiday would decrease the Trust Fund’s uncommitted balance from $4.8 billion in 2002 to $3.5 billion in 2003 and to $3.6 billion in 2004, while a cargo tax holiday would reduce it to $4.3 billion in 2003 and to $4.3 billion in 2004. In contrast, as shown in figure 7, under the President’s proposal, the Trust Fund’s uncommitted balance would reach zero under three of the five tax holiday scenarios by the end of 2006. For example, a passenger ticket tax holiday would cause the uncommitted balance to reach zero by October 2003. A fuel tax holiday and a cargo tax holiday would be the only tax holiday scenarios in which the Trust Fund’s uncommitted balance would not reach zero by 2006 under the President’s proposal. Under a fuel tax holiday, the Trust Fund’s uncommitted balance would decrease from $4.8 billion in 2002 to $135 million in 2006, a decrease of about $4.7 billion.
Similarly, a cargo tax holiday would reduce the uncommitted balance to $495 million in 2006, a decrease of about $4.3 billion. A tax holiday under the President’s proposal would have a greater effect because that proposal would require the Trust Fund to support a larger percentage of FAA Operations compared with the Senate’s and House’s proposals. For example, if there were an all tax holiday and the President’s proposal were adopted, the Trust Fund would support 79 percent of FAA Operations. Under the Senate’s and House’s proposals, which adopt the AIR-21 funding formula for Operations, the amount of Trust Fund money spent on FAA Operations would be reduced in response to the amount of revenues lost from a tax holiday. In addition to forgone revenue and the elimination or reduction of the Trust Fund’s uncommitted balance, granting any kind of tax holiday could pose budgetary challenges for FAA. For example, as previously noted, a 1-year all tax holiday starting in April 2003 would cause the uncommitted balance of the Trust Fund to reach zero by October 2003 and might require FAA to make significant spending cuts to the aviation programs supported by the Trust Fund unless additional funding were authorized from the General Fund. If there were a 1-year all tax holiday, FAA officials said they would continue to maintain some FAA Operations, particularly air traffic control services, because air traffic control is considered an emergency function that involves the safety of human life. However, according to FAA officials, the agency would have to suspend activities for the AIP, F&E, and RED programs until April 2004 and use the funds appropriated for these suspended capital programs to continue funding FAA Operations first. According to FAA officials, additional support from the General Fund would also be needed to continue funding Operations during the first 6 months of fiscal year 2004.
FAA officials also stated that if a 1-year all tax holiday were granted, then under all three expenditure scenarios FAA might have to delay or terminate some multimillion dollar F&E contracts unless Congress authorized funding from the General Fund. FAA officials stated that while their contracts have clauses that limit liability, in their experience any remaining obligated funds for contracts in a given fiscal year that have not actually been expended would be used to offset contract termination costs. If there were a 1-year all tax holiday, FAA estimates it could incur in excess of $1 billion in contract termination costs. For example, according to FAA officials, terminating the National Aerospace System Implementation Services contract, which provides engineering support for the implementation of programs such as the Standard Terminal Automation Replacement System, would result in termination costs of $20 million. We reviewed FAA’s data on the unobligated balances of outstanding F&E contracts and verified that the amount totaled $1.5 billion. However, we did not review individual FAA outstanding F&E contracts to confirm FAA’s statement that, on the basis of its experience, any remaining obligated funds for contracts in a given fiscal year that have not actually been expended would be used to offset contract termination costs. Although FAA would not have to terminate contracts under the House’s and Senate’s proposals if there were a passenger ticket tax, flight segment tax, or fuel tax holiday, FAA’s ability to continue to fund its programs with Trust Fund revenue would be affected under the President’s proposal if one of these holidays were granted. For example, if passenger ticket taxes were suspended for 1 year, beginning in April 2003, the uncommitted balance would reach zero by October 2003.
Consequently, FAA officials stated that FAA’s AIP and F&E programs would have to be suspended from October 2003 through May 2004 if additional funds were not provided from the General Fund. However, to fully fund FAA Operations, particularly air traffic control services, Congress would have to authorize additional funding from the General Fund to offset revenue shortfalls created by these tax holidays.

Agency Comments

We provided the Department of Transportation with a draft of this report for its review and comment. FAA officials agreed with the information contained in this report and provided some clarifying and technical comments that we incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Transportation; and the Administrator, FAA. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me or Tammy Conquest at (202) 512-2834 if you have any questions. In addition, Jay Cherlow, Colin Fallon, Dave Hooper, and Richard Swayze made key contributions to this report.

Scope and Methodology

To determine the projected financial status of the multibillion dollar Airport and Airway Trust Fund (hereafter called the Trust Fund), we obtained from the Federal Aviation Administration (FAA) the financial projections for the Trust Fund that it had developed under the expenditure proposals included in the President’s reauthorization proposal. We subsequently asked FAA to develop similar projections using the expenditure scenarios in the proposals from the Senate Committee on Commerce, Science, and Transportation and the House Committee on Transportation and Infrastructure, Subcommittee on Aviation.
In addition, since the realization of FAA’s projections depends on passenger traffic levels and airfares, we asked FAA to develop two additional projections under each of the three expenditure proposals. Specifically, we asked FAA to project what would happen if tax revenues accruing to the Trust Fund from fiscal years 2003 through 2007 were 5 percent and 10 percent below the levels projected in FAA’s November 2002 forecasts. Accordingly, our findings on the financial outlook of the Trust Fund are based on FAA’s projections, rather than on any projections of our own. We reviewed the process, methodology, and sources of information used by FAA to make these projections and found them reasonable. We discussed the approach and results of our analysis with FAA officials who are responsible for making the projections, representatives from the Airports Council International and the Air Transport Association, and two academic experts. To assess the effect of various tax holidays on the financial status of the Trust Fund, we asked FAA to develop additional financial projections under various tax holiday scenarios. FAA developed these additional projections under each of the three expenditure proposals that we used in determining the financial condition of the Trust Fund. We then assessed the effect of each tax holiday scenario under each expenditure proposal by comparing the financial projection for the Trust Fund under that tax holiday scenario and expenditure proposal with FAA’s baseline projection. We used the following five tax holiday scenarios: An all taxes holiday, in which all taxes that accrue to the Trust Fund are suspended. A passenger ticket tax holiday, in which the passenger ticket tax, the rural airport tax, and the frequent flyer tax are suspended. A flight segment tax holiday, in which the flight segment tax is suspended.
A fuel tax holiday, in which the commercial aviation, general aviation gasoline, and general aviation jet fuel taxes are suspended. A cargo tax holiday, in which the cargo waybill taxes are suspended. The following assumptions were also included in the analyses: As requested in March 2003, we based our analysis on hypothetical tax holidays that would have begun on April 1, 2003, and ended on April 1, 2004. The FAA projections presented do not account for budgetary responses by FAA to the drop in revenues resulting from a tax holiday. Unless each dollar of lost revenue resulting from a tax holiday was replaced by General Fund revenues, FAA would adjust its spending plans, which in turn would have a direct effect on FAA’s projections. In addition, in projecting the effect of any particular tax holiday on the Trust Fund’s revenues, FAA set the tax rate to zero for the tax or taxes that were being suspended while keeping all other factors in its forecast model unchanged. That is, FAA’s projections do not take into account changes that might cause the Trust Fund’s revenues from one tax to increase when another tax was suspended (i.e., feedback effects). For example, a suspension of the passenger ticket taxes might lead to lower fares for air travelers, which in turn might cause more trips to be made, thereby increasing the Trust Fund’s revenues from the flight segment tax. We discussed with FAA officials the possibility of preparing additional projections that incorporated feedback effects to more thoroughly analyze the impact of tax holidays. However, we chose not to make such a request because preliminary analysis that we and FAA officials conducted indicated that these feedback effects would likely not be large enough to change our findings. Finally, to assess the effect that tax holidays would have on FAA’s ability to continue to use Trust Fund revenue to support its programs, we interviewed FAA officials.
We reviewed FAA data on the outstanding Facilities and Equipment (F&E) contracts that had unobligated balances and verified that they totaled $1.5 billion. However, we did not review individual FAA outstanding F&E contracts to confirm FAA’s statement that on the basis of its experience, any remaining obligated funds for contracts in a given fiscal year that have not actually been expended would be used to offset contract termination costs.
The multibillion dollar Airport and Airway Trust Fund (Trust Fund) provides most of the funding for the Federal Aviation Administration (FAA). The Trust Fund relies on revenue from 10 taxes, including passenger ticket, fuel, and cargo taxes. Concerns about the financial outlook of the Trust Fund have emerged recently given the downturn in passenger air travel, requests from the airlines to suspend some of the Trust Fund taxes, and the need to reauthorize FAA's major programs in 2003. GAO was asked to determine (1) the projected financial outlook of the Trust Fund and (2) how a 1-year suspension of various taxes accruing to the Trust Fund (i.e., a tax holiday) would affect its financial status. We were asked to assess five potential tax holidays that would have begun on April 1, 2003, and ended on April 1, 2004. GAO used a model developed by FAA that made financial projections for the Trust Fund using expenditure assumptions that were based on (1) the Senate Committee on Commerce, Science, and Transportation's May 2, 2003, and the House Subcommittee on Aviation's May 15, 2003, reauthorization proposals authorizing over $34 billion and (2) the President's proposal authorizing almost $38 billion from the Trust Fund. For each of these proposals, GAO asked FAA to model the effects of five different tax holidays. Over the next 3 years, with no change in tax rates and assuming that FAA's passenger traffic and airfare projections are valid, the Trust Fund is expected to continue to have sufficient revenue to cover authorized spending and end each year with a surplus, or an "uncommitted balance" as it is usually called, under each of the three expenditure scenarios we analyzed. For fiscal years 2004 through 2006, the potential uncommitted balances would range from over $4.4 billion (if Congress adopted either the House or the Senate proposal) to $1 billion, if the President's proposal were adopted.
Suspending some or all of the taxes that accrue to the Trust Fund for 1 year would reduce or eliminate the Trust Fund's uncommitted balance. As depicted below, if all taxes accruing to the Trust Fund were suspended, effective April 1, 2003, almost $10 billion in tax revenue would be forgone and the uncommitted balance would be eliminated by October 2003. The status of the Trust Fund would also differ according to the reauthorization proposal adopted and the taxes suspended. For example, suspending the passenger ticket tax and adopting either the House or Senate proposal would reduce the uncommitted balance to $1.8 billion and $2 billion, respectively, in 2006. However, suspending the same tax and adopting the President's proposal would eliminate the uncommitted balance by October 2003. The budgetary consequences of the remaining potential tax holidays would vary substantially. FAA officials stated that under the President's proposal, a passenger ticket tax holiday might require spending cuts to its capital programs, while a cargo tax holiday would require few if any spending cuts to its programs. In its comments on a draft of this report, FAA agreed with the report's findings and provided some clarifying comments that we incorporated where appropriate.
Documentation Was Produced to Support Undercover Investigation

As part of our undercover investigation, we produced counterfeit documents before sending our two teams of investigators out to the field. We found two examples of NRC documents by searching the Internet. We subsequently used commercial, off-the-shelf computer software to produce two counterfeit NRC documents authorizing the individual to receive, acquire, possess, and transfer radioactive sources. To support our investigators’ purported reason for having radioactive sources in their possession when making their simultaneous border crossings, a GAO graphic artist designed a logo for our fictitious company and produced a bill of lading using computer software.

With Ease, Investigators Purchased, Received, and Transported Radioactive Sources Across Both Borders

Our two teams of investigators each transported an amount of radioactive sources sufficient to manufacture a dirty bomb when making their recent, simultaneous border crossings. In support of our earlier work, we had obtained an NRC document and had purchased radioactive sources as well as two containers to store and transport the material. For the purposes of this undercover investigation, we purchased a small amount of radioactive sources and one container for storing and transporting the material from a commercial source over the telephone. One of our investigators, posing as an employee of a fictitious company, stated that the purpose of his purchase was to use the radioactive sources to calibrate personal radiation detectors. Suppliers are not required to exercise any due diligence in determining whether the buyer has a legitimate use for the radioactive sources, nor are suppliers required to ask the buyer to produce an NRC document when making purchases in small quantities. The amount of radioactive sources our investigator sought to purchase did not require an NRC document.
The company mailed the radioactive sources to an address in Washington, D.C.

Two Teams of Investigators Conducted Simultaneous Crossings at the U.S.-Canadian Border and U.S.-Mexican Border

Northern Border Crossing

On December 14, 2005, our investigators placed two containers of radioactive sources into the trunk of their rental vehicle. Our investigators – acting in an undercover capacity – drove to an official port of entry between Canada and the United States. They also had in their possession a counterfeit bill of lading in the name of a fictitious company and a counterfeit NRC document. At the primary checkpoint, our investigators were signaled to drive through the radiation portal monitors and to meet the CBP inspector at the booth for their primary inspection. As our investigators drove past the radiation portal monitors and approached the primary checkpoint booth, they observed the CBP inspector look down and reach to the right side of his booth. Our investigators assumed that the radiation portal monitors had activated and signaled the presence of radioactive sources. The CBP inspector asked our investigators for identification and asked them where they lived. One of our investigators on the two-man undercover team handed the CBP inspector both of their passports and told him that he lived in Maryland while the second investigator told the CBP inspector that he lived in Virginia. The CBP inspector also asked our investigators to identify what they were transporting in their vehicle. One of our investigators told the CBP inspector that they were transporting specialized equipment back to the United States. A second CBP inspector, who had come over to assist the first inspector, asked what else our investigators were transporting. One of our investigators told the CBP inspectors that they were transporting radioactive sources for the specialized equipment. The CBP inspector in the primary checkpoint booth appeared to be writing down the information.
Our investigators were then directed to park in a secondary inspection zone, while the CBP inspector conducted further inspections of the vehicle. During the secondary inspection, our investigators told the CBP inspector that they had an NRC document and a bill of lading for the radioactive sources. The CBP inspector asked if he could make copies of our investigators’ counterfeit bill of lading on letterhead stationery as well as their counterfeit NRC document. Although the CBP inspector took the documents to the copier, our investigators did not observe him retrieving any copies from the copier. Our investigators watched the CBP inspector use a handheld Radiation Isotope Identifier Device (RIID), which he said is used to identify the source of radioactivity, to examine the investigators’ vehicle. He told our investigators that he had to perform additional inspections. After determining that the investigators were not transporting additional sources of radiation, the CBP inspector made copies of our investigators’ drivers’ licenses, returned their drivers’ licenses to them, and our investigators were then allowed to enter the United States. At no time did the CBP inspector question the validity of the counterfeit bill of lading or the counterfeit NRC document.

Southern Border Crossing

On December 14, 2005, our investigators placed two containers of radioactive sources into the trunk of their vehicle. Our investigators drove to an official port of entry at the southern border. They also had in their possession a counterfeit bill of lading in the name of a fictitious company and a counterfeit NRC document. At the primary checkpoint, our two-person undercover team was signaled by means of a traffic light signal to drive through the radiation portal monitors and stopped at the primary checkpoint for their primary inspection.
As our investigators drove past the portal monitors and approached the primary checkpoint, they observed that the CBP inspector remained in the primary checkpoint for several moments prior to approaching our investigators’ vehicle. Our investigators assumed that the radiation portal monitors had activated and signaled the presence of radioactive sources. The CBP inspector asked our investigators for identification and asked them if they were American citizens. Our investigators told the CBP inspector that they were both American citizens and handed him their state-issued drivers’ licenses. The CBP inspector also asked our investigators about the purpose of their trip to Mexico and asked whether they were bringing anything into the United States from Mexico. Our investigators told the CBP inspector that they were returning from a business trip in Mexico and were not bringing anything into the United States from Mexico. While our investigators remained inside their vehicle, the CBP inspector used what appeared to be a RIID to scan the outside of the vehicle. One of our investigators told him that they were transporting specialized equipment. The CBP inspector asked one of our investigators to open the trunk of the rental vehicle and to show him the specialized equipment. Our investigator told the CBP inspector that they were transporting radioactive sources in addition to the specialized equipment. The primary CBP inspector then directed our investigators to park in a secondary inspection zone for further inspection. During the secondary inspection, the CBP inspector said he needed to verify the type of material our investigators were transporting, and another CBP inspector approached with what appeared to be a RIID to scan the cardboard boxes where the radioactive sources were placed. The instrumentation confirmed the presence of radioactive sources.
When asked again about the purpose of their visit to Mexico, one of our investigators told the CBP inspector that they had used the radioactive sources in a demonstration designed to secure additional business for their company. The CBP inspector asked for paperwork authorizing them to transport the equipment to Mexico. One of our investigators provided the counterfeit bill of lading on letterhead stationery, as well as their counterfeit NRC document. The CBP inspector took the paperwork provided by our investigators and walked into the CBP station. He returned several minutes later and returned the paperwork. At no time did the CBP inspector question the validity of the counterfeit bill of lading or the counterfeit NRC document.

Corrective Action Briefings

We conducted corrective action briefings with CBP and NRC officials shortly after completing our undercover operations. On December 21, 2005, we briefed CBP officials about the results of our border crossing tests. CBP officials agreed to work with the NRC and CBP’s Laboratories and Scientific Services to come up with a way to verify the authenticity of NRC materials documents. We conducted two corrective action briefings with NRC officials, on January 12 and January 24, 2006, about the results of our border crossing tests. NRC officials disagreed with the amount of radioactive material we determined was needed to produce a dirty bomb, noting that NRC’s “concern threshold” is significantly higher. We continue to believe that our purchase of radioactive sources and our ability to counterfeit an NRC document are matters that NRC should address. We could have purchased all of the radioactive sources used in our two undercover border crossings by making multiple purchases from different suppliers, using similarly convincing cover stories, using false identities, and having all of the radioactive sources conveniently shipped to our nation’s capital.
Further, we believe that the amount of radioactive sources that we were able to transport into the United States during our operation would be sufficient to produce two dirty bombs, which could be used as weapons of mass disruption. Finally, NRC officials told us that they are aware of the potential problems of counterfeiting documents and that they are working to resolve these issues. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time.

Contacts and Acknowledgments

For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Given today's unprecedented terrorism threat environment and the resulting widespread congressional and public interest in the security of our nation's borders, GAO conducted an investigation testing whether radioactive sources could be smuggled across U.S. borders. Most travelers enter the United States through the nation's 154 land border ports of entry. Department of Homeland Security U.S. Customs and Border Protection (CBP) inspectors at ports of entry are responsible for the primary inspection of travelers to determine their admissibility into the United States and to enforce laws related to preventing the entry of contraband, such as drugs and weapons of mass destruction. GAO's testimony provides the results of undercover tests made by its investigators to determine whether monitors at U.S. ports of entry detect radioactive sources in vehicles attempting to enter the United States. GAO also provides observations regarding the procedures that CBP inspectors followed during its investigation. GAO has also issued a report on the results of this investigation (GAO-06-545R). For the purposes of this undercover investigation, GAO purchased a small amount of radioactive sources and one secure container used to safely store and transport the material from a commercial source over the telephone. One of GAO's investigators, posing as an employee of a fictitious company located in Washington, D.C., stated that the purpose of his purchase was to use the radioactive sources to calibrate personal radiation detection pagers. The purchase was not challenged because suppliers are not required to determine whether prospective buyers have legitimate uses for radioactive sources, nor are suppliers required to ask a buyer to produce an NRC document when purchasing in small quantities. The amount of radioactive sources GAO's investigator sought to purchase did not require an NRC document. Subsequently, the company mailed the radioactive sources to an address in Washington, D.C.
The radiation portal monitors properly signaled the presence of radioactive material when our two teams of investigators conducted simultaneous border crossings. Our investigators' vehicles were inspected in accordance with CBP policy in most respects at both the northern and southern borders. However, GAO's investigators, using counterfeit documents, were able to enter the United States with enough radioactive sources in the trunks of their vehicles to make two dirty bombs. According to the Centers for Disease Control and Prevention, a dirty bomb is a mix of explosives, such as dynamite, with radioactive powder or pellets. When the dynamite or other explosives are set off, the blast carries radioactive material into the surrounding area. The direct costs of cleanup and the indirect losses in trade and business in the contaminated areas could be large. Hence, dirty bombs are generally considered to be weapons of mass disruption instead of weapons of mass destruction. GAO investigators were able to successfully represent themselves as employees of a fictitious company and present a counterfeit bill of lading and a counterfeit NRC document during the secondary inspections at both locations. The CBP inspectors never questioned the authenticity of the investigators' counterfeit bill of lading or the counterfeit NRC document authorizing them to receive, acquire, possess, and transfer radioactive sources.
Background As reported in our high-risk series, as much as 10 percent of health expenditures nationwide are lost to fraud and abuse. In this regard, the HHS Office of the Inspector General (HHS/OIG) reported that in fiscal year 1997, an estimated 11 percent, or $20 billion of Medicare fee-for-service payments were improper. The Congress enacted HIPAA, in part, to respond to the problem of health care fraud and abuse. HIPAA consolidated and strengthened ongoing efforts to attack fraud and abuse in health programs and provided new criminal and civil enforcement tools, as well as expanded resources for fighting health care fraud, including $104 million in fiscal year 1997 for HCFAC. Under the joint direction of the Attorney General and the HHS Secretary (acting through the HHS/OIG), HCFAC is to achieve the following: coordinate federal, state, and local law enforcement efforts to control fraud and abuse associated with health plans; conduct investigations, audits, and other studies of the delivery and payment for health care in the United States; facilitate the enforcement of the civil, criminal, and administrative statutes applicable to health care; provide guidance to the health care industry, including the issuance of advisory opinions, safe harbor notices, and special fraud alerts; and establish a national database of adverse actions against health care providers. Funds for the HCFAC program are appropriated from the trust fund to a newly created expenditure account, referred to as the Health Care Fraud and Abuse Control Account, maintained within the trust fund. The Attorney General and the Secretary of HHS jointly certify that the funds transferred to the control account are necessary to finance health care anti-fraud and -abuse activities, subject to limits for each fiscal year as specified in HIPAA. Annual minimum and maximum amounts are earmarked specifically for HHS/OIG activities for the Medicare and Medicaid programs. 
For example, of the $104 million available in fiscal year 1997, a minimum of $60 million and maximum of $70 million was earmarked for the HHS/OIG. By earmarking funds specifically for the HHS/OIG, the Congress ensured continued efforts by the HHS/OIG to detect and prevent fraud and abuse in the Medicare and Medicaid programs. DOJ and HHS refer to the difference between the maximum annual HCFAC appropriation and the maximum amount earmarked for the HHS/OIG as the “wedge amount.” If the HHS/OIG is allocated less than the maximum statutory amount, that difference is added to the wedge amount, which is available to fund health care fraud and abuse activities at other HHS entities and DOJ. Funds that HHS and DOJ do not spend to administer and operate the program may be made available to other federal, state, and local agencies engaged in health care fraud and abuse activities. See appendix I for additional detail regarding HCFAC funding. HIPAA also requires amounts equal to the following types of collections to be deposited in the trust fund: criminal fines recovered in cases involving a federal health care offense, including collections pursuant to section 1347 of Title 18, United States Code; civil monetary penalties and assessments imposed in health care fraud cases; amounts resulting from the forfeiture of property by reason of a federal health care offense, including collections under section 982(a)(6) of Title 18, United States Code; and penalties and damages obtained and otherwise creditable to miscellaneous receipts of the Treasury’s general fund obtained under the False Claims Act (sections 3729 through 3733 of Title 31, United States Code), in cases involving claims related to the provision of health care items and services (other than funds awarded to a relator, for restitution or otherwise authorized by law). HIPAA also authorizes the trust fund to accept unconditional gifts and bequests.
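The wedge-amount arithmetic described above can be sketched as follows. This is a minimal illustration; the function name and the use of millions of dollars as units are ours, not the report's.

```python
def wedge_amount(hcfac_cap, oig_earmark_max, oig_allocated):
    """Compute the HCFAC "wedge amount" in millions of dollars.

    The wedge is the maximum annual HCFAC appropriation minus the maximum
    amount earmarked for the HHS/OIG; any portion of that earmark maximum
    not actually allocated to the HHS/OIG is added to the wedge.
    """
    wedge = hcfac_cap - oig_earmark_max
    wedge += oig_earmark_max - oig_allocated  # unallocated earmark rolls into the wedge
    return wedge

# Fiscal year 1997: $104 million cap, $70 million HHS/OIG earmark maximum,
# and the HHS/OIG was allotted the full $70 million.
print(wedge_amount(104, 70, 70))  # → 34
```

The $34 million result matches the wedge amount that HHS and DOJ allocated among DOJ, HCFA, and other entities in fiscal year 1997. Had the HHS/OIG been allotted only, say, $65 million, the wedge would have grown to $39 million.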
Scope and Methodology To meet our first objective of identifying amounts deposited to the trust fund in fiscal year 1997 pursuant to HIPAA and the sources of these amounts, we reviewed HHS and DOJ’s fiscal year 1997 joint HCFAC report. We also obtained the trust fund’s fiscal year 1997 income statement, which received an unqualified opinion from the independent auditors, Cotton and Company. We compared amounts shown in the joint report as deposits of penalties and multiple damages, criminal fines, civil monetary penalties, and gifts and bequests with the respective amounts reported on the trust fund’s audited income statement. In addition, we selected 17 deposit transactions, focusing on large dollar amounts. We tested the selected transactions to determine whether they were correctly classified as deposits to the trust fund. Further, we interviewed personnel at various HHS and DOJ entities to gain an understanding of procedures and controls related to collecting and reporting deposits. To satisfy our second objective of identifying amounts appropriated from the trust fund in fiscal year 1997 for the HCFAC program and the reported justification for expenditures of such amounts, as well as our third objective of identifying expenditures from the trust fund for HCFAC activities not related to Medicare, we reviewed the joint report. We also reviewed documents supporting the allocation of the HCFAC appropriation, such as HHS’ and DOJ’s funding decision memorandum, proposals for the wedge amount, and reallocation documents. In addition, we selected nine expenditures and three obligations, focusing on large dollar amounts. We tested the selected transactions to determine whether they were justified for fraud- and abuse-related activities. 
Also, because payroll costs were predominantly allocated to the HCFAC appropriation, rather than accounted for directly, we reviewed the allocation methodologies at two entities—HHS/OIG and DOJ’s Criminal Division—to determine whether the methodologies were reasonable. Further, we interviewed personnel at various HHS and DOJ entities to gain an understanding of their procedures for allocating the HCFAC appropriation and reporting related expenditures, including non-Medicare related expenditures. To identify any savings to the trust fund, as well as any other savings, resulting from expenditures from the trust fund for the HCFAC program, which was our fourth objective, we reviewed the joint report. We reviewed all recommendations and the resulting cost savings as reported in the HHS/OIG’s fiscal year 1997 semiannual reports to determine whether such cost savings related to the HCFAC program. In addition, we selected 10 cost savings items from the fiscal year 1997 semiannual reports and reviewed supporting documentation to determine whether such cost savings related to fiscal year 1997 and were adequately substantiated. Further, we interviewed HHS/OIG personnel to determine their methodology for estimating cost savings. We reviewed deposit, appropriation, and savings information reported in HHS’ and DOJ’s joint report. Our review of reported trust fund deposit, appropriation, and savings information was conducted in accordance with standards established by the American Institute of Certified Public Accountants (AICPA)—Attestation Standards sections 100.03 through 100.52. A review is substantially less in scope than an audit. Accordingly, we do not express an opinion on amounts reported in the HHS and DOJ 1997 joint report. 
In response to HIPAA’s provision that we report on other aspects of the operation of the trust fund as we consider appropriate, we reviewed the status of HIPDB, the national health care fraud and abuse data collection program, which is a requirement of HIPAA. To gain an understanding of the program’s implementation and current status, we reviewed the program status report prepared by the Health Resources and Services Administration (HRSA). We also interviewed knowledgeable personnel responsible for HIPDB’s development and reviewed various documents relating to the program. Specifically, we discussed the planning and current status of the data bank with personnel from HHS/OIG and HRSA, the organizations primarily responsible for HIPDB. We did not perform a systems review or consider whether HIPDB, as planned, will be compliant with Year 2000 requirements. We performed work and contacted officials at the Health Care Financing Administration in Baltimore, Maryland; HHS Headquarters, the HHS/OIG, the Administration on Aging, and DOJ’s Justice Management Division, Executive Office of the United States Attorneys, Criminal Division, and Civil Division in Washington, D.C.; HRSA and HHS’ Program Support Center in Rockville, Maryland; and the Pennsylvania Eastern District United States Attorneys Office in Philadelphia, Pennsylvania. We conducted our work from February 1998 through May 6, 1998, in accordance with generally accepted government auditing standards, which incorporate AICPA standards and provide additional audit standards. We requested comments on a draft of this report from the Secretary of HHS and the Attorney General or their designees. The Inspector General of HHS provided us with written comments, which are reprinted in appendix II. On May 15, the Chief of DOJ’s Health Care Fraud Unit provided us with oral comments. Both agencies’ comments are discussed in the “Agency Comments” section.
Amounts Deposited to the Trust Fund HHS and DOJ reported total deposits of $130.7 million to the trust fund in fiscal year 1997 pursuant to HIPAA. These deposits are reported as resulting almost entirely from penalties and multiple damages obtained under the False Claims Act and criminal fines. Table 1 presents the total reported deposits to the trust fund in fiscal year 1997 pursuant to HIPAA as reported by HHS and DOJ in their fiscal year 1997 joint HCFAC report. We found no material weaknesses in the procedures that HHS and DOJ have put in place for identifying and reporting deposits pursuant to HIPAA. In addition, nothing came to our attention to suggest that HHS and DOJ did not accurately classify HIPAA deposits in their fiscal year 1997 joint HCFAC report, except for a $5 million overstatement of criminal fines that they noted in the joint report. Penalties and damages obtained under the False Claims Act and criminal fines resulting from health care fraud cases, as reported by HHS and DOJ, comprised about 68 percent and 32 percent, respectively, of the deposits to the trust fund pursuant to HIPAA. DOJ’s Civil Division in Washington, D.C., and Financial Litigation Units in United States Attorneys Offices located throughout the country collect penalties and damages resulting from health care fraud cases. They report collection information to DOJ’s Debt Accounting Operations Group, which in turn centrally accounts for collections of penalties and multiple damages and reports to the Department of the Treasury the amounts to be deposited to the trust fund. Clerks of the Administrative Office of the United States Courts located throughout the country collect criminal fines resulting from health care fraud cases and report these collections to the Financial Litigation Unit associated with their districts.
The Financial Litigation Units report criminal fine collections to DOJ’s Executive Office of the United States Attorneys in Washington, D.C., which centrally reports the amount of criminal fines collected to the Department of the Treasury. We found that the amounts shown in the joint report as deposits of penalties and multiple damages, criminal fines, civil monetary penalties, and gifts and bequests agreed with the respective amounts reported on the trust fund’s audited fiscal year 1997 income statement. In addition, we found that the 17 deposit transactions we reviewed totaling approximately $43 million were accurately reported and classified as deposits to the trust fund. We also found that deposits reported in fiscal year 1997 primarily resulted from actions initiated prior to the creation of HCFAC. According to DOJ officials, investigation and litigation of health care fraud cases generally span several years. Once cases are settled, it may take several more years before any resulting fines, penalties, and damages are paid in full. Consequently, deposits to the trust fund reported in fiscal year 1997 pursuant to HIPAA essentially resulted from prior years’ investigation and litigation efforts. For example, in fiscal year 1997, DOJ reported nearly $41 million in deposits to the trust fund pursuant to HIPAA as a part of a $319 million settlement with SmithKline Beecham Clinical Labs, which was the result of a 3-year task force effort targeting unbundling schemes perpetrated by independent clinical laboratories. Also, a 7-year investigation of home health agency fraud resulted in a $255 million settlement with First American Home Health Care of Georgia, formerly ABC Home Health Services, in fiscal year 1997. DOJ reported almost $20 million in deposits to the trust fund pursuant to HIPAA in fiscal year 1997 as a result of this settlement.
Similarly, investigation and litigation activities initiated in fiscal year 1997 will most likely result in collections in future years. Amounts Appropriated From the Trust Fund In fiscal year 1997, the Attorney General and HHS Secretary certified the entire $104 million appropriation as necessary to carry out the HCFAC program. We found no material weaknesses in HHS’ and DOJ’s process for allocating the HCFAC appropriation for fraud and abuse control purposes. The Attorney General and HHS Secretary entered into a memorandum of understanding which laid the groundwork for allocating funds among program participants. In applying for funds, applicants were required to explain how proposed activities conformed to the statute and the HCFAC program and to provide a spending plan. HHS and DOJ jointly reviewed proposals and made funding decisions for the HCFAC funds. In this first year, HHS and DOJ did not make the final funding decisions for allocating the wedge funds until December 1996. Also, HHS and DOJ did not grant funds to other federal, state, and local agencies until July 1997, after requesting, reviewing, and approving proposals for HCFAC funds. Table 2 presents fiscal year 1997 allocations and obligations for the HCFAC program. The HHS/OIG was allotted $70 million, the maximum statutory amount authorized, to build upon the policies and practices of Operation Restore Trust (ORT), while strategically increasing resources dedicated to fraud activities, enhancing existing Medicare fraud protection activities, and pursuing new anti-fraud initiatives. The HHS/OIG reported that HCFAC funding allowed it to open six new investigative offices and three new audit offices and increase its staff levels by approximately 240 in fiscal year 1997. The following briefly summarizes the reported allocation of the $34 million wedge amount and the purposes for which those funds were used. 
The Department of Justice was allocated a total of $22.2 million primarily to increase efforts to litigate health care fraud cases and provide fraud training courses. DOJ reported that in fiscal year 1997, it had established a total of 208 new positions for health care fraud enforcement, including 116 attorneys, 26 paralegals, and 66 support positions. HCFA was allocated $5.3 million—$1.8 million to expand its survey and certification of Medicare coverage reviews tested under ORT, $1.9 million for the HCFA Customer Information System, and $1.6 million to extend a contract with the Los Alamos National Laboratory. Medicare coverage review surveys are intended to provide fiscal and regional home health intermediaries, who process and pay Medicare claims, with better information to assess overpayments and implement collection procedures. The HCFA Customer Information System is designed to identify potential targets for Medicare fraud and abuse investigations. HCFA received funding to purchase hardware for the system and provide connectivity to the HHS/OIG and DOJ. HCFA also received HCFAC funding to extend its contract with the Los Alamos National Laboratory to develop algorithms that could be implemented in software to identify suspicious providers and patterns of abuse. HRSA was allocated $2 million to design and implement the adverse action database, referred to as the Healthcare Integrity and Protection Data Bank (which is discussed later in this report). The HHS Office of the General Counsel was allocated $1.8 million primarily to support the expected increased litigation workload resulting from HIPAA. The Administration on Aging was allocated $1.1 million to continue its outreach, education, and training efforts demonstrated in ORT. Eleven other federal and state agencies were allocated a total of $1.5 million for health care fraud and abuse prevention and detection activities.
We found no material weaknesses in HHS and DOJ procedures to identify and report HCFAC expenditures. HCFA performs the accounting for the control account, from which HCFAC expenditures are made. HCFA sets up allotments in its accounting system for each of the HHS and DOJ entities receiving HCFAC funds. The HHS and DOJ entities account for their HCFAC obligations and expenditures in their respective accounting systems and report them to HCFA on a monthly basis. HCFA records the obligations and expenditures against the appropriate allotments in its accounting system. We reviewed supporting documentation, such as obligating documents and invoices, for nine expenditure transactions totaling $726,000. We also reviewed three obligation transactions totaling $3.3 million for which no expenditures had been made in fiscal year 1997. In addition, we reviewed the methodology used to allocate payroll costs to the HCFAC program at two entities—the HHS/OIG and DOJ’s Criminal Division—whose fiscal year 1997 payroll costs allocated to HCFAC totaled $38 million. We found that (1) the expenditures and obligations we tested related to HHS and DOJ funding decisions and the proposals approved for HCFAC funds and (2) the transactions appeared justified for fraud and abuse activities. In addition, we found no material weaknesses in the payroll cost allocation methodologies we reviewed. Non-Medicare Expenditures We were not able to identify HCFAC program expenditures from the trust fund not related to Medicare because the HHS/OIG and DOJ do not separately account for or monitor such expenditures. HIPAA restricts the HHS/OIG’s use of HCFAC funds to the Medicare and Medicaid programs. According to HHS/OIG officials, they use HCFAC funds only for audits, evaluations, or investigations related to Medicare or Medicaid. The officials also stated that while some activities may be limited to either Medicare or Medicaid, most activities are generally related to both programs. 
Because HIPAA does not preclude the HHS/OIG from using HCFAC funds for Medicaid efforts, the HHS/OIG does not believe it is necessary or beneficial to account for such expenditures separately. Similarly, DOJ officials do not believe that it is practical or beneficial to separately account for non-Medicare expenditures due to the nature of health care fraud cases. HIPAA permits DOJ to use HCFAC funds for health care fraud activities involving other health care programs. According to DOJ officials, health care fraud cases usually involve several health care programs, including Medicare and health programs administered by other federal agencies, such as the Department of Veterans Affairs, the Department of Defense, and the Office of Personnel Management. Consequently, it is difficult to separately charge personnel costs and other litigation expenses to specific parties in health care fraud cases. Also, according to DOJ officials, even if Medicare is not a party in a health care fraud case, the case may provide valuable experience in health care fraud matters, allowing auditors, investigators, and attorneys to become more effective in their efforts to combat Medicare fraud. Neither HHS nor DOJ has plans to identify these expenditures in the future. Savings to the Trust Fund In this first year of the HCFAC program, we were unable to quantify the savings to the trust fund, or any other savings, resulting from expenditures from the trust fund due to the nature of health care anti-fraud and -abuse activities. As discussed earlier, audits, evaluations, and investigations can take several years to complete. Once they are completed, it can take several more years before recommendations or initiatives are implemented. Likewise, it is not uncommon for litigation activities to span many years before a settlement is reached.
Consequently, any savings resulting from health care anti-fraud and -abuse activities funded by the HCFAC program in fiscal year 1997 will likely not be realized until subsequent years. In their joint report, HHS and DOJ reported approximately $6.1 billion of cost savings during fiscal year 1997 resulting from implementation of HHS/OIG recommendations and other initiatives. However, as the recommendations and other initiatives relate to actions that predate the HCFAC program, the cost savings cannot be associated with expenditures from the trust fund pursuant to HIPAA. We found that $4 billion of the reported $6.1 billion of cost savings related to the Medicare program. The remaining $2.1 billion was specific to the Medicaid program and, thus, would not affect the trust fund since the Medicaid program is funded with general appropriations. We have discussed this with the HHS/OIG, which has agreed to provide cost savings information related to the Medicare and Medicaid programs separately in future reports. Status of HIPDB In response to HIPAA’s provision for us to report on other aspects of the trust fund, we determined the status of development and implementation of the Healthcare Integrity and Protection Data Bank (HIPDB). This database was to provide a way of tracking criminal convictions, civil judgments, and other adverse actions against healthcare providers, suppliers, and practitioners on a nationwide basis. As required by HIPAA, the database was to be established by January 1, 1997. While HHS did not believe it could develop the database by that date, it planned to have an interim database operational by March 1998. As mentioned previously, HHS and DOJ allocated $2 million of the fiscal year 1997 HCFAC appropriation for this effort. HHS officials advised us that the project has been redirected. They stated that there will not be an interim database as previously planned and that the database will not be operational until at least May 1999.
We plan to include an evaluation of the database project in our next report under HIPAA. Agency Comments and Our Evaluation In commenting on this draft report, DOJ stated that it had no formal comments. HHS did not raise any issues relating to the facts presented in the report. However, HHS offered a technical comment that DOJ agreed with in its oral comments. We have incorporated the agencies’ comments as appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committees on Judiciary, the Secretary of HHS, the Attorney General, and other interested parties. Copies will be made available to others on request. Please contact me at (202) 512-4476 if you or your staff have any questions. Major contributors to this letter are listed in appendix III. Additional Detail on HCFAC Funding HIPAA provided for appropriations of up to $104 million to the control account for fiscal year 1997, of which $60 million (the minimum) to $70 million (the maximum) was earmarked for HHS/OIG activities involving Medicare and Medicaid. In addition, HIPAA authorizes the HHS/OIG to retain for current use certain reimbursements for the costs of conducting investigations and audits and monitoring compliance plans. In fiscal year 1997, these reimbursements were reported to total $540,000. According to HHS/OIG officials, these funds will be used to open two new investigative offices in Vermont and New Hampshire in fiscal year 1998. Also, HIPAA provided for appropriating an additional $47 million from the general fund of the Treasury to the control account for transfer to the Federal Bureau of Investigation (FBI) to (1) prosecute, investigate, and audit health care matters and (2) develop and deliver provider and consumer education regarding compliance with fraud and abuse provisions. 
HIPAA provides for annual increases of 15 percent in HCFAC funding through the year 2003, after which the appropriation for HCFAC and the amount earmarked for the HHS/OIG remain the same. Table I.1 summarizes the HCFAC funding limits for fiscal years 1997 through 2003. Table I.2 presents the HHS/OIG’s annual funding in fiscal year 1996, prior to HIPAA, through fiscal year 2001. (Table I.2 shows the Hospital Insurance Trust Fund transfer for 1996 and 1997 (actual) and for 1998 through 2001 (estimated).) The projected amounts for fiscal years 2000 and 2001 are based on the assumption that the HHS/OIG’s discretionary funding will remain at the estimated fiscal year 1999 level of $29 million and the HHS/OIG will be allotted the maximum statutory amount authorized. Comments From the Department of Health and Human Services The following are GAO’s comments on HHS’ letter dated May 18, 1998. GAO’s Comments 1. Now footnote d of table 1. The footnote has been modified to reflect the agency’s comment. 2. Now footnote 4. The footnote has been modified to reflect the agency’s comment. Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. Deborah A. Taylor, Assistant Director Maria Cruz, Senior Audit Manager Anastasia Kaluzienski, Senior Audit Manager Vera Seekins, Audit Manager Diane Morris, Senior Auditor Sandra Silzer, Auditor Maria Zacharias, Communications Analyst
Pursuant to a legislative requirement, GAO reviewed the first joint report issued by the Department of Justice (DOJ) and the Department of Health and Human Services (HHS) on the fiscal year (FY) 1997 deposits to the Federal Hospital Insurance Trust Fund and the allocation of the Health Care Fraud and Abuse Control Program (HCFAC) appropriation, focusing on: (1) the amounts deposited to the trust fund and the sources of such amounts; (2) the amounts appropriated from the trust fund for the HCFAC program and the justification for the expenditure of such amounts; (3) expenditures from the trust fund for HCFAC activities not related to Medicare; and (4) any savings to the trust fund, as well as any other savings, resulting from expenditures from the trust fund for the HCFAC program. GAO noted that: (1) the HHS and DOJ joint report for FY 1997 reported that $130.7 million was deposited to the trust fund pursuant to the Health Insurance Portability and Accountability Act (HIPAA); (2) the sources of these deposits were primarily penalties and damages and criminal fines resulting from health care fraud audits, evaluations, investigations, and litigation activities initiated prior to implementation of the HCFAC program; (3) the joint report also stated that $104 million was appropriated from the trust fund for the HCFAC program in FY 1997; (4) of the $104 million, HHS and DOJ allocated the maximum--$70 million--to the HHS Office of Inspector General (OIG) to increase its Medicare and Medicaid fraud activities; (5) the remaining $34 million was allocated to: (a) DOJ, which received $22.2 million primarily to increase litigative efforts and to provide fraud training; (b) the Health Care Financing Administration (HCFA), which received $5.3 million for various initiatives related to health care fraud and abuse, including the development of a new information system to identify potential targets for fraud investigations; and (c) other federal and state agencies, which received the remaining $6.5
million for a variety of activities, including increased litigation; development of a new adverse action data bank; and outreach, education and training; (6) GAO found no material weaknesses in HHS' and DOJ's processes for accumulating this information, and nothing came to GAO's attention to lead it to believe that the amounts related to HIPAA deposits and the allocation of the HCFAC appropriation reported by HHS and DOJ in their joint report were inaccurate or unsupported; (7) GAO could not identify expenditures from the trust fund for HCFAC activities not related to Medicare because neither HHS OIG nor DOJ separately accounts for or monitors those expenditures; (8) HIPAA restricts HHS OIG's use of HCFAC funds to Medicare and Medicaid activities; (9) furthermore, health care fraud cases often involve more than one health care program; (10) thus, it is difficult to identify non-Medicare related expenditures; (11) GAO also could not determine the magnitude of savings to the trust fund, or other savings, resulting from trust fund expenditures for the HCFAC program during FY 1997; and (12) finally, the implementation of the Healthcare Integrity and Protection Data Bank, which is to be an important tool to keep unscrupulous providers from having access to Medicare and other health care programs, has been delayed.
Background DOD relies on its research laboratories and test facilities as well as industry and academia to develop new technologies and systems that improve and enhance military operations and ensure technological superiority over adversaries. Yet, historically, DOD has experienced problems in bringing technologies out of the lab environment and into real use. At times, technologies do not leave the lab because their potential has not been adequately demonstrated or recognized. In other cases, acquisition programs—which receive the bulk of DOD’s funding in research, development, testing and evaluation of technology—are simply unwilling to fund final stages of development of a promising technology, preferring to invest in other aspects of the program that are viewed as more vital to success. Other times, they choose to develop the technologies themselves, rather than rely on DOD labs to do so—a practice that brings cost and schedule risk since programs may well find themselves addressing problems related to technology immaturity that hamper other aspects of the acquisition process. And often, DOD’s budgeting process, which requires investments to be targeted at least 2 years in advance of their activation, makes it difficult for DOD to seize opportunities to introduce technological advances into acquisition programs. In addition, it is challenging just to identify and pursue technologies that could be used to enhance military operations given the very wide range of organizations inside and outside of DOD that are focused on technology development and the wide range of capabilities that DOD is interested in advancing. In recognizing this array of challenges, DOD and Congress have established a number of “technology transition” programs, each with a particular focus. (See table 1.) 
The Advanced Concept Technology Demonstration (ACTD) program, for example, was initiated by DOD in 1994 as a way to get technologies that meet critical military needs into the hands of users faster and at less cost than the traditional acquisition process. Under this program, military operators test prototypes that have already been developed and matured in realistic settings. If they find the items to have military utility, DOD may choose to buy additional quantities or just use the items remaining after the demonstration. In 1980, DOD established the Foreign Comparative Testing (FCT) Program to identify, evaluate, and procure technologies that have already been developed and tested in other countries—saving DOD the costly burden of maturing the technology itself. Other programs include those that seek to quickly identify and solve production problems associated with technology transition (the Manufacturing Technology Program—MANTECH) and to partner with the commercial sector in completing projects that are useful to both military and industry (the Dual Use Science and Technology program). Even taken together, however, these programs represent a very small portion of DOD dollars spent on applied research and advanced technology development—about $9 billion annually—and considerably less of total money spent on the later stages of technology development, which includes an additional $60 billion spent on advanced component development and prototypes, largely within weapons acquisition programs. As such, they cannot single-handedly overcome transition problems, but rather demonstrate various ways to ease transition and broaden participation from the industrial base. Three of the more recent initiatives include the TTI and DACP, both established by Congress in fiscal year 2003, and the Quick Reaction Fund, established by DOD the same year. 
TTI is focused on speeding the transition of technologies developed by DOD’s S&T programs into acquisition programs, while DACP is focused on introducing innovative and cost-saving technologies developed inside and outside DOD. The Quick Reaction Fund is focused on field testing technology prototypes. All three programs are managed by DOD’s Office of Defense Research and Engineering, which reports to the Under Secretary of Defense for Acquisition, Technology and Logistics. Together, these three programs received about $64 million in fiscal year 2005–a fraction of the $9.2 billion DOD invested in applied research and advanced technology development the same year and a relatively small budget compared to some of the other transition programs. Nevertheless, DOD has been increasing its investment in these programs and plans to further increase it over the next few years. (See figure 1.) Table 2 highlights similarities and differences between DACP, TTI, and Quick Reaction Fund. Table 3 provides examples of projects that have already been funded. Technology Transition Programs Offering Benefits, but It Is too Early to Determine Their Impact The three transition programs, which are being implemented consistent with congressional intent, reported that benefits can already be seen in many projects, including improvements to performance, affordability, manufacturability, and operational capability for the warfighter. While such benefits may have eventually been achieved through normal processes, program officials believe the three transition programs enabled DOD to realize them sooner due to the immediate funding that was provided to complete testing and evaluation as well as attention received from senior managers. 
DOD officials also emphasized that these programs are calling attention to emerging technologies that have the potential to offer important performance gains and cost savings but, due to their size and relative obscurity, may otherwise be overlooked when competing against larger-scale technologies or technologies already deemed vital to a particular acquisition program’s success. Another benefit cited for the DACP is an expansion of the defense industrial base, because the program invites participation from companies and individuals that have not been traditional business partners with DOD. Nevertheless, it is too early for us to determine the impact that these programs have had on technology transition. At the time we selected projects to review, few projects had been completed. In addition, the programs had limited performance measures to gauge the success of individual projects or track return on investment over time. The following examples highlight some of the reported benefits of individual projects. Host Weapons Shock Profile Database—DOD spends a significant amount of time and resources testing new accessories (e.g., night vision scopes) for Special Operations Forces weapons. Currently, when new accessories are added, they must go through live fire testing to determine if they work properly and will meet reliability standards. This process can take several months to complete because the acquisition office must schedule time at a test range. Program officials must also identify and pay for an expert to conduct the testing and pay for the ammunition used in the test. The DACP is funding the test and evaluation of a database that will simulate the vibration or shock of various machine guns in order to test new accessories for a given gun. This will eliminate almost all of the testing costs mentioned above and greatly reduce the amount of time needed for testing. 
The project office estimates that it will save almost $780,000 per year in ammunition costs alone. Enhanced Optics for the Rolling Airframe Missile—The Rolling Airframe Missile is part of the Navy’s ship self-defense system to counter attacks from missiles and aircraft. However, the missile experiences operational deficiencies in certain weather conditions, and the program has had problems producing components for the optics. The DACP is providing funding to a small business to test and evaluate a new sapphire dome and optics for the missile to resolve these problems. Program officials estimate that program funding will accelerate the development of a solution by 1 to 2 years. If the DACP project is successful, an added benefit will be that the dome material will be readily available from manufacturers in the United States instead of a single overseas supplier, as is currently the case. Water Purification System—For tactical situations in which deployed troops do not have quick and easy access to potable water, this pen-sized purification device allows soldiers to treat up to 300 liters of any available, non-brackish water source on one set of lithium camera batteries and common table salt. The pen eliminates the risk of soldiers’ exposure to diseases and bio-chemical pollutants. TTI funding was used to purchase approximately 6,600 water pens for distribution to the military services. In addition, TTI funding enabled this item to be placed on a General Services Administration schedule, through which approximately 8,600 additional water pens have been purchased by DOD customers. DOD and the company that produces the pen donated hundreds of these systems to the tsunami relief effort in Southeast Asia. Dragon Eye—The Dragon Eye is a small, unmanned aerial vehicle with video surveillance capabilities used by the Marines. 
To address concerns over chemical and biological threats to troops in Iraq, the Quick Reaction Fund funded the integration of a small chemical detection and biological collection device on the Dragon Eye. The low-flying Dragon Eye can tell troops in real time where and when it is collecting samples, and in cases where a plume is detected, it can determine the direction the plume is moving. According to program officials, funding from the Quick Reaction Fund allowed the chemical and biological detection capability to be developed 2 years ahead of schedule. The technology was available to a limited number of Special Operations Forces at the beginning of the Iraqi conflict. Despite the evident benefits of certain projects, it is too early to determine the programs’ impact on technology transition. At the time we selected projects for review, only 11 of 68 projects started in fiscal years 2003 and 2004 had been completed, and, of those, only 4 were available to warfighters. These include one TTI project—a miniaturized water purification system that is now being offered to the warfighter through a General Services Administration schedule—and three projects under the Quick Reaction Fund: the Dragon Eye chemical and biological sensor, planning software used by Combatant Commanders dealing with weapons of mass destruction targets, and special materials that strengthen unmanned aerial vehicles. Since the time we selected projects, 20 have been reported as completed and 13 have been reported as available to warfighters. The latest project completion information by program is shown in table 4. It is important to note that, even though 20 TTI and Quick Reaction Fund projects are considered to be complete, not all of the capabilities have reached the warfighter. For example: The T58 Titanium Nitride Erosion Protection is a TTI project that has transitioned to an acquisition program but has not yet reached the warfighter. 
The project is being developed to improve the reliability of T-58-16A helicopter engines used in Iraq. While the compressor blades are designed for 3,000 operating hours, the Marine Corps has had to remove engines with fewer than 150 operational hours due to sand ingestion. The project received funding from the TTI in fiscal years 2003 and 2004 to develop a titanium nitride coating for engine blades that would significantly mitigate erosion problems in a desert environment. According to program documents, blades with the new coating will be included in future production lots beginning in July 2005. Modification kits will also be developed for retrofitting engines already produced. Program officials expect the project will double the compressor life of the engine in a sand environment and save about $12 million in life-cycle costs through fiscal year 2012. The Ping project, funded by the Quick Reaction Fund, is an example of a project that is considered complete even though its prototype was never field tested by the warfighter. The Air Force had hoped to broaden the capability of the microwave technology it used to identify large objects such as tanks or cars so that it could also detect concealed weapons or explosives—such as suicide vests. However, the project was cancelled after initial testing revealed that the technology was not accurate enough to determine the microwave signatures of small arms or suicide vests that could have numerous configurations and materials. DOD officials stated that, even though the project was unsuccessful, they gained a better understanding of microwave technologies and are continuing to develop these technologies for other applications. The long-term impact of the programs will also be difficult to determine because the technology transition programs have a limited set of metrics to gauge project success or the impact of program funding over time. 
While each funded project had to identify potential impact in terms of dollar savings, performance improvements, or acceleration to the field as part of the proposal process, the actual impact of specific projects, as well as of the transition programs as a whole, is not being tracked consistently. The value of having performance measures, as well as DOD’s progress in adopting them for these transition programs, is discussed in the next section of this report. Selection, Management and Oversight, and Assessment Processes Could Be Improved by Adopting Additional Practices To ensure that new technologies can be effectively transitioned and integrated into acquisitions, transition programs need to establish effective selection, management and oversight, and assessment processes. For example, programs must ensure that accepted proposals rest on a sound business case: the technologies being transitioned are fairly mature and in demand, and transition schedules and costs fit within the program’s criteria. Once projects are selected, there needs to be continual and effective communication between labs and acquisition programs so that commitment can be sustained even when problems arise. To ensure that the return on investment is being maximized, the impact of programs must be tracked, including cost and time savings as well as performance enhancements. Our work over the past 7 years has found that high-performing organizations adopt these basic practices as a means for successfully transitioning technologies into acquisitions. Moreover, several larger DOD technology transition programs, such as the ACTD program and some Defense Advanced Research Projects Agency (DARPA) projects, embrace similar practices and have already developed tools to help sustain commitment, such as memorandums of agreement between technology developers and acquirers. Both DARPA and ACTD manage budgets that are considerably larger than those of the programs included in this review. 
As such, the level of detail and rigor associated with their management processes may not be appropriate for TTI, DACP, or the Quick Reaction Fund. However, the concepts and basic ingredients of their criteria and guidance could serve as a useful starting point for the smaller programs to strengthen their own processes. The three programs we reviewed adopted these practices to varying degrees. Overall, the DACP had disciplined and well-defined processes for selecting, managing, and overseeing projects. The TTI had disciplined and well-defined processes for selecting projects, but less formal processes for management and oversight. The Quick Reaction Fund was the least formal and disciplined of the three; its officials believed success was being achieved through flexibility and a high degree of senior management attention. All three programs had limited performance measures to gauge progress and return on investment. Generally, we found that the more structured and disciplined a program’s management processes, the fewer problems it encountered with individual efforts. Selection Success in transitioning technologies from a lab to the field or an acquisition program hinges on a transition program’s ability to choose the most promising technology projects. This means choosing technologies that can substantially enhance an existing or new system through better performance or cost savings, and technologies that are at a fairly mature stage, in other words, suitable for the final stages of testing and evaluation. A program can only do this, however, if it is able to clearly communicate its purpose and reach the right audience to submit proposals in the first place. It is also essential that a program have a systematic process for determining the relative technical maturity of the project as well as for evaluating other aspects of the project, such as its potential to benefit specific acquisition programs. 
Involving individuals in the selection process from various functions within an organization—e.g., technical, business, and acquisition—further helps to ensure that the right projects are being chosen and that they will have interested customers. An analytical tool that can be particularly useful in selecting projects is a technology readiness level (TRL) assessment, which gauges the maturity of a technology on a scale ranging from paper studies (level 1), to prototypes that can be tested in a realistic environment (level 7), to an actual system that has proven itself in mission operations (level 9). Our prior work has found TRLs to be a valuable decision-making tool because they can presage the likely consequences of incorporating a technology at a given level of maturity into a product development. As further detailed in table 5, the DACP program has a fairly robust selection process. The program relies on internet-based tools to communicate its goals, announce its selection process, and ensure that a broad audience is reached. As a result, it receives a wide array of proposals, which the program office assesses for their potential to generate improvements to existing programs as well as for actual interest from the acquisition community. The DACP also solicits technical experts from inside and outside DOD to assess potential benefits and risks. Once the number of projects is whittled down, the program takes extra steps to secure commitments from acquisition program managers as well as program executive officers. The program’s popularity, however, has had some drawbacks. For example, the sheer number of proposals has tended to overwhelm DACP staff and slow down the selection process, particularly in the first year. In addition, while technology benefits and risks are assessed in making selection decisions, DACP does not formally confirm the technology readiness levels being reported. 
The TTI program also has a fairly rigorous selection process, with specific criteria for selection, including technology readiness, and a team of senior DOD S&T officials responsible for disseminating information about the program in their organizations, assessing their organizations’ proposals against TTI criteria as well as other criteria they developed, and ranking their top proposals. The program, which is focused on reaching DOD’s S&T community rather than outside industry, had been communicating in a relatively informal manner, and it was unclear during our review to what extent the TTI was reaching its intended audience. The program, however, has been taking steps to strengthen its outreach to the S&T community. In addition, TTI does not confirm TRLs. At the time of our review, the Quick Reaction Fund’s selection process was much less structured and disciplined than those of DACP and TTI. This was by design, because the program wants to select projects quickly and get them out to the field where they can be of use in military operations in Iraq, Afghanistan, and elsewhere. As a result, however, the program experienced selection-related problems—for example, significant gaps in knowledge about technology readiness led to the cancellation of one project. To program officials, the risk associated with less formal selection is worth the benefit of being able to move rapidly evolving technologies into an environment where they can begin to immediately enhance military operations and potentially save lives. Nevertheless, the program is now taking steps to strengthen its selection processes. Management and Oversight Selecting promising projects for funding is not enough to ensure successful transition. Program managers must also actively oversee implementation to make sure that project goals are being met and the program is working as intended and to identify potential barriers to transition. They must also sustain commitment from acquirers. 
Moreover, the transition program as a whole must have good visibility over progress and be positioned to shift attention and resources to problems as they arise. A tool that has proven particularly useful for other established DOD technology transition programs is designating individuals, preferably with experience in acquisitions, operations, or the S&T world, as “deal brokers” or agents to facilitate communication between the lab and the acquisition program and to resolve problems as they arise. DARPA, for example, employs such individuals, as do some Navy-specific transition programs. Both have found these agents to be integral to transition success. Another tool that is useful for sustaining commitment from acquirers is a formal agreement. Our previous work found that best practice companies develop agreements with cost and schedule targets to achieve and sustain buy-in and that the agreements are modified as a project progresses to reflect more specific terms for accepting or rejecting a technology. DARPA develops similar agreements that describe how projects will be executed and funded as well as how projects will be terminated if the need arises. The agreements are signed by high-level officials, including the director of DARPA and senior-level representatives of the organizations DARPA is working with. The ACTD program develops “implementation directives” that clarify roles and responsibilities of parties executing an ACTD, time frames, funding, and the operational parameters by which military effectiveness is to be evaluated. These agreements are also signed by high-level officials. DACP has fairly robust management and oversight mechanisms. Status is monitored via formal quarterly reporting as well as interim meetings that, at a minimum, involve the customer, the developer, and the DACP project manager. The meetings provide an opportunity to ensure the acquisition program is still committed to the project and to resolve problems. 
Though formal memoranda of agreement are not usually employed, the program establishes test and evaluation plans that detail pass/fail criteria so that funding does not continue for projects that experience insurmountable problems. TTI also employs periodic status reports and meetings; however, communication has not been as open. In two cases, projects ran into significant problems that had not come to the attention of the TTI program office: in one case, loss of acquisition program office support; in the other, unaddressed logistics issues that prevented a smooth transition of the technology. As a result, the TTI office thought the projects had transitioned when, in actuality, significant problems still needed to be addressed. Per legislation, TTI had also established a formal council composed of high-level DOD officials to help oversee the program; however, the Council has met only once in 2 years, while the act requires that it meet at least semiannually. In addition, there is some confusion among Council members and others we spoke with as to what the purpose of the Council should be—that is, whether it should focus on TTI only or on broader transition issues. Congressional officials said they intended the Council to focus on broader transition issues and how best to solve them. Although the Quick Reaction Fund does not require status reports to assess progress, project managers are required to submit after-action reports. However, these were not regularly reviewed by the program office. We identified several problems that arose during transition that were not known to the Quick Reaction Fund program manager. The program manager is currently taking steps to improve the management and oversight of projects. For example, a website has been developed to help monitor and execute the program. Among other things, the website will allow for the automatic collection of monthly status reports. 
Assessment Though the transition programs we reviewed are relatively small in scale compared to other transition programs in DOD, the government’s investment is still considerable, and it will continue to grow if DOD’s funding plans for the programs are approved. As a result, it is important that these programs demonstrate that they are generating a worthwhile return on investment—whether through cost savings to acquisition programs, reduced times for completing testing and evaluation and integrating technologies into programs, or enhanced performance and new capabilities. Such information can enable transition program managers to identify what is or is not working well within a program and how well the program is measuring up to its goals, as well as to make trade-off decisions among individual projects. On a broader level, it can enable senior managers and oversight officials to compare and contrast the performance of transition programs across DOD. Finding the right measures for this purpose is challenging, however, given the wide range of projects being pursued, the different environments to which they are being applied, and difficulties associated with measuring certain aspects of return on investment. For example, measuring long-term cost savings could be problematic because some projects could affect platforms and systems that were not part of the immediate transition effort. As a result, the best place to start may be with high-level or broad metrics or narratives that focus on the spectrum of benefits and cost savings being achieved through the program, complemented by more specific quantifiable metrics that do not require enormous effort to develop and support, such as time saved in transition or short-term cost savings. At this time, however, the transition programs have limited measures to gauge individual project success and program impact or return on investment over the long term. 
At best, they collect after-action reports that describe the results of transition projects and occasionally identify some cost savings, but not in a consistent manner. In addition, there are inconsistencies in how the reports are being prepared, reviewed, and used. The Quick Reaction Fund program manager, in fact, had trouble simply getting projects to submit after-action reports. Officials from all three transition programs we reviewed, as well as higher-level officials, agreed that they should be doing more to capture information regarding return on investment for the programs. They also agreed that there may already be readily available starting points within DOD. For example, the Foreign Comparative Testing Program has established metrics to measure the health, success, and cost-effectiveness of the program and has developed a database to facilitate return on investment analyses. The program also captures general performance enhancements in written narratives. The program has refined and improved its metrics over time and used them to develop annual reports. The specific metrics established by the FCT program may not be readily transferable to DACP, TTI, or the Quick Reaction Fund because the nature of FCT projects is quite different—the technologies themselves are more mature, and cost savings are achieved by virtue of the fact that DOD essentially avoids the cost of developing the technologies rather than applying the technologies to improve larger development efforts. However, the process by which the program came to identify useful metrics, as well as the automated tools it uses, could be valuable to the other transition programs. In addition, DOD has asked the Naval Postgraduate School to study metrics that would be useful for assessing the ACTD program. The results of this study may also serve as a starting point for the transition programs in developing their own ways to assess return on investment. 
Conclusions The ability to spur and leverage technological advances is vital to maintaining DOD’s superiority over others and to improving and even transforming how military operations are conducted. The three new transition programs are all appropriately targeted on what has been a critical problem in this regard—quickly moving promising technologies from the laboratory and commercial environment into actual use. Moreover, by tailoring processes and criteria to focus on different objectives, whether saving time or money or broadening the industrial base, DOD has had an opportunity to experiment with a variety of management approaches and criteria that can be used to help solve transition problems affecting the approximately $69 billion spent annually on advanced stages of technology development. Already, it is evident that an element missing from all three programs is good performance measurement. Without this capability, DOD will not be able to effectively assess which approaches are working best and whether the programs individually or as a whole are truly worthwhile. It is also evident that having well-established tools for selecting and managing projects, as well as for communication between technology developers and acquirers, helps programs reduce risk and achieve success, and that all three programs have opportunities to strengthen their capabilities in this regard. In light of its plans to increase funding for the three programs, DOD should consider actions to strengthen selection and management capabilities, while taking into account the resources needed to implement them as well as their impact on the programs’ ability to maintain flexibility. 
Recommendations for Executive Action We recommend that the Secretary of Defense take the following five actions: To optimize DOD’s growing investment in the Technology Transition Initiative, the Defense Acquisition Challenge Program, and the Quick Reaction Fund, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to develop data and measures that can be used to support assessments of the performance of the three transition programs as well as broader assessments of the return on investment that would track the long-term impact of the programs. DOD could use measures already developed by other transition programs, such as FCT, as a starting point, as well as the results of its study on performance measurement being conducted by the Naval Postgraduate School. To complement this effort, we recommend that DOD develop formal feedback mechanisms, consisting of interim and after-action reporting, as well as project reviews if major deviations occur in a project. Deviations include, but are not limited to, changes in the technology developer, acquirer, or user, or the technology developer’s inability to meet cost, schedule, or performance parameters at key points in time. We also recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to implement the following, as appropriate, for each of the transition programs: (1) formal agreements to solidify up-front technology development agreements related to cost, schedule, and performance parameters that must be met at key points in time and (2) confirmation of technology readiness levels as part of the proposal acceptance process. In addition, we recommend that DOD identify and implement mechanisms to ensure that transition program managers, developers, and acquirers are able to better communicate and collectively identify and resolve problems that could hinder technology transition. 
There may be opportunities to strengthen communication by improving the structure and content of interim progress meetings and possibly even designating individuals to act as deal brokers. Lastly, as DOD considers solutions to broader technology transition problems, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to assess how the Technology Transition Council can be better used. Agency Comments and Our Evaluation DOD provided us with written comments on a draft of this report. DOD partially concurred with four of the five recommendations and concurred with one recommendation. DOD only partially concurred with four of the recommendations because it does not believe the Quick Reaction Fund fits the definition of a transition program. However, we continue to believe it is important for DOD to institute better management controls and have better visibility over the Quick Reaction Fund as it increases its investment in this program over the next several years. DOD’s comments appear in appendix I. DOD partially concurred with our recommendation that the programs develop data and measures that can be used to support assessments of the performance of the three transition programs as well as broader assessments of return on investment that would track the long-term impact of the programs. DOD agreed that performance measures for the DACP and TTI programs could be improved but does not believe that measuring the impact of the Quick Reaction Fund is necessary because it does not technically fit the definition of a transition program. We disagree. DOD should track the progress of its various programs to determine whether a program is worthwhile and should be continued, whether it should receive additional funding, or whether changes should be made in the selection or implementation process that could result in better outcomes. 
Further, failure to track even the most basic information, such as the number of projects completed, could leave DOD unable to manage the program properly and could result in poor stewardship of taxpayer money. DOD partially concurred with our recommendation that the three programs develop formal feedback mechanisms consisting of interim and after-action reporting, as well as project reviews if major deviations occur in a project. DOD agrees that the TTI and DACP can be improved and has recently taken steps in this regard. However, DOD believes that, due to the limited scope and duration of Quick Reaction Fund projects, formal feedback mechanisms may not be necessary for this program. We believe that regular feedback on the progress of each project is important to help program managers mitigate risk. As stated in the report, the Quick Reaction Fund program manager was unaware that one project ran out of funding prior to field testing the technology. Had the program manager been aware of the problem, money that had not yet been allocated could have been used to finish the project. In addition, based upon our discussions with the current program manager, DOD is planning to require monthly status reports for funded projects. DOD partially concurred with our recommendation that the programs implement, as appropriate: (1) formal agreements to solidify up-front technology development agreements related to cost, schedule, and performance parameters that must be met at key points in time and (2) confirmation of technology readiness levels as part of the proposal acceptance process. DOD indicated that it recently implemented Technology Transition Agreements for the TTI, and that the DACP program also uses formal agreements. However, DOD does not believe formal agreements are necessary for the Quick Reaction Fund because it is not intended to be a transition program. Also, it does not believe TRLs should be a factor in the proposal acceptance process. 
As stated in the report, we agree that formal agreements may not be appropriate for Quick Reaction Fund projects. However, TRLs should be considered during the selection process. Since the goal of this particular program is to prototype a new technology in 12 months or less, it is important that DOD have some assurance that the technology is ready to be field tested. As discussed in the report, the Quick Reaction Fund had to cancel one project, after $1.5 million had already been spent, because the technology had achieved only TRL 3. Had the selecting official known the TRLs of each proposed project during the selection phase, he might have decided to fund another, more mature project instead. DOD also partially concurred with our recommendation that the programs identify and implement mechanisms to ensure that transition program managers, developers, and acquirers better communicate and collectively identify and resolve problems that could hinder technology transition. DOD established a Transition Overarching Integrated Product Team earlier this year to provide the necessary oversight structure to address this issue, but it does not believe this recommendation applies to the Quick Reaction Fund program. We believe that if DOD receives monthly status reports on the Quick Reaction Fund, as planned by the program manager, it should be in a good position to identify and resolve problems that could hinder the testing of new technology prototypes. DOD concurred with our recommendation that the Under Secretary of Defense (Acquisition, Technology and Logistics) assess how the Technology Transition Council can be better used as DOD considers solutions to broader technology transition problems. Although DOD did not indicate how it plans to do this, the Deputy Under Secretary of Defense (Advanced Systems and Concepts) has a goal that the Council not limit itself to just the TTI program, but look at broader technology transition issues across DOD. 
We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (937) 258-7915. Key contributors to this report were Cristina Chaplain, Cheryl Andrew, Art Cobb, Gary Middleton, and Sean D. Merrill.
The Department of Defense (DOD) and Congress both recognize that defense technology innovations sometimes move too slowly from the lab to the field. Three new programs have recently been created in DOD to help speed and enhance the transition of new technologies. A report accompanying the fiscal year 2003 National Defense Authorization Act required GAO to review two of these programs--the Technology Transition Initiative (TTI) and Defense Acquisition Challenge Program (DACP). The first is designed to speed transition of technologies from DOD labs to acquisition programs and the second is designed to introduce cost-saving technologies from inside and outside DOD. We were also asked to review the Quick Reaction Fund, which is focused on rapidly field testing promising new technology prototypes. We assessed the impact the programs had on technology transition and the programs' selection, management and oversight, and assessment practices.

The ability to spur and leverage technological advances is vital to sustaining DOD's superiority over others and to improving and even transforming how military operations are conducted. The three new transition programs we reviewed are all appropriately targeted on what has been a critical problem in this regard--quickly moving promising technologies from the laboratory and commercial environment into actual use. Moreover, by tailoring processes and criteria to focus on different objectives, whether saving time, saving money, or broadening the industrial base, DOD has had an opportunity to experiment with a variety of management approaches and criteria that can be used to help solve transition problems affecting the approximately $69 billion spent over the past 3 years on later stages of technology development. However, it is too soon for us to determine the impact the three new DOD technology transition programs are having. 
At the time of our review, the programs--the TTI, DACP, and Quick Reaction Fund--had completed only 11 of 68 projects funded in fiscal years 2003 and 2004; of those, only 4 were providing full capability to users. Additionally, the programs have limited measures to gauge success of individual projects and return on investment. Nonetheless, reports from the programs have pointed to an array of benefits, including quicker fielding of technological improvements, cost savings, and the opportunity for DOD to tap into innovative technologies from firms that are new to defense work. Some sponsored technologies are bringing benefits to warfighters, such as a small, unmanned aircraft that can detect chemical and biological agents, and a device the size of an ink pen that can be used to purify water on the battlefield or in disaster areas. Furthermore, DOD officials credit the programs with giving senior leaders the flexibility to rapidly address current warfighter needs and for highlighting smaller technology projects that might otherwise be ignored.

Long-term success for the programs likely will depend on how well the programs are managed and overseen. The programs must have effective processes for selecting the best projects, and management and oversight processes that will catch potential problems early. Thus far, of the three programs, the DACP has adopted the most disciplined and structured process for selecting and managing projects, and has encountered few problems managing projects. However, the program has had some difficulties processing the large number of proposals it receives. The TTI has also established selection criteria and processes, but the extent to which it is reaching its intended audience is unclear, and it has had less success in tracking its projects. The Quick Reaction Fund has the least structured processes of the three programs--a deliberate approach seen as providing the flexibility needed to field innovations rapidly. 
It has had some difficulty selecting, managing, and tracking projects.
Background DOD has been successfully using unmanned air vehicles such as the Global Hawk and Predator to gather intelligence and perform surveillance and reconnaissance missions for military purposes. Beginning in the mid-1990s, DOD began to conceive of a different type of unmanned air vehicle—the unmanned combat air vehicle or UCAV—which would be capable of performing dangerous, lethal combat missions, including suppression of enemy air defenses (SEAD). Unlike other unmanned air vehicles, UCAVs would carry weapons as well as electronic jammers to confuse enemy radars. DOD also envisioned that UCAVs would operate more autonomously than other unmanned air vehicles, requiring little or no human input from ground stations to complete their missions or change flight paths. In addition, UCAVs would be stealthy and capable of flying in groups or with manned aircraft. The potential of these weapons has garnered high interest from both Congress and DOD. In the fiscal year 2001 Defense Authorization Act, Congress set a goal that by 2010, one-third of DOD’s deep strike force be unmanned in order to perform this dangerous mission. In addition to the potential for saving lives on risky missions, the UCAV could provide mission capability at less cost than manned aircraft. Program officials initially aimed for the UCAV’s acquisition cost to be one-third of the joint strike fighter’s and its operations and support costs to be at least 75 percent lower. Because of the promise of unmanned air vehicles, the Office of the Secretary of Defense has established a joint-service unmanned air vehicles task force to help promote the development and fielding of these systems, including making sure that there is multiservice cooperation. This task force is responsible for outlining the future of DOD’s unmanned air vehicles. 
In the late 1990s, DARPA and the Air Force began pre-acquisition efforts to conduct advanced technology demonstrations to show the technical feasibility of using UCAVs to penetrate deeply into enemy territory to attack enemy targets. Boeing Corporation was selected in 1999 to develop and demonstrate two demonstrator UCAVs—designated X-45A. (See fig. 1.) The original DARPA-Air Force UCAV plan also called for building and demonstrating two prototypes during the pre-acquisition phase, designated X-45B, that would be larger and incorporate low observable technology. (See fig. 2.) These air vehicles were expected to be more representative of the operational air vehicle that the Air Force planned to field. Initially, the Air Combat Command, which establishes mission and performance requirements, determined that the X-45B should be focused on performing SEAD missions within the air superiority mission area. This decision was made to address the limited inventory of current assets in the air superiority mission area and to counter the challenges and deficiencies associated with conducting SEAD missions. As of February 2003, 55 of 160 planned demonstrations have been completed. Most of the demonstrations designed to validate the basic flight characteristics of the air vehicle have been completed. Only a small number of the demonstrations needed to validate the ability of a single air vehicle to perform a preemptive destruction mission have been completed. The more demanding demonstrations—those designed to demonstrate technologies and software for highly autonomous, multivehicle operations (with both manned aircraft and unmanned air vehicles), and the more difficult aspects of the SEAD mission against mobile targets—have not begun. 
Importance of Matching Resources with Requirements before Product Development The product development decision that DOD is approaching for its UCAV program represents a commitment by the product developer to deliver a product at established cost, schedule, and performance targets and identifies the amount of resources that will be necessary to do so. Our studies of leading companies have shown that when requirements and resources were matched before product development started, the development was more likely to meet performance, cost, and schedule objectives. When this match took place later, programs encountered problems such as increased cost, schedule delays, and performance shortfalls. A key to achieving this match is to ensure that the developer has the resources—technology, design and production knowledge, money, and time—needed to design, test, manufacture, and deliver the product. It is not unusual for a customer to initially want a high-performing product that does not cost much or take too long to develop. But such an expectation may exceed the developer’s technology or engineering expertise, or it may be more costly and time-consuming to create than the customer is willing to accept. Therefore, a process of negotiations and trade-offs is usually necessary to match customer requirements and developer resources before firming requirements and committing to product development. Our work has shown that successful programs will not commit to product development until needed technologies are ready to satisfy product requirements. In other words, technology development is separated from product development. If technology is not sufficiently mature at the beginning of a product development program, the program may need to spend more time and money than anticipated to bring the technology to the point at which it can meet the intended product’s performance requirements. Testing is perhaps the main instrument used to gauge technology maturity. 
Testing new technologies before they enter into a product development program, as DOD is doing now by demonstrating the two X-45A demonstrators, enables organizations to discover and correct problems before a considerable investment is made in the program. By contrast, problems found late in development may require more time, money, and effort to fix because they may require more extensive retrofitting and redesign as well as retesting. These problems are further exacerbated when the product development schedule requires a number of activities to be done concurrently. The need to address one problem can slow down other work on the weapon system. Figure 3 illustrates the timing of the match between a customer’s requirements and a product developer’s resources for successful and problematic programs we have reviewed. Gap between UCAV Resources and Requirements Was Increased in 2002 During 2002, significant requirements were added to the UCAV program after the schedule was accelerated by 3 years. This step put the program at considerable risk because it increased the gap between requirements and resources. The program added two new requirements—one for electronic attack capability and one for increased flying range—while reducing a critical resource, time, to mature key UCAV technologies. As a result, the Air Force and DARPA anticipated that most of the 15 key technologies, system attributes, or processes supporting the aircraft’s basic capabilities would move from all low risk to mostly medium risk of achieving desired functionality by the time a product development decision was reached; one would be at high risk. UCAV Requirements Increased During 2002 The UCAV program’s original requirements were difficult to meet because they posed significant but manageable technical challenges to building an air vehicle that is, at once, affordable throughout its life cycle, highly survivable, and lethal. 
In the last year, both air vehicle and mission equipment requirements were increased. The original requirements called for a UCAV that would have a low life-cycle cost, survivable design; a mission control station that can fly single or multiple UCAVs at one time; a secure command, control, and communications network; completely autonomous vehicle operation from takeoff to landing; off-board and on-board sensors with which to locate targets; and human involvement in targeting, weapons delivery, and target damage assessment. Once these requirements were established, the UCAV contractor identified 15 technologies, processes, and system attributes the UCAV would have to possess to meet those requirements. These elements became a way to gauge the level of knowledge (in terms of risk) that the contractors had. Right now, technologies that support some of these capabilities, such as autonomous operation, are not yet mature. We used their risk assessments and criteria for the 15 technologies, processes, and system attributes to determine current system integration risk as well as technology risk. We believe technology readiness levels would have provided a more precise gauge of technology maturity, but program officials did not provide them. Currently, 10 technologies, processes, and system attributes are considered to be medium risk by the Air Force and DARPA. Medium risk means that there is a 30 to 70 percent probability of achieving the desired functionality for the initial operational UCAV. Moreover, 5 are currently considered to be high risk, that is, there is less than 30 percent probability of achieving their functionality. Table 1 provides the current risk level of the 15 UCAV technologies, processes, and system attributes for original UCAV objectives. 
Originally, the UCAV program was tasked with providing an air vehicle that would perform both preemptive and reactive SEAD missions against fixed and mobile targets—missions that are extremely demanding from both a mission and capability perspective. The reactive mission is more demanding than the preemptive mission because the UCAV will have less time to find and engage mobile targets. When DOD decided to accelerate delivery of the initial UCAVs, the program was relieved of meeting the requirement for reactive SEAD, making for a better balance between requirements and available resources. However, requirements were subsequently added that increased the challenge of matching requirements with resources. These requirements include an electronic attack mission and increased combat range and endurance. Electronic attack: DOD’s electronic attack mission is currently performed by the Navy’s aging EA-6B Prowler aircraft. Electronic attack confuses enemy radars with electronic jammers. In 2001, the Navy conducted an analysis of alternatives for replacing the Prowler. Air Combat Command planners determined that the UCAV could fill portions of this role quickly and added the requirement to the program. As currently structured, the program does not plan to demonstrate electronic attack technologies on UCAV demonstrator or prototype vehicles before product development begins. According to program officials, the biggest additional challenge associated with this change is the integration of existing electronic attack technologies into a low observable air vehicle. Program officials are also concerned that downsizing and repackaging current electronic warfare technology to fit into a smaller space, with sufficient cooling and power, and incorporating antennas and other apertures into the low observable signature of the UCAV may pose additional challenges. Program officials also stated that the addition of electronic attack adds uncertainty to overall program costs. 
It may reduce the number of UCAVs planned for initial production because additional work will be required to integrate this capability into air vehicles, given the current schedule and funding. Longer range and endurance: According to program officials, Air Force leadership would like to have a larger UCAV with longer range and greater endurance than that currently being designed in the X-45B to perform strategic lethal strike and nonlethal intelligence, surveillance, and reconnaissance missions. However, increasing the UCAV’s range forced the program to abandon a key design concept expected to lower operating and support costs significantly below those of a manned aircraft—one of the program’s original critical requirements. The initial UCAV concept was a design that allowed the wings to be detached from the air vehicle and stored in a crate for up to 10 years, a concept which was expected to contribute to a greater than 75 percent reduction in operation and support costs. When needed, the UCAV could be shipped to the theater of operations, assembled, and readied for use. Adding range and endurance required redesigning the air vehicle with fixed or permanently attached wings so that the inside of the wings could be used as fuel tanks. This would allow the UCAV to carry more fuel and give it the ability to fly farther. Since the wings would no longer be detachable, the long-term storage approach had to be abandoned. Schedule Compression Created Greater Technical and Cost Risks The schedule for the UCAV program has changed several times during the pre-acquisition phase. In 2000, the Air Force anticipated that product development would start in 2007 and initial deliveries would begin in 2011. After several schedule changes, the Air Force set product development in 2004 and initial delivery of aircraft in 2007. (See table 2.) The net effect of the changes was a 3-year reduction in time to mature technologies before product development. 
This reduction created the potential for costly and time-consuming rework in product development since the Air Force would still be in the process of maturing technologies as it undertook other product development activities. Moreover, the concurrency that came with the schedule changes would have left little room for error. Under the original schedule, the UCAV program would essentially have had 3 more years prior to the beginning of product development to test and mature technologies. As a result, all 15 of the technologies, processes, and system attributes would be at low risk by the launch of product development, indicating a match between requirements and resources. By contrast, under the late 2002 schedule, the program would not have enough time to mature technologies to a low risk prior to the launch of product development in 2004. In fact, most technologies, processes, and system attributes would still be either medium or high risk by the time product development was launched, indicating that requirements exceeded resources. The overlap of technology development and product development, introduced by the acceleration of product development, also raised risks for the UCAV program. The late 2002 schedule allowed less time for discovering and correcting problems that may have arisen during technology demonstrations prior to product development launch. Importantly, all of the air vehicle military utility demonstrations would have been completed after the beginning of product development. Under the original schedule most of these demonstrations would have been completed prior to the start of product development. Increasing the overlap of technology development and product development added risk to the program. Problems found during those demonstrations might have to be fixed during product development—problems made more likely given the lower maturity level of the key technologies. 
Figure 4 shows that the concurrency between technology development and product development increased by approximately 18 months under the late 2002 schedule—from an approximate 6-month overlap to an approximate 24-month overlap. Also, this acceleration increased the program risk for the start of product development from all low to mostly medium risk for the 15 technologies, processes, and system attributes being tracked. As figure 4 indicates, the UCAV technology and product development phases had been shortened from a plan with little concurrency between technology and product development to a plan with significant concurrency between the two. The push to deliver the product sooner compressed the time in which technologies will be matured and integrated into the UCAV weapon system. The resulting approximate 24-month overlap between technology and product development caused by accelerating the beginning of UCAV’s product development program had the potential to create “late cycle churn,” or the scramble to fix significant problems discovered late. We have found that when problems are uncovered late in product development, more time and money are required to rework what is already finished. Recent DOD Decision to Restructure Program Can Reduce Risks The Office of the Secretary of Defense recently restructured the UCAV program to a joint program structure to meet the needs of the Navy as well as the Air Force. The Office of the Secretary of Defense cancelled plans to build the X-45B prototypes and now anticipates that the joint UCAV program will focus on a family of vehicles derived from the larger Boeing X-45C and Northrop Grumman X-47B prototype designs. The details of the program are still being decided, but it appears likely that while content will increase, the start of product development will be delayed. This approach represents a substantial improvement over the late 2002 plan in that it lowers risks significantly. 
However, keeping requirements and resources in balance and funding intact until product development starts will be a challenge. The proposed prototypes will be larger than the X-45A or X-45B and thus more capable of supporting requirements for greater combat range and endurance. Also, both the proposed X-45C and X-47B prototypes will have a larger wing area, allowing them to carry increased payload and internal fuel. Just as the X-45B would have been more capable than the X-45A, the X-45C is projected to be more capable than the X-45B, as shown in table 3 below. We did not obtain specific data on the X-47B prototype. Further, by adopting a design that increases internal space on the air vehicle, DOD could more readily incorporate electronic attack equipment and other sensors and avionics. In addition, the plan would reintroduce competition into the UCAV program by assessing two different designs. This competition would increase DOD’s ability to pursue the best technical solution. On the other hand, acquisition costs for the larger air vehicles are expected to increase, as will operating and support costs, due to the abandonment of the storage-in-the-box concept. Also, meeting the Navy’s need for carrier-based operations could place additional demands on the air vehicle design. Figures 5 and 6 show illustrations of the Boeing and Northrop Grumman proposed joint UCAV designs. In addition, more time will be added under the joint program to conduct demonstrations by delaying the start of product development by several years. Some of this added time—up to a year—will be needed to develop and deliver the new prototypes. As shown in figure 7, delaying the beginning of product development could reduce technical risks since DOD would have more time to test prototypes. However, these delays may postpone initial operational capability beyond what DOD and the Congress originally anticipated, which was at the end of the decade. 
But recognizing this up front to put the program on a sounder footing may be preferable to proposing a higher risk approach—like the 2002 plan—that is more susceptible to unplanned delays. Drawing on the experience of the UCAV to date as well as other programs, DOD will face challenges in keeping the requirements for the new joint design balanced with available resources. One challenge relates to requirements. As mentioned above, more demands could be made of the air vehicle to meet the needs of both the Air Force and the Navy. Prior to the new joint approach, the Navy’s top mission for the UCAV had been conducting intelligence, surveillance, and reconnaissance. When considering the Air Force’s missions of reactive and preemptive SEAD and electronic attack, it is foreseeable that the program will face pressures to meet multiple missions. One approach to meeting this challenge is to delay the start of product development until resources—such as technology maturity—are available to meet all requirements. This would delay the program significantly and could raise funding issues. Alternatively, the challenge could be met by adhering to an evolutionary acquisition approach and developing the different mission capabilities in sequence, so that the initial capability can be fielded sooner. Another challenge relates to funding, an area in which past and present programs have been vulnerable. Moreover, other programs that dwarf the UCAV program—such as the F-22 and the Joint Strike Fighter—will be competing for investment funds at the same time. We have found in earlier work that although the Office of the Secretary of Defense provides some funding for advanced technology development efforts, the military services and defense agencies are ultimately responsible for financing the acquisition and support of equipment or items that result from the efforts. At times, however, the military services have not wanted to fund the transition process. 
This action either slowed down the acquisition process or resulted in no additional procurements. Specifically, military services have not wanted to fund technologies focusing on meeting joint requirements because those technologies do not directly affect their individual missions, and there are specific projects that they would prefer to fund. At the same time, Office of the Secretary of Defense officials told us that they lack a mechanism for ensuring that decisions on whether to acquire items with proven military utility are made at the joint level, and not merely by the gaining organizations, and that these acquisitions receive the proper priority. The UCAV has already experienced some funding challenges. Recently, during preparations for the fiscal year 2004 budget cycle, the Air Force budget proposal eliminated all UCAV funding beyond that needed to finish work on two prototypes already on contract. The Air Force based this action on its belief that the X-45B UCAV was too small for the role the Air Force believed was most needed. To keep the UCAV program on track, the Office of the Secretary of Defense stepped in to resolve requirements and funding challenges and maintained strong oversight of the program. While the Office of the Secretary of Defense increased the challenge by accelerating the delivery date for the first UCAVs, it allowed the Air Force to defer the reactive SEAD requirement and fended off more radical changes to the UCAV’s missions. In addition, the Office of the Secretary of Defense has taken the lead in brokering the agreement on the joint program proposal, adding development time to the proposal and working out a joint effort that could result in a single design for the Air Force and Navy. Sustaining the role played by the Office of the Secretary of Defense is likely to be important to meeting future challenges the UCAV may face. 
Conclusion UCAVs offer a potential for DOD to carry out dangerous missions without putting lives at stake and to find cost-effective ways of replacing DOD’s aging tactical aircraft fleet. However, up until recently, pre-acquisition decisions had collectively increased requirements and reduced resources, reducing the program’s chances of success. The decision to create a joint program could make for a better program if the gap between resources and requirements can be closed. The joint program faces a challenge in balancing the demands of multimission requirements with the desire to field an initial capability in a reasonable time. Accepting increased requirements and accelerating fielding at the same time, as was previously done, will hinder the ability of the joint UCAV program to succeed. The program also faces the challenge of sustaining funding support from both services at a time when it is competing against other large aircraft investments. Regardless of which direction the new program takes, the role played by the Office of the Secretary of Defense will continue to be instrumental in helping to negotiate requirements, to ensure that the right resources are provided, and to make further difficult tradeoff decisions throughout the program. Recommendations for Executive Action We recommend the Secretary of Defense develop an acquisition approach for the joint UCAV program that enables requirements and resources to be balanced at the start of product development. This approach should provide mechanisms for brokering the demands of multiple missions, for ensuring that the product developer maintains a voice in assessing the resource implications of requirements, and for preserving the integrity of evolutionary acquisition. Reinstating the use of technology readiness levels may be very valuable in facilitating necessary tradeoffs. 
We also recommend that the Secretary formalize the management role performed by his office and the attendant authority to perform that role; ensure that the services are fully involved in the process; and work to develop an efficient approach to transitioning the UCAV from DOD’s technology development environment to the services’ acquisition environment so the needs of the war fighter can be met more quickly. Agency Comments and Our Evaluation DOD provided us with written comments on a draft of this report. The comments appear in appendix I. DOD provided separate technical comments, which we have incorporated as appropriate. DOD concurred with our recommendation that the Secretary of Defense develop an acquisition approach for the joint UCAV program that enables requirements and resources to be balanced at the start of product development. It has directed the formation of a Joint Systems Management Office to define near-term requirements and to conduct robust operational assessments. DOD partially concurred with our recommendation that the Secretary formalize the management role performed by his office and the attendant authority to perform that role; ensure that the services are fully involved in the process; and work to develop an efficient approach to transitioning the UCAV to the services’ acquisition environment so the needs of the war fighter can be met more quickly. DOD noted that the Secretary is organizing the management function as he deems suitable. DOD did state that the department’s UAV Planning Task Force would continue to provide oversight over all DOD UCAV program activities. We believe this is important because it was this organization that was instrumental in refocusing the DOD UCAV effort into a joint program that we believe will significantly improve the probability of successfully fielding UCAVs. 
Scope and Methodology To achieve our objectives we examined Air Force UCAV program solicitations and agreements, the demonstration master plan, trade studies, technology demonstration plans and results, status of critical technologies, plans to further enhance maturity of critical technologies, and plans to move UCAV to the Air Force for product development. We interviewed DARPA and Air Force program managers and technical support officials at DARPA program offices in Arlington, Virginia, and the Air Force’s Research Lab and Aeronautical Systems Center at Wright Patterson Air Force Base, Dayton, Ohio, to document current development efforts and the maturity status of critical technologies and other attributes. To determine options that may be available to UCAV program managers in making changes to requirements or resources, we examined the program’s risk assessments of its 15 technologies, processes, and system attributes to identify risk associated with beginning product development at different points in time. We interviewed Air Force Air Combat Command officials at Langley Air Force Base, Virginia, concerning UCAV requirements, and air staff officials in Arlington, Virginia, concerning program objectives and resources. We also interviewed a number of officials from the Office of the Secretary of Defense having responsibility for UCAV oversight and funding. We conducted our work from February 2002 through May 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense, the Secretaries of the Air Force and Navy, the Director of the Office of Management and Budget, and congressional defense committees. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-2811 if you or your staff have any questions concerning this report. 
Key contributors to this report were Mike Sullivan, Jerry Clark, Matt Lea, Kris Keener, Travis Masters, Cristina Chaplain, Lily Chin, Bob Swierczek, and Maria-Alaina Rambus.

Appendix I: Comments from the Department of Defense
The Department of Defense (DOD) is developing a new unmanned combat air vehicle (UCAV) that can suppress enemy air defenses and conduct other air-to-ground attacks, particularly against heavily defended targets. Because it may perform these missions at a relatively low cost, the UCAV could be used to replace some of DOD's aging tactical aircraft fleet. A key to the UCAV's success will lie in DOD's ability to match users' needs, or requirements, with the developer's resources (technology and design knowledge, money, and time) when product development begins. Our work shows that doing so can prevent rework and save both time and money. Therefore, we assessed DOD's ability to make this match. GAO conducted its work on the basis of the Comptroller General's authority and addresses the report to the Subcommittee on Tactical Air and Land Forces, House Committee on Armed Services, because of its interest in and jurisdiction over the program. The UCAV program's original performance objectives posed manageable challenges to build an affordable, highly survivable, and lethal weapon system. The Air Force, however, added requirements for electronic attack and increased flying range after DOD accelerated the program's product development schedule by 3 years. These changes widened the gap between the customer's requirements and the developer's resources, specifically time, reducing the probability that the program would deliver production aircraft on cost, on schedule, and with anticipated performance capabilities. DOD has recently decided to adopt a new joint service approach to UCAV development that provides more time to close the requirements-resources gap before product development starts. It appears DOD may add new content because it is proposing to build a new prototype that would be a larger air vehicle, capable of flying and carrying out combat missions for longer periods of time.
To reduce technical risk, DOD anticipates delaying the start of product development for several years in order to address new requirements. As the gap between resources and requirements widened in 2002, the risks projected for the start of product development with UCAV's 15 technologies, processes, and system attributes increased significantly. The new joint plan brings the risks back down. This action also brings competition back into the UCAV development effort. DOD will still face challenges in controlling joint, multimission requirements and ensuring that both services continue to provide funds for the program while also funding other large aircraft investments. If these challenges are not met, the gap between requirements and resources could resurface. DOD's role will continue to be instrumental in helping to negotiate requirements, ensure resources are in place, and make difficult program trade-offs.
Background

The BMDS is designed to defend the United States homeland and our regional friends and allies against attacks from ballistic missiles of all ranges—short, medium, intermediate, and intercontinental. Since ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is developing a variety of systems, known as elements or programs, that, when integrated, provide multiple opportunities to destroy ballistic missiles in flight. The BMDS includes space-based sensors; ground- and sea-based radars; ground- and sea-based interceptor missiles; and a command and control system that provides communication links to the sensors and interceptor missiles.

Description of BMDS Elements

The BMDS comprises several systems, which MDA calls elements or programs. Table 1 provides a brief description and status of the BMDS elements assessed in this report. See appendixes II-IX for more detailed information.

MDA's Acquisition Flexibilities and Steps Taken to Address Transparency

When MDA was established in 2002, it was granted exceptional flexibility in setting requirements and managing the acquisition. The BMDS was to be developed as a single program designed to quickly deliver a set of integrated defensive capabilities. This decision deferred application of DOD acquisition policy to the BMDS until a mature capability is ready to be handed over to a military service for production and operation. Because the BMDS program has not yet formally entered the DOD acquisition cycle, application of laws and policies that are designed to facilitate oversight and accountability of major defense acquisition programs and that are triggered by phases of this cycle, such as the engineering and manufacturing development phase, has also effectively been deferred.
These laws and policies include:

- Documenting program parameters in an acquisition program baseline that has been approved by a higher-level DOD official prior to the program's entry into the engineering and manufacturing development phase or program initiation, whichever occurs later.
- Measuring the program against the approved baseline, or obtaining the approval of a higher-level acquisition executive before making changes.
- Reporting certain increases in unit cost measured from the original or current program baseline.
- Obtaining an independent life-cycle cost estimate prior to beginning engineering and manufacturing development, and/or production and deployment.
- Regularly providing detailed program status information to Congress, including information on cost, in Selected Acquisition Reports.

Congress and DOD have taken actions to address oversight of MDA. For example, in the NDAA for Fiscal Year 2008, Congress required MDA to establish acquisition cost, schedule, and performance baselines for each system element that has entered the equivalent of the engineering and manufacturing development phase of acquisition or is being produced or acquired for operational fielding. MDA reported its newly established resource, schedule, test, operational capacity, technical, and contract baselines for certain BMDS components for the first time in its June 2010 BMDS Accountability Report (BAR). Since that time, Congress has continued to alter MDA's baseline reporting requirements in the NDAAs for Fiscal Years 2011 and 2012.
Additionally, to enhance oversight of the information provided in the BAR, MDA continues to incorporate suggestions and recommendations from GAO on the content and clarity of the information reported in the BAR, including: 1) the addition of information to explain the major changes experienced by each program over the past year; 2) the addition of buy/delivery information for each program that has advanced to Product Development, Initial Production, or Production; 3) a description of cost items not included in program Resource Baselines; and 4) a summary of critical schedule milestones with their respective initial baseline dates and dates from the previous BAR to facilitate tracking.

High-Risk Approach to Acquisitions Has Affected Certain Outcomes

Successful programs that deliver promised capabilities for the estimated cost and on schedule use a disciplined, knowledge-based approach in which knowledge supplants risk over time. In our past work examining weapon system acquisition and best practices, we have found that successful commercial firms pursue an acquisition approach that is anchored in knowledge, whereby high levels of product knowledge are demonstrated at critical points in the acquisition process. This approach recognizes that programs require an appropriate balance between schedule and risk, but it does not include an undue amount of what is often referred to as acquisition concurrency, in which technology development overlaps with product development, or product development overlaps with production of a system. Instead, programs take steps to gather knowledge prior to moving from one acquisition phase to another. These steps include:

- Demonstrating that a program's technologies are mature and that allotted resources match the program's requirements before deciding to invest in product development.
- Demonstrating that its designs are stable and perform as expected before deciding to build and test production-representative prototypes.
- Demonstrating that its production processes are in control and meet cost, schedule, and quality targets before deciding to produce first units.

Since 2002, MDA has developed, demonstrated, and fielded a limited homeland and regional ballistic missile defense capability, but it has fallen short of its goals, in part because of its acquisition practices. Some of these practices include initiating new programs without robustly assessing alternative solutions, incorporating high levels of concurrency, and fielding capabilities prior to completing flight testing. While some concurrency is understandable, committing to product development before requirements are understood and technologies are mature, as well as committing to production and fielding before development is complete, is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. We previously found that although these practices enabled MDA to quickly ramp up efforts in order to meet tight, presidentially directed deadlines, they were also high risk and resulted in problems that caused some programs to be cancelled or significantly disrupted. For example: In July 2013, we found that the Secretary of Defense decided to cancel an MDA satellite system program, called the Precision Tracking Space System, based on the results of a departmental review of the program, which determined that the program had significant technical, programmatic, and affordability risks. We previously found that MDA did not consider a broad range of alternatives prior to its decision to start the program, was relying on a highly concurrent acquisition approach despite significant technical and operational challenges, and was projecting a lower total program cost by increasing risk to the warfighter. Although MDA gained some technical knowledge from the effort, it also expended significant resources—approximately five years and $231 million.
The sensor coverage gaps, such as persistent tracking from space, that the program was intended to address persist. In April 2014, we found that a series of GMD test failures, in conjunction with a highly concurrent CE-II development, production, and fielding strategy, caused major disruptions to the program. Because the program moved forward with producing and fielding interceptors before completing its flight test program, test failures exacerbated the disruptions, causing the program to fall several years behind on its flight test program and increasing the cost to demonstrate the CE-II from $236 million—the cost of GMD's first CE-II flight test—to $1.981 billion—the cost to resolve the test failures and implement a retrofit program. See appendix VII for more detailed information.

MDA Conducted Several Key Tests and Continued to Deliver Assets, but Did Not Achieve All Planned Goals for Fiscal Year 2014

MDA made progress, but it did not achieve all of its planned test and asset delivery goals for fiscal year 2014, and it has not fully met its test goals since first reporting baselines in its 2010 BAR. MDA uses ground, non-intercept, and intercept tests to gain knowledge about the operational effectiveness, suitability, and survivability of an asset or capability. Ground tests use simulations and scenarios when flight testing may be impractical or cost-prohibitive. Flight tests—intercept and non-intercept—evaluate an asset's ability to defend against a specific threat. Intercept tests include active engagement of one or more targets, while non-intercept tests do not. Moreover, non-intercept tests can assess specific aspects of an asset to potentially reduce risks for future intercept tests. Completing planned testing is a key step to enable the delivery of assets and capabilities, in line with GAO best practices.
However, despite testing delays, shortfalls, and failures, MDA has continued to deliver assets. Without completing planned testing, MDA is delaying or forgoing the full breadth and depth of knowledge it planned to have attained prior to the delivery of its assets.

MDA Conducted Some Tests in Fiscal Year 2014 as Planned

In fiscal year 2014, MDA conducted four of ten planned flight tests (as shown below in table 2). It also conducted an additional flight test in June 2014 that was inserted into the schedule to retest and confirm a capability that had failed during a prior test. MDA conducted two intercept and three non-intercept flight tests in fiscal year 2014 that demonstrated an increased capability for the Aegis BMD and GMD programs. The three non-intercept tests evaluated Aegis Ashore's ability to launch and guide an SM-3 interceptor, as well as the SM-3 Block IIA interceptor's booster performance and tracking capabilities for the Aegis BMD Weapon System. One intercept test supported the Aegis BMD program's full-rate production decision for the SM-3 Block IB interceptor by demonstrating the capability to intercept a medium-range ballistic missile target. The other intercept test—FTG-06b—was inserted into the test schedule to retest and demonstrate the performance of the CE-II interceptor, which had failed its prior two attempts in 2010. MDA successfully executed FTG-06b in June 2014, a major accomplishment for the program: it was the first successful intercept attempt with the CE-II interceptor, ending a five-and-a-half-year period without a successful intercept for the GMD program. For further details about the Aegis BMD and GMD programs, see appendixes II, III, IV, V, and VII.
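As a quick illustrative check, the completion rates and cost growth discussed in this section work out as follows. All figures are quoted from this report; the code and the function name are our own and are included only to make the arithmetic explicit, not to represent any MDA or GAO tool:

```python
# Illustrative sketch only -- every figure below is quoted from this report.

def completion_rate(conducted: int, planned: int) -> int:
    """Percentage of planned flight tests actually conducted, to the nearest whole percent."""
    return round(100 * conducted / planned)

# Fiscal year 2014: 4 of 10 planned flight tests conducted.
print(completion_rate(4, 10))   # -> 40

# Fiscal year 2010: 5 of 7 planned flight tests conducted.
print(completion_rate(5, 7))    # -> 71

# CE-II demonstration cost growth: $236 million to $1.981 billion.
print(round(1981 / 236, 1))     # -> 8.4, i.e., roughly an 8.4-fold increase
```

The 71 percent figure for fiscal year 2010 cited later in this section follows directly from the five-of-seven count.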
MDA did not conduct six planned flight tests in fiscal year 2014, and it has been unable to conduct all of its planned tests since fiscal year 2010 because, as we previously reported, its test plan is ambitious and success-oriented, which makes the schedule difficult to adjust when necessary and results in frequent changes and disruptions to the test plan. MDA officials have told us that they do not plan for target failures, test failures, or potential retests when developing the test schedule, and that there is no flexibility to absorb these issues. We previously recommended that MDA include sufficient margin in its test schedule, based on recent test outcomes and forecasted testing demands, so it could better meet its testing goals. However, MDA has not implemented this recommendation. Consequently, when MDA encounters challenges, it does not have the flexibility to accommodate changes and falls short of its testing goals, which hinders oversight. According to MDA officials, the six flight tests in fiscal year 2014 were not conducted as planned for several reasons: sequestration limited the funds available for testing, targets were unavailable, and retests were needed to address previous failures. To address these challenges, MDA decided to remove or delay some planned tests. For instance, of the four Aegis BMD program tests that were planned but not conducted, one was cancelled and one was delayed due to sequestration, and two were delayed due to lack of target availability. Some of these tests are designed to assess MDA's regional ballistic missile defense approach for Europe, also called the European Phased Adaptive Approach (EPAA). As a result, MDA risks discovering performance shortfalls with some of its assets after they are fielded and declaring future phases without all of the information it initially planned to have.
Of the two GMD program flight tests that were planned but not conducted, the program cancelled one because its test objectives were met through prior tests, and the other was significantly changed, including a new name and new test objectives, after the successful FTG-06b test. MDA came closest to achieving its testing goals in fiscal year 2010, when it conducted five of seven, or 71 percent, of its planned tests (see figure 1). Each year, as MDA falls short of its testing goals due to target failures, test failures, or retests, it takes steps to recoup by delaying and removing tests. As a result, MDA is delaying, and in some cases not collecting, knowledge about an asset's capabilities and limitations prior to delivery. For example, FTX-19—a significant flight test of Aegis BMD's ability to coordinate two ships to track and engage multiple threats—has been delayed twice from its original planned date in fiscal year 2013, once to fiscal year 2014 and then again to fiscal year 2015. Additionally, since 2010, Aegis Ashore has removed five of its seven flight tests designed to assess its capability for EPAA Phase 2. According to program officials, the program is leveraging data from sea-based Aegis BMD tests, but conditions at sea differ from those on land, as do the system configurations (for more information, see appendix III). Moreover, frequent changes to the test schedule make it difficult to track what MDA has and has not accomplished in terms of testing and system capability.

MDA Delivered Some Assets in Fiscal Year 2014 as Planned

In fiscal year 2014, MDA continued to deliver its BMDS assets (see table 3). However, some of these assets were delivered without completing planned testing, which increases risks for each individual system and for the BMDS as a whole.
For example, Aegis BMD continued to deliver SM-3 Block IB interceptors—11 more than originally planned—although it is still working to address past test failures, including redesigning one of the interceptor's components. Also, THAAD delivered assets to meet urgent warfighter needs even though those assets incorporated changes to address obsolescence issues, and the changes will not be tested until the fourth quarter of fiscal year 2015. We have previously recommended that MDA synchronize its testing and asset delivery schedules to ensure that assets are tested before they are delivered. Delivering untested assets could lead to costly and time-consuming retrofits if an asset does not perform as intended. Also, all of MDA's programs have complex interactions and interdependencies, so delivering problematic or underperforming assets could affect not only the performance or capability of one system but others as well, and could compromise the overall operational performance of the BMDS.

MDA Can Do More to Improve Its Acquisition Outcomes by Reducing Risk

In fiscal year 2014, MDA undertook several risk reduction efforts designed to achieve or improve its acquisition outcomes, such as delivering assets on time that perform as expected. However, uncertainty exists as to whether the agency will achieve such outcomes because it continues to undertake other efforts that are either high risk or lack a sound acquisition basis. Even with the risk reduction efforts, the agency's acquisition outcomes may be on a trajectory similar to that of prior years because it missed some risk reduction opportunities in fiscal year 2014.

MDA Took Some Actions in Fiscal Year 2014 to Improve Acquisition Outcomes by Reducing Risk

Several BMDS programs took actions in fiscal year 2014 to reduce risks and help the agency achieve or improve its desired acquisition outcomes.
In March 2014, we identified knowledge-based acquisition practices based on our prior work on best product-development practices and found that successful programs take steps to gather knowledge to confirm technology maturity and design stability. Aegis BMD reduced testing and production risks for its SM-3 Block IIA by achieving full design maturity at its critical design review—a key knowledge point for acquisition programs considering whether to start building and testing production-representative prototypes. As we previously reported, the Aegis BMD program took steps aligned with this best practice by revising its SM-3 Block IIA schedule to alleviate compressed events and include additional time for subsystem reviews, so that any issues could be resolved before the critical design review. As a result, the program conducted the critical design review in October 2013 with no major issues identified and 100 percent of its design drawings completed—a key indication that the interceptor's design is stable. This allows the program to move forward with flight testing and into initial production with assurance of design maturity. Also in fiscal year 2014, GMD took additional actions to reduce development and testing risk by incorporating an additional non-intercept flight test in fiscal year 2015. After successfully conducting FTG-06b in June 2014, the GMD program planned to conduct its next flight test—an intercept test called FTG-09—in the third quarter of fiscal year 2015. This test was designed, in part, to demonstrate two redesigned components intended to address issues discovered in prior flight test failures. However, the program subsequently encountered delays developing the redesigned components and could not support the planned test date for FTG-09.
According to program officials, the Director, MDA decided to repurpose FTG-09 as a non-intercept flight test, called GMD Controlled Test Vehicle (GM CTV)-02+, to provide the program with additional time to complete development of the redesigned components and to test additional objectives, such as the capability to discriminate the target from other objects during an engagement. The program previously conducted a non-intercept flight test, GM CTV-01, prior to conducting FTG-06b, which significantly contributed to the intercept flight test's success. Adding the non-intercept flight test GM CTV-02+ is a positive step, as it allows the program to collect valuable data on how the redesigned components operate in the in-flight environment, which reduces risk for the next intercept flight test. The Targets and Countermeasures program reduced BMDS testing risks by using a non-intercept flight test for a new target prior to its use in more complex and costly intercept tests. New, untested targets introduce higher risks of failure, and if a target fails, it often means costly and time-consuming retests, which could further delay the delivery of capability to the warfighter. In 2013, we recommended that MDA add risk reduction flight tests for each new target type. Risk reduction flight tests are conducted to confirm that a target works as intended and to discover and resolve issues prior to its use in an intercept test. MDA has not fully implemented this recommendation. However, the Targets and Countermeasures program successfully conducted a non-intercept flight test in October 2014 using a new target, called the Medium-Range Ballistic Missile Type 3 (MRBM T3), prior to its first planned intercept test in fiscal year 2016. This non-intercept flight test reduces testing risks, such as potential target failures, by giving the program insight into the target's performance, and it provides about a year to address any issues that may emerge.
If the program continues to integrate non-intercept flight tests into the test schedule prior to intercept tests when new target types are introduced, it may reduce the risk of failures in intercept test events. The Targets and Countermeasures program also adopted contract types aimed at providing incentives for the successful performance of targets. Such measures may help prevent the cost growth and performance problems seen in the past and minimize risk to the government.

Potential Exists to Improve Acquisition Outcomes for Several MDA Efforts

While MDA took actions to reduce risk, some of its elements are still using fundamentally risky acquisition strategies. MDA missed opportunities in fiscal year 2014 to further reduce risk, and it is planning future efforts that are either high risk or lack a sound acquisition basis because they do not follow some knowledge-based acquisition practices. We have previously identified several of these knowledge-based practices in our assessment of major defense acquisition programs. However, opportunities remain for MDA to reduce risk in these planned efforts, which would help the agency achieve its acquisition goals.

Aegis BMD: Opportunity Exists to Insert an Additional Flight Test to Assess Redesigned Component for the SM-3 Block IB Prior to a Multiyear Procurement Decision

Aegis BMD is currently redesigning a key component of its SM-3 Block IB interceptor to address prior test failures but has no plans to flight test the redesign before incorporating it into the interceptor. An SM-3 Block IB interceptor failed during a flight test in September 2013, when two SM-3 Block IB interceptors were launched against a single target (the first of which successfully intercepted the target).
Although a failure review investigation is ongoing to determine the root cause of the failure, preliminary findings indicate that the third-stage rocket motor—the component that controls the final maneuvers of the interceptor—experienced a failure similar to the one that occurred in September 2011. As a result of the interceptor failures during the two flight tests, Aegis BMD is redesigning components in the third-stage rocket motor and expects to complete and accept the final redesign specifications in the second quarter of fiscal year 2015. The Aegis BMD program is currently not planning to flight test the SM-3 Block IB with the redesigned third-stage rocket motor components before they are incorporated into the production line and deployed, in part, to support the regional defense of Europe. According to program officials and representatives of the contractor that produces the SM-3 Block IB interceptors, the effort to redesign components in the rocket motor is considered relatively straightforward and low risk. They also indicated that they do not believe a flight test to demonstrate the redesigned rocket motor components is necessary, as plans are in place to conduct ground tests. However, without flight testing the redesigned rocket motor components, MDA may not fully understand the interceptor's performance and capabilities or whether it works as intended. Additionally, Director, Operational Test and Evaluation officials stated that the environments of a flight test and a ground test are very different and that MDA has not been able to replicate the SM-3 Block IB interceptor failure through ground tests. As we have previously reported, both failures occurred during flight tests, not ground tests. Moreover, different issues with that same component have contributed to previous SM-3 Block IB program schedule delays and production disruptions, which resulted in a delayed production decision.
The Aegis BMD program is also moving forward with plans to initiate SM-3 Block IB full-rate production in fiscal year 2015 and plans to enter into a multiyear procurement contract in fiscal year 2016. Both the full-rate production decision and the multiyear procurement contract represent major commitments by the program and carry significant cost and schedule risks because the SM-3 Block IB with the redesigned third-stage rocket motor components has not been demonstrated through flight testing. When used appropriately, multiyear contracting can save money compared to a series of annual contracts by allowing contractors to use their resources more efficiently. However, multiyear procurement can limit DOD's budget flexibility and also entails certain risks that must be balanced against potential benefits, such as the increased costs to the government should the multiyear contract be changed. As MDA progresses with full-rate production of the interceptors and upcoming interceptor acquisition decisions, Aegis BMD still has an opportunity to insert a flight test into its test plan before inserting the redesigned third-stage rocket motor components into its production line. Until a flight test confirms that the redesigned components work as intended, MDA does not know if or how the changes will affect the interceptor's performance or whether other changes are needed. Since the redesign of the third-stage rocket motor components is not finalized, MDA has not accounted for the potential costs associated with it. Without knowing the extent of modifications needed to the SM-3 Block IB, the agency may not realize the full benefits associated with the multiyear procurement strategy.
GMD: Opportunity Exists to Reduce Risk Caused by the Program's Use of a Concurrent Strategy to Meet Its Goal of Fielding 44 Interceptors by 2017

The GMD program currently has multiple variants of its interceptor at different stages of development and production as a result of its developmental challenges and flight test failures. The production and integration of the CE-II interceptor was previously suspended following a failure in its December 2010 flight test. As we have previously reported, this flight test failed because of excessive vibration in the inertial measurement unit (IMU)—a component of the kill vehicle's guidance system. The program subsequently modified the IMU to mitigate the excessive vibration; demonstrated the modified IMU's effectiveness in a non-intercept flight test; and performed a successful intercept with a CE-II configured with the modified IMU during FTG-06b in June 2014. Following the successful flight test, the GMD program resumed integration and production of the CE-II interceptor. In addition to modifying the IMU, according to the Director, MDA, the program is also developing alternate divert thrusters (ADT)—a component that steers the kill vehicle in flight—to address the systemic problem of in-flight vibration. The program plans to implement this new component, along with changes to components in the booster, such as the flight computer, in new interceptor production in fiscal year 2017. In addition to the changes to the kill vehicle, table 4 below describes the current fleet of fielded interceptor versions and the program's plans to upgrade, retrofit, and redesign the CE-II interceptor.

CE-II with Modified IMU: The GMD program experienced a number of setbacks in fiscal year 2014 that increased risk to its goal of fielding 44 interceptors by 2017. For example, the program experienced delays in restarting interceptor production for the current interceptor version—the CE-II with the modified IMU.
Defective wiring harnesses were identified on all CE-II interceptors, including those previously fielded and those undergoing production. It was determined that an improper soldering application was used during wiring harness assembly that could later cause corrosion, which could have far-reaching effects because of the component's power and data interfaces with the kill vehicle's IMU. The program had previously experienced problems with the wiring harness and resolved the issue, but the corrective actions were not passed along to other suppliers. MDA assessed the likelihood of the component's degradation in the operational environment as low and decided to accept the component as-is, which helped mitigate the schedule delay but increased the risk of future reliability failures. An assessment conducted by the Defense Contract Management Agency found that any deviation from the program's kill vehicle delivery schedule of one kill vehicle per month could jeopardize the program's chances of meeting its goal of fielding 44 interceptors by 2017.

CE-II Block I: The GMD program is following a high-risk approach for acquiring the CE-II Block I, but an opportunity exists for the program to reduce risk by flight testing the CE-II Block I prior to starting the interceptor's production. In July 2014, we found that the program planned to start production of CE-II Block I interceptors for operational use almost two years before it conducts Flight Test GMD (FTG)-15—a demonstration flight test planned for the fourth quarter of fiscal year 2016 to determine whether the new interceptor components work as intended. According to acquisition best practices reported in our July 2002 assessment of DOD's weapon system acquisition process, the demonstration flight test should be conducted before production for operational use.
As we testified last year, the GMD program has had many years of significant and costly disruptions caused by production getting well ahead of testing and then discovering issues during testing. Even though assets have already been produced, MDA has had to add tests that were not previously planned and delay tests that are necessary to understand the system's capabilities and limitations. By continuing to follow a concurrent acquisition approach, the GMD program will likely continue to experience delays, disruptions, and cost growth. In addition, the GMD program has encountered issues with a number of the component modifications being developed for the CE-II Block I. The developmental issues have caused the program to delay necessary design reviews, generated significant schedule compression, and pushed out the completion of CE-II Block I deliveries to the second quarter of fiscal year 2018. For example, in November 2013, the program experienced an ADT qualification test failure as a result of design changes that were not verified prior to qualification testing. By omitting steps in the design process, the program increased the risk of costly, time-consuming problems occurring later in development. These risks materialized when the program failed the qualification test, resulting in a one-year delay to the ADT development effort, which the Defense Contract Management Agency assessed as having left the program with no schedule margin for performing the next flight test, GM CTV-02+, according to the program's current schedule. Although the recent delays to the CE-II Block I design reviews put the program behind schedule, they also provide the program with additional decision time—should program officials choose to use it—to assess the merits of conducting FTG-15 prior to starting CE-II Block I production for operational use.
GMD: Opportunity Exists to Incorporate Results of Alternatives Assessment Which Provide Valuable Knowledge for Its Kill Vehicle Redesign Plans

MDA is moving forward with the Redesigned Kill Vehicle (RKV) program—a new effort intended to address concerns about GMD's interceptor fleet reliability—prior to considering the benefits and risks of a broad range of options. Both the Director, Operational Test and Evaluation and the Under Secretary of Defense for Acquisition, Technology, and Logistics have previously voiced concerns with the CE-II's reliability. MDA validated these concerns when it acknowledged that the current kill vehicle design is costly to produce and sustain and requires the warfighter to fire more interceptors to overcome anticipated in-flight reliability failures. In the fall of 2013, DOD's Office of Cost Assessment and Program Evaluation began conducting a study to assess options, called an analysis of alternatives (AOA), for improving, augmenting, or providing an alternative interceptor to improve homeland ballistic missile defense. The assessment continued through fiscal year 2014 and is expected to be completed in fiscal year 2015. We previously reported that a key challenge facing MDA was improving investment decisions, and that an AOA can help establish a sound basis for new acquisition efforts. Robust AOAs are a sound investment practice because they objectively compare the costs, performance, effectiveness, and risks of a broad range of alternatives, which aid congressional and DOD decision makers in making an impartial determination to identify the most promising and cost-effective approach to pursue. We also found that MDA did not conduct AOAs for its new programs, which placed its programs at risk for cost, schedule, and technical problems as a result of pursuing potentially less than optimal solutions.
MDA began the RKV program, complete with a five-year funding request and schedule goals, before the AOA for homeland missile defense was completed. The program's purpose is to replace currently fielded interceptors with ones that are more testable, reliable, producible, and cost effective. According to MDA, this effort began in July 2013, and options for the RKV program were based on interim results from an ongoing GMD fleet assessment and an interim analysis MDA produced in support of the homeland missile defense AOA. MDA defined the RKV design parameters and assessed design concepts provided by industry. MDA proceeded to incorporate the RKV effort into GMD's current program of record and increased GMD's budget request for fiscal years 2015 through 2019 by over $700 million to fund the RKV's development. In addition, MDA added two RKV flight tests to the GMD test plan and collaborated with industry to finalize the RKV concept. MDA developed plans to conduct the first RKV flight test in fiscal year 2018 and begin delivering interceptors in fiscal year 2020. Although several plans have been established, MDA has not finalized its acquisition strategy for the RKV and, as such, the agency's plans are subject to change. While redesigning the GMD kill vehicle may be justifiable, MDA did not have the results of the AOA prior to making the determination to pursue the redesign effort. By not making the AOA a major part of the RKV effort, MDA runs the risk of starting the effort on an unsound acquisition footing and pursuing a kill vehicle that may not be the best solution to meet the warfighter's needs within cost, schedule, and technical constraints. In September 2009, we found that the effectiveness of AOAs for some major defense acquisition programs was limited because decision makers locked into a solution before an AOA was conducted and the results of AOAs came too late in the process.
However, in April 2014, the Director, MDA committed to following a knowledge-based approach to acquire the RKV, which is an encouraging sign that the agency intends to take actions to place this new investment on a sound acquisition footing. Moreover, the agency has made several design decisions, but it has not yet finalized the RKV's requirements or begun development activities. Thus, a window of opportunity still exists for MDA to make the AOA a major part of the redesign effort.

MDA Provides Limited Insight Into the Overall BMDS Integrated Capability Goals

The NDAA for fiscal year 2012 requires MDA to report capability delivery goals and progress at the element level, which enables Congress to track acquisition plans and progress of individual BMDS elements, including those at high risk of cost and schedule growth. However, this law does not require MDA to externally report key aspects of integrating two or more elements and delivering integrated BMDS capabilities, which allow the BMDS to achieve performance levels not realized by individual elements working independently. For example, integrating Aegis BMD with forward-based radars through C2BMC allows Aegis BMD to launch the interceptor earlier, before its own radar can acquire the threat, thus defending larger areas. Table 5 includes additional examples of planned integrated capabilities. Because MDA does not systematically report this information, external decision makers have limited insight into the interdependencies between element-level development efforts and whether these efforts are on track to reach the maturity needed for integration activities. Additionally, external decision makers may have limited insight as to whether MDA is on schedule to complete delivery of certain system-level capabilities or whether delivery has been delayed.
Internally, MDA reports overall BMDS capability goals in its systems engineering documents, but according to MDA officials, these management documents are not provided to external decision makers. MDA uses these documents to describe how element upgrades are synchronized to support deliveries of system-level capabilities, including the timeframes by which the elements need to complete their own development in order to be available for integration and test events. MDA also uses these documents to identify when particular BMD system-level capabilities are expected to be integrated and delivered in order to improve architectures that defend the U.S. homeland and U.S. forces and allies abroad. Additionally, the systems engineering documents identify test and assessment needs to confirm capability delivery goals, as well as potential challenges and risks to meeting the integrated capability delivery goals. While useful to MDA for management purposes, these documents in their entirety are too detailed for external oversight. Nonetheless, key sections of these systems engineering documents contain high-level information that would be useful to congressional decision makers, such as the schedule for delivery of BMD system-level capabilities and schedules for synchronized delivery of BMD elements to integration events that support these capabilities. Table 5 below illustrates how reporting on MDA's progress in achieving capabilities that hinge on integration is fairly limited, particularly when compared to our analysis of MDA's systems engineering documents. While the BAR may identify a key capability as present or as part of an individual element, it does not describe when the capability will actually be achieved, since that depends on a family of elements working together. The systems engineering documents also identify potential challenges to delivering system-level capabilities that the report to Congress does not.
As a result, congressional decision makers do not receive key information that could aid them in oversight of MDA's development efforts.

Conclusions

As with previous years, MDA had mixed progress in achieving its testing and delivery goals for 2014. MDA conducted two intercept and three non-intercept flight tests that demonstrated an increased capability for Aegis BMD and the GMD program. Moreover, several programs, such as the Aegis BMD SM-3 Block IIA and the Targets and Countermeasures program, took steps to reduce acquisition risk. At the same time, however, MDA is still allowing production to get ahead of testing (concurrency)—a practice which has consistently led to cost and schedule growth as well as performance problems in the past. For the Aegis BMD SM-3 Block IB, MDA will have a full rate production decision in fiscal year 2015 and plans to enter into a multiyear procurement contract in the following year. If it does not conduct a flight test of the redesigned components of its third-stage rocket motor before entering into full production, the Aegis BMD program is at risk for potential cost growth and schedule delays, affecting its planned interceptor production. A flight test serves as an opportunity to increase the confidence that the redesigned component works as intended and determine if any additional changes are necessary. For GMD, the program planned to start production of CE-II Block I interceptors for operational use almost two years before it conducts an intercept flight test in the fourth quarter of fiscal year 2016. In this case, recent development challenges have delayed design reviews, providing the program with additional time to assess the merits of conducting the demonstration flight test ahead of starting CE-II Block I production.
In addition, because the agency started the RKV program in the fall of 2013 rather than await the results of an ongoing AOA for homeland missile defense, congressional and DOD decision makers may not have the insight necessary to discern whether MDA's approach is the most promising, cost-effective solution to pursue. Though design decisions have been made, development activities have yet to begin, so there is still an opportunity for the Director, MDA to follow through on his commitment to follow a rigorous systems engineering approach to conduct the redesign effort. Lastly, although MDA has increased its focus on BMDS integration and delivering integrated system-level capabilities, it does not provide a systematic view of its plans and progress for delivering these capabilities to external decision makers. While the agency is currently not required to externally report key aspects of integration, insight into the interdependencies between element-level development efforts and whether these efforts are on track to reach maturity needed for integration activities is necessary to understand MDA's progress, as many of the capabilities envisioned for EPAA and other regional deployments hinge on successful integration.

Recommendations for Executive Action

We recommend that the Secretary of Defense take the following three actions to strengthen MDA's acquisition efforts and help support oversight.

1. To ensure that future efforts are aligned with a sound acquisition approach, which includes robust systems engineering and testing, we recommend that the Secretary of Defense direct the following two actions:

a) For Aegis BMD SM-3, DOD conduct a flight test to increase confidence that the redesigned SM-3 Block IB third-stage rocket motor component works as intended prior to inserting it into the SM-3 Block IB production line.
b) For GMD, delay production of CE-II Block I interceptors intended for operational use until the program has successfully conducted an intercept flight test with the CE-II Block I interceptor.

2. To ensure MDA makes sound investment decisions on improving homeland ballistic missile defense, the Secretary of Defense should direct MDA to make the department's analysis of alternatives an integral part of its planning effort and delay any decisions to begin development of the new GMD Redesigned Kill Vehicle until:

a) the department's analysis of alternatives is completed and identifies the best solution to pursue; and

b) congressional and DOD decision makers have been provided the results of that analysis.

3. Drawing from information it already has, the Secretary of Defense should direct MDA to report annually to Congress its plans for, and achieved progress in, developing and delivering integrated BMDS-level capabilities. This reporting should include:

a) planned integrated BMDS-level capabilities, including dates for when capability is planned for delivery; and

b) element-level upgrades needed for delivery of the integrated BMDS capability, including dates that these upgrades need to be available for integration into the BMDS capability.

Agency Comments and Our Evaluation

DOD provided written comments on a draft of this report. These comments are reprinted in Appendix I. DOD also provided technical comments, which were incorporated as appropriate. In responding to a draft of this report, DOD partially concurred with our first two recommendations regarding Aegis SM-3 Block IB and GMD and concurred with our third recommendation to report to Congress its annual progress towards planned integrated BMDS-level capabilities. DOD concurred with the first part of our recommendation to conduct an Aegis SM-3 Block IB flight test prior to inserting a redesigned third-stage rocket motor component into the interceptor's production line.
However, the department partially concurred with the second part of this recommendation, to delay production of the CE-II Block I interceptors until the program has conducted a successful intercept attempt with this new interceptor version. In its comments, DOD stated it will delay emplacement of CE-II Block I interceptors until the program has successfully conducted an intercept flight test with the CE-II Block I, but will continue production and final integration of interceptors. DOD also stated that delaying interceptor production and integration until the flight test is conducted would unacceptably increase the risk to reaching the Secretary of Defense mandate to achieve 44 emplaced interceptors by the end of 2017. Based on our past work examining weapon system acquisition and best practices, we found that successful programs follow a knowledge-based acquisition approach and achieve an appropriate balance between schedule and risk that does not include an undue amount of concurrency. However, MDA's current approach for acquiring the CE-II Block I carries a proven risk of undue concurrency because any issues uncovered during the intercept test could significantly affect the program. As we found in this report, such an approach has proven very costly for MDA. Because the agency moved forward with CE-II production prior to completing flight testing, test failures exacerbated the disruptions to the program and increased the CE-II's cost by $1.745 billion. We maintain our position that MDA should take the recommended action to delay production of CE-II Block I interceptors intended for operational use until the program has conducted a successful intercept flight test with the CE-II Block I, in an effort to align its efforts with a sound acquisition approach.
DOD partially concurred with our recommendation to delay any decision to begin development of the RKV until: 1) the department's AOA for improving homeland ballistic missile defense is completed and identifies the best solution to pursue; and 2) congressional and DOD decision makers have been provided the results of that analysis. In its response, DOD stated that interim results from the AOA have been used to inform planning decisions and that the results of the final analysis of alternatives will be provided to congressional and DOD leadership. The department also noted that an AOA does not make a "best solution" determination but rather provides an objective comparison of alternatives that allows the leadership to make the determination of what path the department should take. We agree that there is generally no requirement for an AOA to identify a single solution. However, the goal of an AOA is to identify the most promising options for decision makers to consider, rather than simply providing a comparison of alternatives that does not clearly indicate the most promising solutions, whether that be one or multiple options. DOD declined to commit to delaying any decision to begin developing the RKV and stated its investment decisions will be sound because interim results from the ongoing AOA have been used to inform early planning decisions, including an acquisition strategy framework for the RKV. While we recognize in this report that DOD's decision to redesign the GMD kill vehicle may be justifiable, by starting RKV development in advance of the AOA's completion, DOD runs the risk of locking into a solution that may not be the most promising and cost-effective option to pursue. In addition, MDA has previously attempted to start new major efforts that were not informed by AOAs, which DOD later cancelled because of the programs' high-risk acquisition strategies and technical challenges.
As such, we maintain that MDA should delay any decision to begin RKV development until an AOA that identifies the most promising solution(s) for improving homeland ballistic missile defense is completed and its results have been provided to congressional and DOD decision makers.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and to the Director, MDA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

Appendix I: Comments from the Department of Defense

Appendix II: Aegis Ballistic Missile Defense (BMD)

Some Aegis BMD Weapon System capabilities planned for the 2015 timeframe are at risk of delays and performance shortfalls due to technical challenges. The Aegis BMD Weapon System planned for 2018 is on track, but changes to the test program delay the assessment of a key capability. MDA revised its Aegis BMD baselines, limiting transparency by reducing insight into developmental activities.

Program Overview

Aegis Ballistic Missile Defense (BMD) is the naval component of the Missile Defense Agency's (MDA) Ballistic Missile Defense System (BMDS). It consists of the Aegis BMD Weapon System (AWS), including a radar, and Standard Missile-3 (SM-3) interceptors. MDA develops the AWS in versions called spirals that expand on preceding capabilities. Deliveries are planned to support MDA's Phased Adaptive Approach (PAA) to regional BMD, including the PAA in Europe (EPAA), in the 2015 and 2018 timeframes. For specifics on the Aegis SM-3 interceptors, see appendixes IV and V.
MDA delivered the first AWS spiral for PAA Phase 2, called AWS 4.0.2, in December 2012. However, additional spirals are being developed to ensure that MDA can meet PAA and EPAA Phase 2 goals. One of the spirals, AWS 5.0 Capability Upgrade (CU), expands the battle-space and raid size capability and improves performance against medium- and intermediate-range threats. It also expands capability to intercept threats in the terminal phase and allows for Integrated Air and Missile Defense (IAMD), where ballistic missiles and air threats (i.e., cruise missiles) can be engaged at the same time. Additionally, AWS version 4.1 is planned to provide similar BMD capabilities as Aegis BMD 5.0CU. MDA is developing AWS 5.1 to support PAA, including EPAA, Phase 3 in 2018. It is planned to further expand performance of AWS 5.0CU against longer range threats and intercepts of threats in the terminal phase. Its key capability—Engage on Remote—also allows the system to execute intercepts based on tracking information about the location of threats from remote sensors without the need for the Aegis radar to ever acquire them. MDA is developing this spiral in two phases: the first provides initial capabilities and integrates the weapon system with the SM-3 Block IIA, and the second delivers the remaining capabilities, including Engage on Remote, needed for EPAA.

Some AWS capabilities planned for the 2015 timeframe are at risk of delays and performance shortfalls due to technical challenges

MDA documents indicate that the AWS planned for deployment in support of Phase 2 of the PAA is at risk of schedule delays or performance shortfalls due to technical challenges. While MDA delivered initial Aegis BMD capabilities for PAA Phase 2 with AWS 4.0.2, its documents indicate that ship-based capabilities needed to meet certain PAA Phase 2 goals will not be available until the subsequent versions—AWS 4.1 and 5.0CU—are deployed.
However, the certification of AWS 4.1 has been delayed from the end of 2015 to the second quarter of fiscal year 2016, after EPAA Phase 2 is declared. Although MDA accelerated its AWS 4.1 schedule in 2014, including the certification date, by about three months, the new plan could present a challenge by compressing its test schedule. Additionally, both AWS 4.1 and 5.0CU may be certified for deployment before they complete planned development and testing. Specifically, both versions have technical challenges that may further delay the delivery of some capabilities or require fixes after delivery. Although AWS 5.0CU and 4.1 are planned to be certified in the fourth quarter of fiscal year 2015 and the second quarter of fiscal year 2016, respectively, both may not complete development and be fully integrated into the BMDS architectures until 2017. Technical challenges for AWS 4.1 may reduce its capabilities or further delay their delivery. For example, MDA's analysis indicates that AWS 4.1 raid size handling capabilities do not meet the planned requirement. To mitigate this issue, the program initially considered making modifications to the system. However, Aegis BMD program management officials told us that they rejected this option due to the expected cost and decided instead to lower the requirement. Additionally, the capability for intercepting missiles in the terminal phase of flight—designed to allow Aegis BMD ships to protect nearby ships from ballistic missiles—is also at risk because of technical challenges and may require an alternate design that could lead to cost growth and schedule delays. Moreover, MDA is considering conducting the flight test of this capability three months after AWS 4.1 is certified for operations, placing the program at additional risk should issues be discovered during flight testing.
AWS 5.0CU is expected to meet its 2015 delivery date, but current plans indicate that its enhanced capability to intercept ballistic missiles in the terminal phase of flight will be flight tested after delivery. Currently, the flight test is planned for the first quarter of fiscal year 2017—more than one year after delivery—placing the program at additional risk should issues be discovered during flight testing. The program also continues to discover software defects faster than it can rectify them, while also working to mitigate performance limitations from previous versions that remain applicable. The program made progress in rectifying prior shortfalls and has identified high-priority fixes that are still required. While the program plans to fix the key defects prior to delivery, some modifications will have to be made after it is deployed. In January 2014, the program reviewed the designs of both development phases for AWS 5.1. For the first phase, which is designed to integrate the SM-3 Block IIA with the weapon system and deliver other initial AWS 5.1 capabilities, the program demonstrated requisite maturity to proceed to the next stage of development. For the second phase, which builds on the first and is planned to complete AWS 5.1 capabilities, including Engage on Remote, the program met review goals by demonstrating that requirements needed to proceed with development have been well defined. Despite the progress, however, the assessment of the Engage on Remote capability has been delayed and may be at risk. This is a key capability for PAA Phase 3 (including for EPAA), which is designed to mitigate limitations posed by the range of the Aegis radar. The capability allows the ship to execute intercepts based on tracks from certain forward-based radars before the threat comes close enough for the Aegis radar to track it. As a result, it expands the space in which the ship can intercept the threat and allows for a greater defended area.
The full delivery and integration of this capability into the BMDS depends on Aegis BMD, as well as C2BMC and certain sensors. While the required AWS is currently projected to meet the date for a flight test scheduled to assess this capability, C2BMC will not. Rather than delaying the test, MDA will assess only part of the capability in the first quarter of fiscal year 2018, by substituting key aspects with another Aegis ship to directly provide tracks to the shooter. It is currently unclear whether MDA will introduce another test to assess the full remote engagement capability or add a requirement to two subsequent operational flight tests, which are designed to assess the BMD system-level performance of Phase 3 architectures. If MDA chooses the latter, it will take on additional risk by adding another system-level test objective to already complex flight test designs. For specifics on the C2BMC element, see appendix VI.

MDA Revised its Aegis BMD Baselines, Limiting Some Transparency by Reducing Insights into Developmental Activities

In fiscal year 2014, MDA changed its approach to managing the development of the AWS, combining all spirals into a single baseline and limiting some visibility into its progress for this year. MDA uses baselines to monitor the progress of its programs and report them to Congress annually in the BMDS Accountability Report. Previously, AWS spirals were included with associated interceptors, aligned to EPAA phases. In June 2014, MDA combined AWS 5.0CU, AWS 4.1, and AWS 5.1 into a single baseline, managed by a single program manager. According to Aegis BMD program management officials, the reorganization was expected to allow the program to realize efficiencies in managing the development of the AWS spirals because of the interdependency between the spiral development efforts. However, officials also told us that no tangible savings have been realized as a result of the reorganization.
Moreover, in order for baselines to be useful for managing and overseeing a program, they need to be stable over time so that progress can be measured and decision makers can determine how best to allocate limited resources. In April 2013, we found that activities from one Aegis BMD baseline were reallocated and combined with activities in other baselines, which limited our ability to assess them. Similarly, the proposed baseline for fiscal year 2015 reconfigures the way some content is presented, making comparison with 2014 baselines difficult or impossible.

Appendix III: Aegis Ashore

Aegis Ashore's first non-intercept flight test met its objectives. MDA plans one intercept flight test to assess the Romanian capability and one to assess the capability in Poland. Schedule delays and changed testing requirements compress the time for assessment of Aegis Ashore performance with other systems. Aegis Ashore made progress addressing challenges related to radio-frequency spectrum, but some challenges remain.

Program Overview

Aegis Ashore is planned to be a land-based, or ashore, version of the ship-based Aegis BMD. Aegis Ashore is to track and intercept ballistic missiles in the middle of their flight using Standard Missile-3 (SM-3) interceptors. Key components include a vertical launching system with SM-3 interceptors and an enclosure, referred to as a deckhouse, that contains the SPY-1 radar and command and control system. Aegis Ashore will share many components with the sea-based Aegis BMD and will use future versions of the Aegis BMD weapon system that are still in development. The Missile Defense Agency (MDA) plans to equip Aegis Ashore with a modified version of the Aegis BMD weapon system software.
A total of three Aegis Ashore facilities are planned: one test facility in Hawaii, an operational facility in Romania in 2015, and another operational facility in Poland in 2018 to support the European Phased Adaptive Approach (EPAA). DOD deployed the test facility in April 2014. It was used for the first Aegis Ashore flight test in May 2014 and will be used to flight test Aegis Ashore capabilities as upgrades become available. DOD plans to deploy Aegis Ashore in Romania with the Aegis BMD Weapon System (AWS) 5.0CU and SM-3 Block IB in the 2015 time frame. The program received all fabricated components at the site and is currently installing the facility. It plans to complete testing of this facility by the end of 2015. DOD plans to deploy the second operational facility in the 2018 time frame in Poland, equipping it and upgrading the facility in Romania with the AWS 5.1 and SM-3 Block IIA. It plans to begin site preparations in and begin fabrication in the middle of fiscal year 2016.

Aegis Ashore's first non-intercept flight test met its objectives

MDA successfully conducted the first flight test involving components of the Aegis Ashore system at the Aegis Ashore Missile Defense Test Complex in May 2014. During the test, a simulated ballistic missile target was acquired and tracked. This flight test supports development of the Aegis Ashore capability of Phase 2 of EPAA, planned to begin operations in Romania in 2015. During the test, the Aegis BMD Weapon System fired an SM-3 Block IB interceptor from the Vertical Launch System. Several functions were exercised during the test, but the primary purpose of the test, designated Aegis Ashore Controlled Test Vehicle (AA CTV)-01, was to confirm the functionality of Aegis Ashore by launching a land-based SM-3. The test met its objectives but also revealed a problem.
Specifically, there was an issue with how the system steered the interceptor that potentially resulted from differences between the sea-based and ashore versions of the system. Program management officials said this problem has been corrected and the correction will be installed in the AWS software before the next flight test occurs. MDA plans one intercept flight test to assess the Romanian capability and one to assess the capability in Poland Aegis Ashore is scheduled to participate in only two intercept flight tests—one to assess its Romanian capability and the other to assess the capability for Poland. These capabilities will be delivered to the warfighter in 2015 and 2018 for EPAA Phase 2 and Phase 3, respectively. Since 2010, the program has reduced its test plan from seven flight tests to only three, two of which involve intercepts. Both of these intercept tests—FTO-02 E1, scheduled for the third quarter of fiscal year 2015, and FTO-03 E1, scheduled for the third quarter of fiscal year 2018—are system-level operational flight tests designed to assess the integrated capability of BMD systems for the upcoming EPAA phase. According to program officials, the risk to understanding performance and limitations is small because the AWS slated for Aegis Ashore will be flight tested more extensively on ships. However, the conditions on land are different than at sea and require modifications to adapt the weapon system for operations on land. While leveraging ship-based flight tests to assess some Aegis Ashore capabilities saves testing costs, the non-intercept Aegis Ashore flight test held in May 2014 demonstrated that adaptations made for land-based operations may have unforeseen performance implications. Flight testing Aegis Ashore intercept capability just once prior to delivery may result in schedule delays, cost growth, or performance shortfalls, should issues be discovered during flight testing.
Schedule delays and changed testing requirements compress the time for assessment of Aegis Ashore performance with other systems Delays in construction at the Romanian operational site and changes to test requirements delay system-level simulated demonstration of new capabilities until just before Aegis Ashore delivery and limit the time to rectify issues, should they be discovered during testing. This test is designed to assess the interoperability of the operational Aegis Ashore in Romania with other systems slated for Europe. According to the program, the changes to test requirements were driven by independent testing officials. Previously, all Aegis Ashore tests were going to employ the test asset, which is deployed at the Pacific missile range site in Hawaii. However, MDA made the change to ensure that the operational Aegis Ashore is tested along with the other operational systems deployed in Europe. This test was delayed by about six months and is now scheduled to conclude just prior to the delivery of Aegis Ashore in 2015, which limits the time for assessment and for rectifying issues prior to delivery of the capability. Aegis Ashore made progress addressing challenges related to radio-frequency spectrum but some challenges remain The Aegis Ashore program identified potential workarounds to issues associated with operating the Aegis Ashore radar in the presence of European telecommunication infrastructure, but there could be additional challenges. The radio-frequency spectrum is the range of electromagnetic waves used to operate the SPY-1 radar employed by Aegis BMD, as well as to provide an array of wireless communication services to the civilian community, such as mobile voice and data services, radio and television broadcasting, and satellite-based services. While only part of the spectrum needed for radar operations is also used by Romanian telecommunications, the overlap presents challenges for the use of the radar.
In March 2011, April 2012, and April 2013 we highlighted issues that Aegis Ashore faces related to radio-frequency spectrum, including: (1) the possibility that the SPY-1 radar might interfere with host nation wireless usage; and (2) the need for the program and the relevant host nation authorities to work together to ensure that host nations approve use of the operating frequency needed for the SPY-1 radar. In March 2014, the Romanian National Allied Radio Frequency Agency granted DOD access to the entire spectrum needed for radar operations, but with limitations. These peacetime limitations include the directions in which the radar may be radiated as well as the times of day it may operate. According to the program, MDA has means to coordinate for additional radar operations if required, but the current access should be sufficient to maintain radar reliability. However, there could still be risk to some of the communications infrastructure. The program completed a study that included recommendations that could mitigate some of these effects by modifying Romanian civilian equipment that could be exposed to the periodic radar radiation. In fiscal year 2014 the program also began negotiations with Poland to secure the use of the Aegis Ashore radar across its entire operating spectrum at that site by 2018. If the mitigating procedures work in Romania, DOD expects them to work in Poland. However, the extent of interference during operations is still unknown. Poland has a much more congested spectrum space than Romania and, according to officials from European Command, could experience greater unanticipated interference problems. Additionally, various objects that are found on land and not at sea could interfere with the radar. For example, wind farms, which are located near the proposed site, may interfere with radar operations in some instances.
According to program management officials, this is not expected to be a significant issue because of where potential threats would be coming from and the reliance of Aegis Ashore on forward-based radars for early acquisition of incoming threats. Appendix IV: Aegis Ballistic Missile Defense Standard Missile-3 (SM-3) Block IB The Aegis BMD program conducted a successful intercept test with the SM-3 Block IB interceptor—FTM-22—on October 4, 2013, which is a key test for a full rate production decision. The SM-3 Block IB interceptor may not be flight tested again with the third-stage rocket motor (TSRM) component redesign, which increases production acquisition risk. The program plans for a multiyear procurement strategy in fiscal year 2016. Program Overview The Standard Missile-3 (SM-3) Block IB is a ship- and shore-based missile defense interceptor designed to intercept short- to intermediate-range ballistic missiles during the middle stage of their flight. The SM-3 interceptor has multiple versions in development or production: the SM-3 Blocks IA, IB, and IIA. The SM-3 Block IB features an enhanced target seeker capability for increased discrimination, an advanced signal processor for engagement coordination, an improved throttleable divert and attitude control system for adjusting its course, and increased range. The SM-3 Block IB interceptor is linked with the Aegis Ballistic Missile Defense (BMD) Weapon System 4.0.2, Aegis BMD 5.0 Capability Upgrade, and Aegis Ashore. For additional information about the Aegis BMD Weapon Systems, see appendix II; for Aegis Ashore, see appendix III. The SM-3 Block IB program largely overcame previous development challenges and successfully intercepted all targets in three flight tests. We previously reported that its production line had been repeatedly disrupted since 2011 due to flight test anomalies and that MDA has since rectified many of the issues identified.
However, as we reported last year, the final report for the investigation into a second interceptor failure, which occurred during a September 2013 flight test, was expected to be completed in December 2014, but according to officials, the report is further delayed. MDA is also preparing to award a production contract in fiscal year 2015. The Aegis BMD program conducted a successful intercept test with the SM-3 Block IB interceptor—FTM-22—on October 4, 2013, which is a key test for a full rate production decision On October 4, 2013, MDA conducted a successful operational flight test of the Aegis BMD system. The test resulted in the lethal intercept of a medium-range ballistic missile target in an operationally representative threat environment. The test, designated FTM-22, met its primary objective, which was to intercept a medium-range ballistic missile target. This test exercised the latest version of the second-generation Aegis BMD Weapon System, capable of engaging longer range and more sophisticated ballistic missiles. FTM-22 was the last test required for a full production decision—the key production authorization by the Under Secretary of Defense for Acquisition, Technology, and Logistics that would allow MDA to produce the remaining 366 of the 405 total interceptors. With the successful results of FTM-22, MDA anticipates receiving approval for full rate production of the SM-3 Block IB from the Under Secretary of Defense for Acquisition, Technology, and Logistics in fiscal year 2015. The SM-3 Block IB interceptor may not be flight tested again with the third-stage rocket motor (TSRM) component redesign, which increases production acquisition risk Concurrent with initiating full rate production, the Aegis program office, along with the contractor, is working on a redesign of third-stage rocket motor (TSRM) components. The TSRM is used to lift the interceptor out of the atmosphere and direct the warhead to the target. This component has contributed to test failures.
Specifically, although the failure investigation is ongoing, preliminary results indicate that the second interceptor failure, from flight test FTM-21, occurred in the TSRM. This failure is also related to the one that occurred in the September 2011 flight test FTM-16E2. However, although design changes are considered necessary, MDA does not plan to demonstrate via a flight test that the redesign works as intended prior to production. According to program officials and representatives of the contractor that produces the SM-3 Block IB interceptors, the effort to redesign components in the rocket motor is considered to be relatively straightforward and low risk. Program officials are currently planning to retrofit the interceptors that have already been produced during the four-year certification process. According to program officials, they had planned to select the final redesign early in the first quarter of fiscal year 2015. However, because of developmental and test challenges with the redesigned component, the program office delayed the selection until later in the fiscal year. Additionally, according to program officials, because the program has not selected the redesign, it is too early to determine the costs associated with inserting it into the interceptor, and those costs have not yet been accounted for. Consequently, until the program thoroughly understands the extent of needed modifications, if any, and their effects on performance as demonstrated through testing, its production strategy is at risk of cost growth and schedule delays. Additionally, different issues with that same component have contributed to previous SM-3 Block IB schedule delays and production disruptions. In 2014, we made a recommendation to delay full rate production until such testing demonstrates that the redesigned interceptor is effective and suitable.
As it stands, MDA noted that any changes to the SM-3 Block IB would not be included in the full production contract, and that the retrofitting may lead to unanticipated cost increases. As we have previously reported, MDA experienced these consequences in other elements when it pursued design changes concurrently with production. The program plans for a multiyear procurement strategy in fiscal year 2016 After the program enters into full production, MDA plans to enter into a multiyear procurement contract, a special contracting method that allows the agency to issue one contract to procure interceptors for up to five years, even though funds for the entire five years may not be available at the time of award. DOD would need to certify to Congress that the conditions for a multiyear procurement are met. Congress would then have to specifically authorize the multiyear procurement in law before MDA may award the contract. When used appropriately, multiyear contracting can save money compared to a series of annual contracts by allowing contractors to use their resources more efficiently. However, multiyear procurement also entails certain risks that must be balanced against potential benefits, such as the increased costs to the government should the multiyear contract be changed, and can limit DOD’s budget flexibility. MDA is currently redesigning components of the TSRM of the SM-3 Block IB interceptor, and it is unclear whether any additional changes will be needed. Once the redesigned interceptor’s performance has been demonstrated through flight tests, the program office may better understand the costs of incorporating those changes into ongoing production, as well as whether any other design changes are necessary. Consequently, the production strategy is at risk of cost growth and schedule delays.
Until the program thoroughly understands the extent of needed modifications and their effects on performance, not only is the program at risk of additional cost growth and schedule delays, but any planned cost savings associated with the multiyear procurement may also be affected. Appendix V: Aegis Ballistic Missile Defense Standard Missile-3 (SM-3) Block IIA The program completed its system-level review of the interceptor’s design and is transitioning to product development to further refine and mature the design and manufacturing processes. The program faces several challenges, including technical issues with a key component—the Throttleable Divert and Attitude Control System. The program has a number of flight tests to conduct and decisions to make prior to the Phase 3 declaration of the European Phased Adaptive Approach (EPAA). Program Overview The Standard Missile-3 (SM-3) interceptor has multiple versions in development or production: the SM-3 Blocks IA, IB, and IIA. The SM-3 Block IIA interceptor has a 21-inch body diameter, which provides increased speed, more sensitive seeker technology, and an advanced kinetic warhead. The SM-3 Block IIA is expected to defend against short-, medium-, and intermediate-range ballistic missiles. Additionally, most of the SM-3 Block IIA components will differ from those of other Standard Missile versions, requiring new technology to be developed for the majority of them. This interceptor is planned to have increased range compared to earlier SM-3s. For additional information on the SM-3 Block IB interceptor, see appendix IV. Initiated in 2006 as a cooperative development program with Japan, the SM-3 Block IIA program was added to the European Phased Adaptive Approach (EPAA) in 2009 to defend against longer range threats.
The SM-3 Block IIA interceptor is planned to be fielded with the Aegis Ballistic Missile Defense (BMD) Weapon System 5.1 by the 2018 time frame and is expected to provide engage on remote capability, in which data from other sensors is used to engage a target, and to expand the range available to intercept a ballistic missile. For additional information on Aegis BMD Weapon Systems, see appendix II. The program completed its system-level review of the interceptor’s design and is transitioning to product development to further refine and mature the design and manufacturing processes The program held a system-level review of the interceptor’s design in October 2013 and passed with no major action items; the design met all top-level requirements. Completion of at least 90 percent of engineering drawings at this point provides tangible evidence that the product’s design is stable, and a prototype demonstration shows that the design is capable of meeting performance requirements. At the critical design review, the SM-3 Block IIA program had completed 100 percent of its drawings and used a prototype of key components to test performance. As a result of the critical design review, the SM-3 Block IIA design is complete and is proceeding to product development and testing. In June 2014, MDA approved the transition of the SM-3 Block IIA from the technology development phase to the product development phase in its acquisition process. This is where the program further refines and matures the design and manufacturing processes. Once into initial production, the program would establish an initial base for production and deliver assets for continued testing. Additionally, in October 2013, the program completed a propulsion test vehicle test event called PTV-1. It demonstrated that the SM-3 Block IIA interceptor can launch from the vertical launch system. The SM-3 Block IIA program and expected baselines will be included in the BMDS Accountability Report.
These baselines—which include resource, schedule, and test baselines, among others—are used to guide and track development of ballistic missile defense capabilities. The program faces several challenges, including technical issues with a key component—Throttleable Divert and Attitude Control System The program is facing some technical challenges with its Throttleable Divert and Attitude Control System (TDACS), a key interceptor component that maneuvers the kill vehicle during the later stages of flight. The program designated the issues involving the TDACS (and its associated hardware) as a “moderate risk” that is driving up related costs significantly and causing schedule delays. MDA noted that the problems reduce the TDACS’s performance capabilities, although the component still meets MDA-set requirements. Because the part has no substitute or alternate supplier, concerns were raised about the delays affecting the program schedule. However, the contractor and program are working to ensure the TDACS and its components do not affect the program schedule. With its current efforts, the program office expects a reduction of risk regarding the TDACS issue, and it is working with the contractor to stabilize costs and schedules. Until these efforts succeed, the TDACS production and delivery costs and schedule may continue to be at high risk. In the past, the program experienced problems developing the TDACS, which has historically been a challenge for SM-3 development. Those challenges delayed the program’s schedule for conducting the system-level review and pushed flight tests to fiscal year 2016. The program has a number of flight tests and decisions to be made prior to the Phase 3 declaration of the European Phased Adaptive Approach (EPAA) The program has nine flight tests scheduled between fiscal years 2015 and 2018 and production decisions to make prior to the Phase 3 declaration of EPAA in late 2018.
The flight tests include four intercept tests and three operational tests. During that time period, the program will make its initial production decision in the middle of fiscal year 2017. Based on the test schedule that is laid out, the program does not have much time to make adjustments or changes if a problem emerges. As we have reported in the past, any decisions it makes will affect the overall program cost and timing. For example, program officials have stated that the program has not yet determined the number of development and production rounds to be produced. In addition, any decisions on future production plans will require negotiations with Japan, since many key components of the interceptors are developed there. Appendix VI: Command, Control, Battle Management and Communications (C2BMC) MDA is developing new capabilities for delivery to the current spiral. Some planned modifications to the existing spiral, in part, mitigate earlier schedule delays and capability gaps. The program faces delays caused by added development scope and funding issues. Key improvements to the battle management capability of interceptor systems are planned for delivery beyond 2020. Program Overview C2BMC is a global system that links and integrates individual missile defense elements. It allows users to plan ballistic missile defense operations, see the battle develop, and manage designated sensors. As the integrator, C2BMC allows the Ballistic Missile Defense (BMD) system to defend against more missiles simultaneously, to conserve interceptor inventory, and to defend a larger area than individual systems operating independently. The program delivers the software capabilities in spirals. The current spiral is Spiral 6.4, which became operational in 2011. It provides control of multiple radars.
It also processes ballistic missile tracks and reports these tracks to Ballistic Missile Defense System (BMDS) shooters, such as Ground-based Midcourse Defense (GMD), Aegis BMD, Terminal High-Altitude Area Defense (THAAD), and Patriot, which then use their own command and control and mission planning tools for stand-alone engagements. Upgrades to this version improve threat acquisition, raid handling, and discrimination, and are planned through 2016. The next spiral, Spiral 8.2, is intended to improve and expand the Spiral 6.4 capabilities, further improving integrated sensor management. The initial version, called Spiral 8.2-1, is planned for delivery in 2017. It will integrate additional sensors and further improve track processing in support of the Aegis BMD capability to launch an interceptor before its sensor can acquire the threat. Spiral 8.2-3 is planned for initial delivery in 2018. It includes discrimination upgrades and supports the capabilities of some systems to intercept a threat before their organic sensor can acquire that threat. Upgrades to Spiral 8.2-3 are planned past its initial delivery in 2018. The current spiral has been operational and in sustainment since 2011. The Missile Defense Agency (MDA) is developing and delivering capability upgrades before the next version is available in 2017. These upgrades are designed to mitigate existing capability gaps, some of which have been identified through testing. Key capability upgrades include: Regional Debris Mitigation, which allows the system to continue tracking and engaging threats when they are surrounded by a large number of objects, or debris. C2BMC deployed the initial capability in May 2014 in support of regional BMD. Boost Phase Cueing between two AN/TPY-2 radars, which enables one radar that is better positioned to acquire a threat while it is boosting to cue another radar that is better positioned for extended tracking, allowing for earlier tracking and the tracking of larger raids.
This capability was delivered in December 2014, in support of homeland defense. Discrimination Improvements for Homeland Defense–Near Term, through which C2BMC will integrate a set of element capabilities to improve BMDS engagement reliability, lethality, and discrimination, and as a result improve the warfighter shot doctrine, preserving limited inventory. This upgrade is planned for delivery in 2016, in support of homeland defense. Additional upgrades for this capability are planned to be included in future spirals. Some planned modifications to the existing spiral, in part, mitigate earlier schedule delays and capability gaps MDA is developing modifications to the fielded spiral of C2BMC that mitigate earlier delays of the next spiral. As we found in March 2014, the delivery of this new version, Spiral 8.2, has slipped from 2015 to 2017, with ripple effects on the capabilities of other BMD systems. For example, MDA delayed the delivery of a capability that improves the tracking of threats by reducing uncertainties about their location earlier in the engagement timeline, thus allowing Aegis BMD to launch its interceptors sooner, extending the area it can defend. This delay also created a misalignment between the schedules of C2BMC and two efforts that improve satellite capabilities, which are expected to complete development prior to 2015: 1. the Air Force’s upgrades to satellites that provide early warning of missile launches for homeland defense, called Space-Based Infrared System (SBIRS) Increment 2; and 2. MDA’s program for existing satellites to provide boost phase cues to land-based radars, in support of regional and homeland defense, called BMDS Overhead Persistent Infrared Architecture (BOA). In order to mitigate the misalignment with the Air Force’s SBIRS Increment 2 program, MDA developed a retrofit to C2BMC that ensured continued interoperability between the satellites and the homeland defense architecture.
Specifically, without the retrofit, C2BMC would have lost its ability to pass early warnings of missile launches to land-based radars and GMD, delaying the ability to track threats and develop plans to intercept them. MDA began testing the retrofit in January 2014 and will continue to do so through 2016. According to program documentation, the cost of this effort was $8.9 million. MDA delayed the delivery of boost phase cueing by BOA until Spiral 8.2 is available, but in 2014 it developed AN/TPY-2 to AN/TPY-2 cueing on boosting tracks. This capability is significantly more limited than the BOA cueing, since the satellite fields of view cover greater areas; however, it provides some of the same benefits, including earlier acquisition and the tracking of larger raids by the radar receiving the cues. Furthermore, according to MDA officials, the capability was developed to capitalize on the delivery of the second AN/TPY-2 radar to Japan and will only be applicable to homeland defense, while the satellite capability, once delivered, will support all BMD missions. MDA delivered this capability in December 2014. According to program documentation, the cost to develop the capability was $3.7 million. The program faces delays caused by added development scope and funding issues Added development scope, furloughs, and funding challenges could delay C2BMC milestones and the delivery of some capabilities. According to program documentation, some contract and program milestones were delayed, some by more than one year, in part to accommodate work needed to develop capabilities that were added over the last 2 years. Additionally, the program underestimated some of its costs in the last budget submission, which, combined with current and projected funding levels, requires it to reassess its plans.
While the program does not plan to develop new baselines until the fiscal year 2015 budget is finalized, documentation indicates that completion of key activities for the current and following spirals will need to be delayed. For example, the program plans to delay the assessment of the C2BMC capability that allows BMDS shooters to intercept threat missiles earlier, based on tracks provided by forward-based radars, before their own radars can acquire the threat. Specifically, MDA plans to complete the initial assessment of the remote engagement capability at the beginning of fiscal year 2019, rather than the end of fiscal year 2017. The agency will also assess the second phase of this capability delivery in the beginning of fiscal year 2021, rather than in the beginning of the second quarter of fiscal year 2019, as previously planned. While these new schedules still support the system-level declarations planned for regional and homeland defense in December 2018 and 2020, respectively, they leave little time to rectify issues, should they be discovered during testing. The program is also considering delaying Spiral 6.4 and 8.2-1 milestones, but as of now, there are no plans to delay the assessments and declaration of their capabilities. Key improvements to battle management capability of interceptor systems are planned for delivery beyond 2020 C2BMC has a limited battle management capability that currently allows only for control of radars and does not provide a system-level capability to coordinate engagement decisions. According to the Director, Operational Test and Evaluation, effective “battle management” requires C2BMC to not only collect and process information from sensors and weapons, as it currently does, but also to determine which threats should be engaged by which interceptor system to produce the highest probability of engagement success, and then to transmit this information back to the sensors and weapons.
While initially planned for delivery in 2018, such a capability is currently planned for Spiral 8.4, which is scheduled for delivery sometime after Spiral 8.2-3. Appendix VII: Ground-based Midcourse Defense (GMD) Flight Test GMD (FTG)-06b was a milestone achievement towards demonstrating that the Capability Enhancement (CE)-II version works as intended. Flight testing is several years behind; the CE-II demonstration cost increased to $1.98 billion. Delays in interceptor retrofits extend risk to the warfighter. GMD’s Redesigned Kill Vehicle (RKV) program has the potential to end two decades of multi-billion dollar efforts to fix and upgrade the kill vehicle. Program Overview The GMD program is a ground-based defense system designed to defend the United States against a limited intermediate- or intercontinental-range ballistic missile attack during the middle part of the missiles’ flight. Key components include a ground-based interceptor consisting of a booster with an exoatmospheric kill vehicle (EKV) on top, as well as a communication system and a fire control capability. The kill vehicle uses on-board sensors and divert capabilities to steer itself into the threat missile to destroy it. There are currently two versions of the kill vehicle that have been deployed: the initial design, known as the Capability Enhancement (CE)-I, and the follow-on design, known as the CE-II. In March 2013, the Secretary of Defense announced plans to increase the number of deployed GMD interceptors from 30 to 44 to add protection to the homeland and to stay ahead of long-range ballistic missile threats. The Missile Defense Agency (MDA) conducted a successful CE-II intercept test, called Flight Test GMD (FTG)-06b, in June 2014. MDA has since resumed CE-II interceptor production, with deliveries starting in the first quarter of fiscal year 2015.
In addition, MDA recently decided that a redesign of the GMD kill vehicle is required to address ongoing CE-II reliability concerns and has begun a new effort, called the Redesigned Kill Vehicle (RKV). MDA worked with industry to finalize the RKV concept, which, according to MDA, informed its schedule goals of conducting the first flight test in fiscal year 2018 and beginning new interceptor production in fiscal year 2020. FTG-06b was a milestone achievement for the GMD program and the first of several successful intercept tests needed to fully demonstrate that the CE-II interceptor works as intended. While the successful execution of FTG-06b was a major accomplishment for the program, additional testing is necessary to demonstrate the CE-II design works as intended and for the warfighter to have a full understanding of the interceptor’s capabilities and limitations. Some of the CE-II capabilities that both MDA and the warfighter have identified as needing to be demonstrated include: intercepting a target representative of an intercontinental ballistic missile; performing a salvo test in which two interceptors are utilized against a single target; and performing a long time-of-flight intercept. MDA currently plans to complete these tests by fiscal year 2024. Flight testing is several years behind; CE-II demonstration cost increased to $1.98 billion The path to FTG-06b was a disruptive period for the GMD program. The program initially planned to conduct its first CE-II intercept test, FTG-06, in the first quarter of fiscal year 2008, prior to fielding the first CE-IIs later in fiscal year 2008. However, in March 2009, we found that CE-II fielding had outpaced flight testing, as the program began fielding CE-IIs in advance of conducting FTG-06. The program subsequently experienced approximately six and a half years of delays, failing both of its CE-II intercept attempts and a CE-I intercept attempt.
With its successful execution of FTG-06b, the GMD program demonstrated that it had resolved some of the major technical problems discovered during the prior six-and-a-half-year period of test failures and development challenges. Although the program has resolved many of these technical challenges, it now faces the long-term effects of that period, as flight tests were delayed by several years in order for the program to overcome the test failures. For example, the program initially planned to conduct a salvo intercept test in early fiscal year 2009, following a successful CE-II intercept test. However, because of the test failures and development delays, the salvo test is now planned to occur in late fiscal year 2017—almost nine years later than initially planned. The cumulative effect of these delays has extended the completion of planned CE-II flight tests to fiscal year 2023—approximately five and a half years after the program has completed fielding the CE-IIs. Another long-term effect of the prior period of CE-II test failures is that the cost to demonstrate, as well as fix, the currently deployed CE-IIs has increased from an initial $236 million—the cost of the first CE-II flight test—to $1.981 billion. The need for failure reviews, additional flight tests, mitigation development efforts, and a retrofit program has increased the CE-II’s demonstration cost by $1.745 billion. Some of the mitigation development efforts are ongoing and, as such, the cost to demonstrate and fix the CE-IIs may continue to increase. Delays in interceptor retrofits extend risk to warfighter MDA’s fleet of currently deployed CE-I and CE-II interceptors is in need of upgrades and retrofits to address prior test failures.
However, in order to meet the goal of fielding 44 interceptors by the end of 2017 and also offset the unplanned cost increase to demonstrate and fix the CE-II, MDA plans to delay fixing the fielded CE-IIs until fiscal year 2015, with fielding completed in fiscal year 2016. MDA also plans to delay fixing the fielded CE-Is until fiscal year 2018, with that effort continuing beyond fiscal year 2020. In addition, according to program officials, the program does not plan to fix the currently deployed or newly produced CE-IIs’ divert thrusters, a component with known performance issues that helps steer the interceptor in flight. While MDA’s plan to produce new interceptors ahead of fixing the fielded interceptors may enable the program to field additional interceptors sooner, it also increases risk for the warfighter because the deployed interceptors do not have the fixes needed to address known issues. As such, the fielded interceptors are susceptible to experiencing the same failure modes exhibited during prior test failures, leaving the warfighter with an interceptor fleet that may not work as intended. According to MDA, the warfighter can compensate for some of these anticipated in-flight reliability failures by launching a number of interceptors to defend against an enemy attack. However, such an approach is inventory-intensive and limits the system’s raid handling capacity, reducing the system’s overall effectiveness in defending the homeland against ballistic missile attacks. In addition, since MDA tentatively plans to begin replacing the fleet of currently fielded interceptors with RKV interceptors starting in fiscal year 2020, it is unclear why MDA would expend the resources to fix the CE-Is only to begin replacing them two years later. MDA’s decision to redesign the GMD kill vehicle will be DOD’s seventh major attempt to fix and improve the current kill vehicle design. The current GMD kill vehicle was initially designed as a prototype in the early 1990s.
Since then, MDA has spent tens of billions of dollars to correct issues with the original prototype design, improve the kill vehicle’s performance, and increase the number of interceptors fielded to expand capabilities to defend the homeland from ballistic missile attacks. In the fall of 2013, MDA began a new effort to redesign the GMD kill vehicle, called the Redesigned Kill Vehicle (RKV), to address growing concerns within the department about the CE-II’s reliability. The RKV is in addition to efforts currently underway to upgrade and redesign the CE-II, as seen in table 12 below. MDA’s prior performance in upgrading and redesigning the GMD kill vehicle has achieved mixed results. Over the past 15 years, MDA has, on average, initiated redesign or upgrade efforts for GMD approximately every two years. These efforts, while perhaps needed, have proven to be very expensive and, according to MDA, did not achieve the goal of providing the warfighter with a reliable, producible, and cost-effective interceptor. A more recent example of updating the GMD kill vehicle is the CE-II Block I, which began in 2010 when MDA awarded a contract to Boeing to develop and sustain the GMD system. As part of that effort, Boeing was tasked with redesigning the CE-II EKV to address obsolescence and improve reliability, producibility, availability, and maintainability. MDA has since devised a new, multi-phased strategy to evolve the GMD system, and the planned improvements for the CE-II Block I are now limited to component modifications and quality improvements that were identified during the FTG-06a failure resolution effort. Many of the initial goals and objectives for the CE-II Block I appear to have been passed on to the RKV. According to MDA, it is pursuing the RKV to replace the current fleet of interceptors with new ones that are testable, reliable, more producible, and cost effective.
During an April 2014 Senate Armed Services Committee hearing, the Director, MDA, stated that the agency was committed to implementing a rigorous acquisition process for the redesign effort and would not circumvent sound acquisition practices. Also, in an April 2014 report submitted to Congress describing the RKV’s plans and objectives, MDA described some initial steps the agency is taking to employ a rigorous systems engineering process, such as including manufacturability, reliability, and testability criteria as critical design conditions. The agency’s recent commitment to follow a knowledge-based approach to acquiring the RKV is a positive indication that the agency is seeking to improve its investment decisions and achieve better outcomes. Our prior work on best practices for acquisitions found that successful programs take steps to confirm their technologies are mature, their designs are stable, and their production processes are in control. These steps help ensure a high level of knowledge is achieved at key junctures in development. Appendix VIII: Targets and Countermeasures (Targets) Key Findings for Fiscal Year 2014 The Targets program supported MDA’s test schedule and improved reliability by reducing failures. The program’s current contracting approach may result in better acquisition outcomes. The Targets program has flown targets in non-intercept tests that can reduce risks, but it continues to use new targets in more expensive and higher risk intercept tests. Program Overview MDA’s Targets and Countermeasures program (hereafter referred to as Targets or the Targets program) designs, develops, and procures missiles to serve as targets during the testing of missile defense systems. As such, targets are test assets and are not operationally fielded.
A typical target consists of a launch vehicle with one or more boosters, a control module that steers the vehicle after the booster stage separates, a payload module that can deploy countermeasures, and a surrogate re-entry vehicle. The Targets program acquires many types of targets covering the full spectrum of threat missile capabilities and ranges. While some targets have been used by the Missile Defense Agency’s (MDA) test program for years, others have recently been developed or are now being developed to more closely represent current and future threats. The quality and availability of these targets are instrumental to the execution of MDA’s flight test schedule. See table 13 for the quantities of targets planned for fiscal years 2014 through 2019 based on the range of the target. The Targets program supported MDA’s test schedule and improved reliability by reducing failures The Targets program successfully launched four targets in fiscal year 2014 to support MDA’s test schedule, including the first flight of a new medium-range target called the ARAV-TTO-E—described as a simple low-cost target by program officials. Specifically, the Targets program provided three short-range targets and one medium-range target to support Aegis testing requirements, including the full-rate production decision for the SM-3 Block IB interceptor. The Targets program provided seven additional targets in fiscal year 2014, including an intermediate-range target to support the retest of Ground-based Midcourse Defense’s (GMD) Capability Enhancement (CE)-II interceptor that failed during FTG-06a in December 2010. In the past, we have reported that the reliability and availability of targets have caused delays in MDA’s testing schedule. For example, target failures and anomalies have caused the Terminal High Altitude Area Defense (THAAD) program to change its flight test plan and decrease the number of flight tests.
However, while the program has improved its reliability by reducing the number of target failures (see figure 2), target availability remains a risk to MDA’s test schedule. From fiscal years 2010 through 2014, only one of the 46 targets launched failed. The Targets program may have reduced target failures during this timeframe, in part, by primarily using short-range targets that are less complex than medium-, intermediate-, and intercontinental-range targets. Moving forward, however, the majority of MDA’s tests will use medium-, intermediate-, and intercontinental-range targets. Another contributing factor to the reduction in target failures may be the additional time available to further develop targets while programs have been resolving developmental issues. For example, the GMD program’s CE-II interceptor failed during FTG-06a in December 2010, which resulted in the need for a retest. The GMD program’s first retest failed in fiscal year 2011, and it successfully conducted a retest in fiscal year 2014. These failures slowed the GMD program’s test schedule and, in turn, its target demands, providing the Targets program with additional time to further develop or resolve issues with any of its targets. As GMD and other programs resolve their developmental issues, the test plan becomes more aggressive, and target demands increase, so additional time to develop or address issues with targets may not be as readily available. Target availability remains a risk to the MDA test plan. For example, two of the Targets program’s medium-range targets—the MRBM T1/T2 and MRBM T3—have not been available as planned for some tests. Consequently, these tests either received substitute targets or were delayed. According to program officials, there was a delay in awarding the MRBM T1/T2 contract due to a procurement integrity allegation, which was not substantiated but affected the target’s availability for testing.
As a result, the first flight of this target was delayed two and a half years, from the third quarter of fiscal year 2014 to the first quarter of fiscal year 2017, and several substitute targets were needed for tests during that timeframe. The MRBM T3 had some development issues that had to be resolved, which delayed its availability for tests, according to program officials. Subsequently, the first flight of this target was delayed approximately one year, from the first quarter of fiscal year 2014 to the first quarter of fiscal year 2015. The program’s current contracting approach may result in better acquisition outcomes The program’s contracting approach for targets is potentially improving by moving from sole-source to competitive awards and restructuring contracts to better achieve desired outcomes. Past contracting decisions have had cost and schedule impacts. For example, the Targets program began work on a medium-range target—the eMRBM—in fiscal year 2010 under an existing contract. According to program officials, the eMRBM contract did not contain disincentives for poor performance or failures. Accordingly, when there were issues with the target during testing, program officials stated that they had to pay the contractor additional money to resolve the issues. Consequently, after developmental delays and spending $333 million for two of these targets—one successfully used in fiscal year 2013 and one planned to be used in fiscal year 2015—the remaining requirements were reduced due to affordability concerns, and the multiple tests that were scheduled to use this target either received substitute targets or were deleted. Conversely, in fiscal year 2014, the Targets program competitively awarded a contract for a new medium-range target—MRBM T1/T2—which, according to program officials, includes a range of incentives for successful execution during testing and a fixed price for the target to better control costs and achieve expected outcomes.
As such, if the target performs poorly or fails during a test, then according to program officials, the contractor may receive less money. Program officials explained that they have also adjusted the contracting approach to better control costs by only buying the number of targets needed and including options to buy additional targets at a pre-negotiated price if requirements change. For example, the MRBM T3 contract procures four targets, but it also has options for up to three additional targets. As structured, this gives the program some flexibility to adjust to changing requirements with less risk of impacts to cost and the test schedule. The Targets program has flown targets in non-intercept tests that can reduce risks, but it continues to use new targets in more expensive and higher risk intercept tests The Targets program successfully flew a new medium-range target during a non-intercept flight test in October 2014 that may enable the program to reduce risks associated with this target prior to its use in an intercept flight test in fiscal year 2015. Non-intercept flight tests can serve as risk reduction flights by confirming that the target works as intended and by discovering and resolving issues prior to the target’s use in a more costly and higher risk intercept flight test that is designed to test a system’s performance. However, the Targets program plans to use new intermediate- and intercontinental-range targets for the first time in intercept flight tests in fiscal years 2015 and 2016, respectively. Program officials explained that many of the components in the intermediate- and intercontinental-range targets have already been flown, and based on previous flight data and modeling and simulation, they have a high level of confidence that the targets will work as intended. The Targets program is also taking other measures, such as component-level ground tests and pre-test trials, to identify and resolve any issues prior to the planned intercept tests.
We have previously recommended that MDA conduct risk reduction flight tests—non-intercept tests—for each new target, but it has not fully implemented this recommendation, and program officials maintain that the decision to use new targets in intercept flight tests will continue to be based on the associated risks. Appendix IX: Terminal High Altitude Area Defense (THAAD) THAAD delivered assets for operational use prior to demonstrating their capability in a flight test. THAAD delivered 10 interceptors to complete its second lot in fiscal year 2014. THAAD’s streamlined battery configuration may enable cost savings and early delivery of the remaining batteries. A new transport method may double the number of THAAD interceptors that can be transported via C-17 aircraft in fiscal year 2015. Program Overview THAAD is a rapidly deployable ground-based system able to defend against short- and medium-range ballistic missile attacks during the middle and end stages of a missile’s flight. THAAD is organized as a battery that consists of interceptors, multiple launchers, a radar, a fire control and communications system, and other support equipment. The first two batteries have been conditionally accepted by the Army for operational use. In December 2014, THAAD received urgent materiel release approval from the Commanding General of the United States Army Aviation and Missile Command to enable an earlier delivery of equipment for the next two batteries for operational use to meet the Army’s request to support urgent warfighter needs. THAAD plans to continue production through fiscal year 2025, for a total of 7 batteries, 503 interceptors, and 7 radars. THAAD has two development efforts—THAAD 1.0 and THAAD 2.0. THAAD 1.0 covers the production of the batteries, interceptors, and supporting hardware and provides the warfighter with initial integrated defense against short- and medium-range threats in one region.
THAAD 2.0 consists primarily of software enhancements that expand THAAD’s ability to defend against threats in multiple regions and at different ranges and add debris mitigation and other upgrades. THAAD currently has two hardware configurations—one for the first two batteries and another that addresses obsolescence issues for the remaining five batteries. However, the program plans to equip the first two batteries with the upgraded hardware by fiscal year 2018. THAAD is testing the new configuration that addresses obsolescence issues in two upcoming flight tests in fiscal year 2015. THAAD delivered assets to defend against an intermediate-range threat, although this capability is not planned to be demonstrated in a flight test until the fourth quarter of fiscal year 2015. As such, THAAD program officials currently have limited insight into whether and how THAAD will perform against an intermediate-range threat. However, program officials expect THAAD to perform successfully based on modeling and simulations and analysis from a previous flight test that used a medium-range target with a velocity close to that of an intermediate-range target. If THAAD does not perform as expected during this test, the program may have to retrofit its currently deployed assets at additional cost. THAAD delivered equipment for its next two batteries for operational use, although it has not flight tested the changes made to this equipment to address obsolescence issues. THAAD planned to release these two batteries for operational use in the fourth quarter of fiscal year 2016, but the Army requested an urgent materiel release enabling earlier operational use to meet warfighter needs. However, these two batteries have new hardware and software to address obsolescence issues, and the two flight tests to assess these changes are not scheduled until the fourth quarter of fiscal year 2015.
Without the flight tests to confirm that the obsolescence issues have been corrected, the program may have delivered assets to the Army that may not work as intended or that may require fixes. THAAD delivered the remaining 10 interceptors to complete its second lot in fiscal year 2014, which represents a 60 percent decrease in production from the prior fiscal year. Program officials attribute the decrease in production to funding challenges related to sequestration. Although the program only delivered 10 interceptors in fiscal year 2014, it was able to avoid costs associated with decreased production by combining the build of subassemblies for its next lot of interceptors with some foreign military sales. According to program officials, this allowed the program to avoid over $100 million in costs because the production rate remained at a level sufficient to avoid the need for additional funding to accommodate the decrease. The first two THAAD batteries conditionally accepted by the Army for operational use have a configuration that includes two fully interchangeable tactical station groups—one for fire control and communications and another as a backup. According to program officials, the warfighter has been primarily using one tactical station group and using the other for training when needed. As such, the program streamlined the battery configuration to a single tactical station group, and it is developing a table-top trainer and portable planner—which program officials liken in size and functionality to a computer—to take over the training role of the second tactical station group. The remaining batteries will have the streamlined configuration, and program officials noted that they will also update the first two batteries with the streamlined configuration when modernizing them with the changes to address obsolescence.
Program officials believe that this streamlined battery configuration has reduced costs for the program, which may allow the early delivery of the remaining batteries. A new transport method may double the number of THAAD interceptors that can be transported via C-17 aircraft in fiscal year 2015 Program officials explained that currently four THAAD interceptors can be transported at one time in a C-17 aircraft, but the program has designed and tested a new missile transport method that may allow it to double the capacity per aircraft in fiscal year 2015. The program is spending approximately $59 million to achieve this doubled capacity and plans to have the ability to equip all of its batteries with this upgrade by fiscal year 2019. Program officials assert that this new missile transport method, if fully implemented, may provide efficiencies for the warfighter by reducing the number of C-17 aircraft flights needed to transport THAAD interceptors to needed locations. Appendix X: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, LaTonya Miller, Assistant Director; David Best; Helena Brink; Joe Kirschbaum; Anh Nguyen; Wiktor Niewiadomski; Kenneth E. Patton; Karen Richey; Steven Stern; Robert Swierczek; Brian Tittle; Hai V. Tran; and Alyssa Weir made key contributions to this report.
Since 2002, MDA has spent approximately $105 billion, and it plans to spend about $38 billion more by 2019, to defend against enemy ballistic missiles. MDA is developing a BMDS composed of a command and control system, sensors that identify incoming threats, and intercepting missiles. For over a decade, GAO has reported on MDA's progress and challenges in developing and fielding the BMDS. GAO is mandated by law to assess the extent to which MDA has achieved its acquisition goals and objectives, as reported through its acquisition baselines, and to report on other acquisition issues as appropriate. This, GAO's 12th annual report, examines progress and challenges in fiscal year 2014 associated with MDA's: (1) individual element testing and asset delivery goals, (2) efforts to reduce acquisition risks, and (3) reporting on the BMDS integrated capability. GAO examined MDA's acquisition reports and assessed them against GAO's acquisition best practices, analyzed baselines reported to discern progress, and interviewed DOD and MDA contractor officials. In fiscal year 2014, the Missile Defense Agency (MDA) made some progress in achieving its testing and delivery goals for individual elements of the Ballistic Missile Defense System (BMDS), but was not able to complete its planned fiscal year goals for testing. MDA conducted two intercept tests demonstrating an increased capability. However, it did not complete six planned flight tests for a variety of reasons, including test delays and retests to address previous failures, which limits the knowledge gained in fiscal year 2014. Additionally, several BMDS elements delivered assets in fiscal year 2014 without completing planned testing, which increases cost and schedule risks for an individual system and the BMDS as a whole. In one instance, the Terminal High Altitude Area Defense element delivered assets although its capability has not been demonstrated through flight testing.
Potential also exists to reduce acquisition risks for several MDA efforts that are pursuing high-risk approaches that do not adhere to an approach which encourages accumulating more knowledge before program commitments are made and conducting testing before production is initiated. Specifically: Aegis Ballistic Missile Defense (BMD)—MDA demonstrated that it had matured the Aegis Standard Missile-3 (SM-3) Block IIA interceptor's design prior to starting production, a best practice. However, Aegis BMD is still addressing issues in the Aegis SM-3 Block IB interceptor revealed through prior test failures and is planning to award a multiyear procurement contract prior to flight testing the final design. If design changes are later needed, the cost, schedule, and performance impact could be significant. Ground-based Midcourse Defense (GMD) system—MDA reduced risk by adding a non-intercept flight test in fiscal year 2015 which allows the program to collect valuable data on redesigned components. However, GMD increased risk to the warfighter by prioritizing new interceptor production over fixing previously deployed interceptors and resolving known issues. In addition, MDA has decided to redesign the GMD kill vehicle prior to determining whether the effort is the most cost-effective solution. Unless MDA aligns its future efforts for Aegis and GMD with acquisition best practices, the agency's acquisition outcomes may be on a similar trajectory to that of prior years, incurring both cost growth and schedule delays. MDA is working to increase the extent to which the various elements of the BMDS are capable of working as one integrated system, but the agency reports limited information to Congress regarding its integration goals and its progress against these goals. Integration of the BMDS is important because it improves the system performance beyond the abilities of individual elements. 
Because MDA is not required to provide this information in its reports and briefings to Congress, congressional decision makers have limited insight into the planned BMD system-level capabilities, the supporting element-level upgrades, and how element-level efforts are synchronized to ensure timely delivery.
Background Enacted by Congress in 1972, FACA responded to concerns that federal advisory committees were proliferating without adequate review, oversight, or accountability. Congress included measures in FACA intended to ensure that advisory committees responded to valid needs, that the committees’ proceedings were as open as feasible to the public, and that Congress was kept informed of the committees’ activities. FACA articulates certain principles regarding advisory committees, including broad requirements for balance, transparency, and independence. For example, regarding the requirement for balance, FACA requires advisory committees to have membership fairly representing an array of viewpoints and interests. FACA also requires agencies to announce committee meetings in advance and in general, to hold open meetings. FACA also sets forth other requirements for advisory committee formation, their operations, and how they provide advice and recommendations to the federal government. For example, FACA stipulates that Congress, the President, or federal agencies are authorized to establish federal advisory committees. Some federal advisory committees are created by an agency at the direction of a statute, but all other agency-created advisory committees are commonly referred to as “discretionary” federal advisory committees. The Office of Management and Budget (OMB) has established a maximum number of discretionary federal advisory committees that each agency could establish, which varies by agency. Additionally, the subcommittees of a federal advisory committee are generally exempt from following FACA requirements if they report to a parent advisory committee. Alternatively, if a subcommittee provides advice and recommendations directly to a federal agency, it is required to comply with FACA requirements. FACA does not require agencies to implement the advice or recommendations of their federal advisory committees; advisory committees are by design advisory. 
For regulatory agencies governed by the Administrative Procedure Act, such as FCC, the relationship between consideration and implementation of a federal advisory committee’s advice and recommendations is complicated because those agencies must follow certain rules and processes in their rulemaking efforts. Consequently, while an advisory committee’s advice and recommendations may form the basis for a regulatory agency’s notice of proposed rulemaking, factors beyond the advisory committee’s advice may play a role in determining what action an agency ultimately takes. GSA, through its Committee Management Secretariat, is responsible for prescribing administrative guidelines and management controls applicable to federal advisory committees governmentwide. While GSA does not approve or deny agency decisions about creating or managing advisory committees, GSA has developed regulations, guidance, and training to help agencies implement FACA requirements. GSA also created and maintains an online FACA database (available to the public at www.fido.gov/facadatabase), which contains information about each federal advisory committee, including committee charters, membership rosters, budgets, and, in many cases, links to committee meeting schedules, minutes, and reports. Not every advisory committee or group that provides advice or recommendations to an agency is subject to the requirements prescribed in FACA. To be subject to FACA, a discretionary committee or group must have been created to provide advice or recommendations for the President or one or more agencies or officers of the federal government. Groups assembled only to exchange facts or information with federal officials are not federal advisory committees, nor are certain groups made up of only state or local officials. FACA explicitly exempts some committees from its requirements, and certain other groups are exempt under other statutes.
For example, in some instances the Unfunded Mandates Reform Act of 1995 exempts advisory committees that are composed wholly of federal, state, local, or tribal government officials. Groups that are exempt from FACA are not required to comply with the procedural and transparency provisions of the act. FCC’s Federal Advisory Committees Address a Variety of Technical and Operational Issues, and Stakeholders View the Committees as Functioning Effectively FCC established seven discretionary federal advisory committees to examine a range of technical and operational telecommunications issues. The stakeholders we contacted—including committee members, FCC officials, and trade and interest group representatives—viewed the operations of the committees as effective. Almost all of the advisory committees established subcommittees to address specific topics, and we found that the majority of the advisory committees’ work is completed at the subcommittee level. FCC’s Seven Current Discretionary Advisory Committees Examine Technical and Operational Telecommunications Issues FCC currently has seven advisory committees that provide advice and recommendations to the agency on numerous technical and operational telecommunications issues. These issues range from interoperability and security of communications networks to consumer and diversity concerns regarding telecommunications markets. All of FCC’s federal advisory committees are “discretionary”—that is, they were established by FCC under its own authority to create such committees. In fiscal year 2004, FCC’s advisory committees had a combined budget of $1.2 million. Following is a brief description of the objective of each of FCC’s seven federal advisory committees. See appendix II for additional information on each of the advisory committees. 
Advisory Committee for the 2007 World Radiocommunication Conference: Provides advice and technical support to FCC, and recommends proposals for the 2007 World Radiocommunication Conference.

Advisory Committee on Diversity for Communications in the Digital Age: Makes recommendations to FCC about policies and practices that will further enhance the ability of minorities and women to participate in telecommunications.

Consumer Advisory Committee: Makes recommendations to FCC regarding consumer issues in telecommunications and strives to increase consumer participation in proceedings before FCC.

Media Security and Reliability Council: Provides recommendations to FCC and industry on how to implement a comprehensive national strategy for broadcast and media sustainability in the event of terrorist attacks, natural disasters, and all other threats or attacks nationwide.

Network Reliability and Interoperability Council: Provides recommendations to FCC and to the communications industry that will help to ensure the reliability and interoperability of wireless, wireline, satellite, cable, and public data networks, including emergency communications networks.

North American Numbering Council: Provides advice and recommendations to FCC that foster efficient and impartial administration of the North American Numbering Plan.

Technological Advisory Council: Provides technical advice to FCC and makes recommendations on technological and technical issues related to innovation in the communications industry.

At the time we conducted our study, FCC’s federal advisory committees had over 280 members. These members represent numerous sectors across telecommunications, including industry, academia, advocacy groups, private consulting, and government. As shown in figure 1, our survey data indicate the majority of members come from private businesses.
According to FCC’s Chairman, the Commission creates federal advisory committees at its discretion to advise the agency on operational or technical issues associated with FCC’s statutory responsibilities. All of FCC’s advisory committees are chartered for a 2-year period. Recommendations for forming federal advisory committees at FCC can come from a variety of sources. For example, two designated federal officers said that problems in the telecommunications industry, such as widespread telecommunications outages and telephone numbering shortages, were the impetus behind the creation of two committees. When considering which advisory committees to establish, FCC said that the Commission’s committee management officer—the agency official responsible for managing and overseeing the advisory committees—evaluates the usefulness and mission of a potential committee to ensure the benefits of establishing the committee are clear. While the committee management officer determines which advisory committees are to be established, the opinions of FCC’s Chairman are taken into consideration. As prescribed by OMB, FCC is limited to eight discretionary federal advisory committees, but FCC officials we interviewed said that this limit does not pose a problem, and there are no plans to create additional committees at this time.

Committee Members, FCC Officials, and Interest Group Representatives Generally Believe That Advisory Committees Function Effectively

Advisory committee members who responded to our survey, as well as the FCC officials and trade and interest groups we contacted, said the committees generally operate and function effectively. For example, 87 percent of responding committee members were satisfied with the clarity of their committees’ operating rules and procedures. Regarding committee operations, 73 percent of the responding members said they were satisfied with how the committees use technology to facilitate meetings.
Furthermore, an overwhelming majority of members agreed that fellow members represent parties that have an interest in the mission and agenda of the committee, and that committee members have sufficient knowledge and experience to provide input on the issues addressed by the committee. Overall, 82 percent of responding committee members were satisfied with their experience of serving on the committee, and almost 90 percent responded that they would be interested in serving on the committee again. Moreover, most of the advisory committee members who served previously also told us that they were satisfied with their committee experience. Most of the committee chairmen we interviewed believed the advisory committee process works well. For example, one chairman said the process facilitates communication, input, and openness. Another committee chairman told us that the advisory committee process is an effective venue for both FCC and industry to participate in the agency's rulemaking process. He further stated that FCC receives a lot of talented advice at little cost, which is important because, in his view, FCC lacks adequate technical expertise. Another committee chairman said that FCC staff do not have the level of expertise that exists on the advisory committee and could not afford to hire such experts. FCC officials we contacted, including bureau chiefs and designated federal officers, also told us that the advisory committee process generally functions well. For example, one bureau chief told us that the advisory committee structure gives FCC access to industry expertise at a minimal cost to taxpayers. Further, he commented that FACA requirements help FCC ensure that committee operations remain transparent and accessible to the public. According to another bureau chief, advisory committees provide a unique opportunity for the top experts in important technical fields to provide FCC with the benefits of their knowledge in a nonadversarial context. 
One FCC designated federal officer said his committee works tremendously well, with a lot of talented people working to achieve the committee’s objectives. FCC’s Chairman and one commissioner, as well as all the bureau chiefs that we contacted, agreed that effective committee operations enhanced the ability of their advisory committees to reach consensus and subsequently produce useful recommendations for FCC. Several of the trade and interest group representatives that we interviewed also told us that FCC advisory committees function effectively. For example, several representatives remarked that the advisory committee process is an effective forum for bringing people together from various industry sectors to collaborate on advisory committee issues. Further, the trade group representatives generally believe FCC’s advisory committees address current and important telecommunication issues and that members have sufficient knowledge to address committee issues. However, three trade group representatives told us that the advisory committee process was ineffective because FCC does not always implement the committees’ advice. To increase the effectiveness of advisory committee functions, several trade groups provided suggestions for possible improvements. For example, one representative said that FCC could increase the use of subcommittees, while another told us that FCC should provide funding for travel and develop other methods to improve committee participation among underserved groups.

Subcommittees Perform the Majority of FCC’s Federal Advisory Committees’ Work

Most of FCC’s federal advisory committees have subcommittees that collect information and develop draft recommendations for the full committee, with only one committee—the Technological Advisory Council—having no subcommittees.
The committee chairmen and designated federal officers told us that a substantial amount of committee work is completed at the subcommittee level because, generally, the full committees meet less frequently than the subcommittees. In fact, subcommittees often conduct their work in informal ways, such as via telephone and videoconferencing, e-mail exchanges, and additional in-person meetings. The committee members who responded to our survey agreed, with 79 percent responding that more work is completed at the subcommittee level than at the full committee. About 76 percent of the survey respondents also reported that they have served as subcommittee members. Committee members generally volunteer to serve as subcommittee members, but they can also be selected for service by the committee or subcommittee chairman, an FCC official, or other committee members. Our survey results also showed that approximately 80 percent of responding members were satisfied with subcommittee operations. We heard from committee chairmen that most subcommittees set their own agenda and have strong participation by their members. Further, two committee chairmen said the meetings for their subcommittees are open to the public. An advisory committee’s subcommittees are not subject to FACA requirements if they report to a parent advisory committee and the parent advisory committee deliberates the subcommittee’s recommendations before adopting and passing them on to the agency. According to our survey, approximately 68 percent of the responding committee members said their committees deliberate the proposals of their subcommittees from a moderate to very great extent. As shown in figure 2, the extent to which full committees deliberated the proposals of their subcommittees varied by committee.
FCC and Its Advisory Committees Adhere to FACA and Related Regulations, but Committee Members Are Not Always Clear About the Role FCC Expects Them to Play When They Are Appointed to Committees

FACA governs the establishment, operation, and termination of federal advisory committees. Under FACA and GSA regulations, agency heads are responsible for the administration of their advisory committees, including establishing key administrative functions for the advisory committees, forming the committees, ensuring committee operations are transparent, and ensuring that the products advisory committees produce are fully independent of the agency that established the committee.

Agencies Must Have Certain Organizational Procedures in Place to Support the Establishment of Federal Advisory Committees

To establish the framework in which the federal advisory committees can function, FACA requires that agencies, among other things, have (1) a “committee management officer,” (2) operating guidelines for the agency’s federal advisory committees, and (3) a process in place to report key data about each committee to GSA.

Committee management officer: Agencies must designate a committee management officer to assist with the management of their advisory committees and to oversee the agency’s compliance with FACA requirements. FCC has delegated this responsibility to the Managing Director of FCC. In addition to advisory committee oversight, the Office of the Managing Director is generally responsible for activities involving the administration and management of FCC, such as developing and managing the agency’s budget and financial programs, and overseeing the agency’s personnel management process and policy. We found that the committee management officer delegated many of his advisory committee oversight responsibilities to an advisory committee liaison.
Administrative guidelines: The act also requires agency heads to issue administrative guidelines and management controls applicable to their agency’s advisory committees. When we asked FCC if the Commission had administrative guidelines, as required by FACA, an FCC official provided us with guidelines that had expired in August 1998. In May 2004, the deputy committee management officer told us that FCC continued to use the expired guidelines on an informal basis and that, as a result of our review, the Commission was planning to reinstate the guidance in the near future. On September 9, 2004, FCC reinstated its administrative guidelines with no revisions.

Reporting committee information to GSA: Agencies are required to report information electronically on each advisory committee using a governmentwide shared Internet-based system that GSA maintains. The information contained in this Internet-based system (or database) can be used by the Congress to perform oversight of related executive branch programs and by the public, the media, and others to stay abreast of important developments resulting from advisory committee activities. FCC has been submitting committee information to GSA as required.

FACA and Regulations Impose Several Requirements on the Formation of Federal Advisory Committees

The act has various requirements that relate to the formation of advisory committees. In particular, committees must (1) be in the public interest and related to the agency’s area of responsibility, (2) have a charter, (3) have a designated federal officer, and (4) have balanced membership.

Public interest: FACA requires the agency establishing an advisory committee to find that the committee is in the public interest and related to the agency’s area of responsibility.
According to GSA, agencies must provide a statement that their advisory committees are in the public interest and essential to agency business in their justifications submitted to GSA for establishing an advisory committee. FCC provided such a statement to GSA for all of its advisory committees.

Charter: FACA requires all committees to have a charter that contains specific information, including the committee’s scope and objectives, a description of duties, the period of time necessary to carry out its purposes, the estimated operating costs, and the number and frequency of meetings. All of FCC’s advisory committees are operating under charters that meet these requirements.

Designated federal officer: The act requires agency heads to appoint a designated federal officer for each committee to oversee the committee’s activities, call the meetings of the committee, approve the agendas, and attend the meetings. Each of FCC’s advisory committees has a designated federal officer who abides by these requirements.

Balanced membership: FACA also requires that the membership of committees be fairly balanced in terms of points of view represented. Our survey of committee members, as well as discussions with FCC officials, indicated that most stakeholders believed the committees had balanced membership. For example, all of the FCC bureau chiefs and designated federal officers we contacted, as well as the committee chairmen, said they believe committee membership is balanced. Further, 88 percent of the committee members who responded to our survey agreed that members represent divergent points of view. FCC commissioners were more split on this issue. FCC’s Chairman and one commissioner stated that the committees are adequately balanced, while two others stated the committees are not always inclusive of varied interests. Of the trade and interest groups we contacted, five said they believed the advisory committees had balanced membership.
For example, one trade group representative said FCC tries to be very inclusive with the committees’ membership, and another said FCC goes out of its way to ensure the committees are balanced. However, six of the trade group representatives we contacted did not believe the advisory committees were balanced. Of those with this view, four said that the committees had too many industry representatives, one said the committees did not have enough consumer representation, and one said the committees lacked geographical and ethnic diversity. According to FCC’s Chairman, the Commission has gone to great lengths to ensure advisory committee membership is fairly balanced regarding the points of view represented and the functions performed. He stated that the advisory committees’ members represent a broad array of service and equipment providers of all sizes, as well as trade organizations and members of the academic community. Committee membership information contained in the FACA database and collected from FCC’s designated federal officers indicates that FCC attempted to draw committee membership from many facets of the telecommunications industry. For example, FCC reported that members for one advisory committee represent large and small telecommunications consumers, local and interstate carriers, state regulators, equipment and software manufacturers, satellite companies, cable companies, Internet service providers, wireless companies, and research organizations. According to FCC, members for another advisory committee were selected to represent a broad and balanced viewpoint, with members from nonprofit consumer and disability advocacy organizations, industry, underserved populations, Native Americans, and private citizens. Membership for another advisory committee is completely open—meaning any interested party can participate in committee activities.
Membership designation: While membership must be balanced, federal agencies generally have a reasonable amount of discretion to appoint members to serve on committees. Agencies also have discretion to determine what type of advice the advisory committee members are to provide. Members of advisory committees may be appointed as “representatives,” which means they are providing “stakeholder advice” or advice reflecting the views of the entity or interest group they are representing (such as industry, labor, or consumers). Committee members may also be appointed as “special government employees,” which means the agency appoints them with the expectation that they will provide advice on the basis of their best judgment. The Office of Government Ethics distinguishes between special government employees and representative members. Committee members appointed as special government employees who are not representative members are expected to be impartial and are subject to conflict-of-interest rules administered by the Office of Government Ethics. Committee members designated merely as representative are viewed by the Office of Government Ethics as having been appointed to represent a particular and known viewpoint, and thus are not subject to the same ethics review. Consistent with guidance provided by the Office of Government Ethics, GSA officials told us that GSA cannot control how agencies designate their members, but they generally said that if an agency is looking for a committee member to provide his or her expert advice, the member should be designated as a special government employee; if the member is to provide the views of an outside entity, the member should be designated as representative. FCC has designated all current members of its federal advisory committees as representatives. 
The FCC official who is responsible for determining whether proper designations are made—the designated agency ethics official—told us he discusses member designations with the committees’ designated federal officers, but he generally does not review any documents to determine what type of advice the member is expected to provide. The ethics official said that it is a long-standing tradition at the Commission to appoint all members as representative. The ethics official also told us there is an emphasis at FCC for members to provide the representative positions of groups, given the nature of the industry, which makes the representative designation more appropriate. FCC’s designation of all committee members as representatives suggests an expectation that all of the members would contribute the opinion of the organization, company, or institution that they represent. However, we found that for some members it is unclear what interests they should be representing on the advisory committees because they do not directly work within the telecommunications industry. Rather, these members—who comprised almost 13 percent of our survey respondents—are affiliated with universities or private consulting companies. Of the six survey respondents who work for universities, five reported that they only provide their own expert advice and not advice that represents the position of a particular group. Similarly, nearly half of those who work in private consulting reported that they only provide their own expert advice. Additionally, we found that—even for those members who do work for entities within the telecommunications industry—there might be confusion for some of them about the type of advice they were expected to provide to the committee. About 13 percent of the respondents who work for private businesses reported that they do not view themselves as providing representative advice, despite being designated by FCC as representative members.
All told, only 78 percent of the survey respondents said they provide the opinion of the organization, company, or institution that they represent. A majority of the 22 percent of respondents who did not view their advice as representative said that they provided advice based on their own expert opinion. These results suggest two points regarding the designation of members and their understanding of their advisory roles. First, if certain members—such as those affiliated with universities or who work in private consulting— were appointed to provide their best professional judgment rather than the representative position of a particular group, they might be more appropriately appointed as special government employees. If members are so designated, they would be subject to the Office of Government Ethics rules for special government employees. Second, some members who FCC has designated as representatives do not believe they are contributing the advice of the organization, company, or institution that they were selected to represent. As such, these members may not fully understand what role they were appointed to play on the advisory committee. In our recent report on federal advisory committees, we recommended that GSA issue guidance stating that agencies should specify in the appointment letters to committee members whether they are appointed as special government employees or as representatives. We further recommended that for those appointed as representative members, the entity or group that they are to represent should be noted in the letter. GSA and the Office of Government Ethics provided formal statements to us that outline actions they have taken and plans they are developing to address our report recommendations. For example, the Office of Government Ethics issued additional guidance, dated July 19, 2004, which discusses the distinction between representative committee members and special government employees. 
GSA officials told us that they consulted with the Office of Government Ethics and modified their training on the matter of representative versus special government employee designations. We found that, in its appointment letters, FCC was already generally telling advisory committee members that they were to provide representative advice on behalf of their employer. However, for those members affiliated with universities, law firms, or consulting firms who are told to provide advice on behalf of such entities, the underlying viewpoint on telecommunications issues that the member is expected to represent is not clear, because such institutions generally do not have an obvious viewpoint on telecommunications issues. While FCC may have selected these individuals to represent particular telecommunications viewpoints, those viewpoints are not specifically stated in the appointment letter. That is, naming the institution to be represented might not always make clear the viewpoint to be represented.

Requirements Guide Many Aspects of Advisory Committee Operations

Regarding advisory committee operations, FACA generally requires committee meetings to be open to the public. Also, GSA regulations provide principles that agencies should apply to their management of advisory committees, including (1) supplying support services for their committees, (2) seeking feedback from advisory committee members on the effectiveness of committee activities, and (3) communicating to the committee members how their advice has affected agency programs and decision making.

Openness: FACA requires agencies to announce committee meetings ahead of time and give notice to interested parties about such meetings. With some exceptions, the meetings are to be open to the public, and agencies are to prepare meeting minutes and make them available to interested parties.
During our review, we found that FCC provided adequate notice of meetings, held open meetings, and prepared minutes in accordance with the act for all of its advisory committees.

Support services: FACA and GSA regulations specify that agencies should provide support services for their committees. According to the designated federal officers, FCC typically provides meeting facilities and administrative and logistical support for the committees. At the time of our review, most of the designated federal officers, as well as the deputy committee management officer, said the committees had sufficient resources to effectively conduct committee operations. While each advisory committee has a budget, we found the funds were allocated for FCC staff, both professional and administrative. FCC does not pay any travel-related costs for committee members to attend meetings. As shown in table 1, the majority of committee members surveyed were satisfied with the support provided by FCC.

Communications: GSA regulations also state that agencies should (1) seek feedback from advisory committee members on the effectiveness of committee activities and (2) communicate to committee members on a regular basis how their advice has affected agency programs. We found FCC does not have a formal process whereby it requests feedback from committee members about the committees’ activities, nor does FCC formally track how the committees’ advice and recommendations have been considered and provide this information to committee members. However, most of the designated federal officers said they periodically discuss committee issues with members. One designated federal officer told us he believes the communication with his committee members is sufficient because the members are willing to serve on the committee again.
Nonetheless, according to our survey, communication between FCC and committee members was one of the few aspects of committee operations about which members expressed some concern. As shown in table 1, less than 47 percent of survey respondents were satisfied with how FCC communicated to members how the Commission would use their advice. In response to our survey of committee members, we received 10 comments indicating there is limited communication between FCC and the advisory committees. For example, one committee member said FCC should provide a clear disposition for each recommendation presented to it. Another member said he presumes his respective committee’s advice is helpful to the Commission, but would like more feedback on whether the advice is actually used.

Federal Advisory Committees Are Required to Produce Independent Advice to Federal Agencies

The advice and recommendations of federal advisory committees must be independent of influence by the entity that created the advisory committee, or in this case, FCC. We found that the advice and recommendations provided by FCC’s committees are generally considered by stakeholders to reflect the independent judgment of committee members. The majority of FCC’s commissioners believe that the federal advisory committees provide independent advice and recommendations. However, one commissioner suggested that committee independence could be improved, while another stated that independence varies by committee. In addition, all of the designated federal officers and bureau chiefs who have responsibility for FCC’s federal advisory committees agree that the advisory committees provide advice and recommendations that are independent of agency influence.
Among committee members who responded to our survey, 89 percent stated that they believed their committee is at least moderately independent of FCC, while approximately 7 percent stated that they believed their committee is only a little or not at all independent of FCC. Of the trade and interest groups we contacted, six believed that committees’ advice and recommendations are independent of FCC, and four others stated that independence varies based on the committee or the issues being addressed. Only 1 of the 12 trade groups responded that the committees’ advice or recommendations are not independent.

FCC Has Taken Action on Advisory Committee Recommendations, but Stakeholders’ Views on FCC’s Use of Committee Work Varied

While most of the stakeholders we contacted agreed that the advisory committees produce quality work, views on FCC’s implementation of the committees’ advice and recommendations varied. While FCC is not required to implement the advice and recommendations of its advisory committees, in general, the FCC bureau chiefs and designated federal officers were more satisfied with how FCC uses the committees’ work than other stakeholders. FCC’s Chairman stated that the Commission implements the advisory committees’ recommendations in various ways, such as incorporating recommendations into regulations or, less formally, by publicizing committee work at trade shows or other public events. One commissioner agreed with this view, further stating that FCC always takes the committees’ advice and recommendations seriously. However, another commissioner said FCC should establish a more formal process for the entire Commission to consider committee recommendations, and still another commissioner said FCC should give “due attention” to each committee submission, regardless of the subject matter. We found FCC does not have a formal process for tracking advisory committee recommendations.
The deputy committee management officer told us that, as a result of our review, FCC plans to improve the accountability of the advisory committee process by requiring that committee recommendations be tracked; however, as of September 2004, FCC had not taken any action on developing such a tracking system. In our survey, 83 percent of respondents believed FCC was receptive to their advice from a moderate to very great extent. However, only 54 percent of responding members were satisfied with the extent to which FCC takes the committees’ advice into account when developing policy. Another 27 percent were neither satisfied nor dissatisfied, and more than 8 percent were dissatisfied or very dissatisfied with the extent to which FCC takes the committees’ advice into account when developing policy. As part of our survey, we received comments from 19 of 200 survey respondents who were dissatisfied with FCC’s use of their committees’ advice. In general, the members who provided comments were dissatisfied because they believed that FCC (1) does not provide feedback about how the committee’s recommendations are used, (2) does not take action on the committee’s recommendations, (3) has a predetermined agenda, or (4) uses the advisory committees as “window dressing.” Further, only 5 of the 12 trade and interest groups we contacted believed FCC actually uses the committees’ advice and recommendations. Three others stated that the committees’ advice and recommendations have little influence on FCC actions. The FCC bureau chiefs and designated federal officers we contacted were more satisfied than were committee members with how FCC uses the committees’ advice. For example, one bureau chief said FCC always considers the recommendations received from his advisory committee, and the designated federal officers for five of the advisory committees said they believe FCC would implement their committees’ recommendations.
To demonstrate how the Commission implements the advice and recommendations of the advisory committees, FCC officials provided the following examples:

An FCC bureau chief said that FCC adopted, as rules, many of the recommendations made by the North American Numbering Council. For example, the committee provided advice and recommendations on implementation issues associated with local number portability, which allows consumers to keep their telephone numbers when switching from one telecommunications carrier to another. This process has been in place for wireline consumers since 1997, and it is now available for wireless consumers. An FCC commissioner agreed, stating the North American Numbering Council has provided invaluable expertise in support of FCC’s policies relating to telephone numbering, including local number portability.

The designated federal officer for the Advisory Committee for the 2003 World Radiocommunication Conference said the committee recommended a total of 41 draft preliminary views, of which 35 became U.S. preliminary views, and a total of 41 draft proposals, of which 28 became U.S. proposals.

The designated federal officer for the Media Security and Reliability Council said the committee produced over 100 best practices recommendations oriented toward the media industries. To support industry adoption of those best practices recommendations aimed at media, FCC said it developed an outreach brochure describing the best practices and arranged for 13,000 copies to be distributed directly by FCC field offices and at conventions held by the National Association of Broadcasters and the National Cable and Telecommunications Association. Also as a result of the Media Security and Reliability Council’s work, the deputy designated federal officer told us FCC issued a notice of proposed rulemaking regarding the Emergency Alert System.
According to an office chief, the Technological Advisory Council generated several of the ideas that led to the Spectrum Policy Task Force Report, which formed the basis for several of the Commission’s most important forward-looking initiatives. FCC Has Five Advisory Groups That It Considers Exempt from FACA; These Groups Function Differently from Federal Advisory Committees FCC has five advisory groups that it considers exempt from FACA. Two of these advisory groups are “joint boards” that FCC is statutorily mandated to create. FCC also established two groups referred to as “joint conferences” that are designed to advise the agency on certain issues over which FCC has regulatory jurisdiction. Both the joint boards and conferences function differently from FCC’s federal advisory committees in large part because they are not considered to be subject to FACA requirements. FCC also created an additional committee, the Intergovernmental Advisory Committee, which, although exempt from FACA, functions similarly to federal advisory committees in some respects. FCC’s Mandated Joint Boards Are Exempt from FACA and Function Differently from Federal Advisory Committees FCC is mandated to support two joint boards: one addresses “jurisdictional separations” and the other examines “universal service requirements.” FCC told us it considers these joint boards to be exempt from FACA because the Unfunded Mandates Reform Act of 1995 exempts groups that are composed wholly of federal, state, local, or tribal government officials. FCC established one board, called the Joint Board on Jurisdictional Separations, in 1980 to make recommendations regarding cost allocations that are part of the determination of telephone rates. FCC created the other board, called the Joint Board on Universal Service, in 1996 to implement the Telecommunication Act’s universal service provisions. 
The joint boards convene three times per year at the National Association for Regulatory Utility Commissioners meetings, which are held in different locations across the country. Because the meetings of the boards are held in varied locations and the members come from dispersed areas, each board receives a small budget from FCC to cover travel costs. The establishment and operations of the joint boards differ greatly from those of federal advisory committees that are subject to FACA requirements. Some of these differences are driven by requirements in the statute for the joint boards, and some are due to the fact that the joint boards are not subject to FACA requirements. For example: The type of membership for the joint boards is specified in statute. Section 410(c) of the Communications Act requires the boards to have three FCC commissioners and four state commissioners serve as members. The Joint Board on Universal Service is also required to have one state consumer public advocate member. FCC nominates the FCC commissioners. The National Association for Regulatory Utility Commissioners nominates the state officials, and the National Association of State Utility Consumer Advocates nominates the state consumer public advocate. FCC makes the final selections of joint board members. Conversely, federal advisory committees following the requirements of FACA must have balanced membership, but the mission of each committee determines who will be selected to serve as members. The meetings of the joint boards are closed to the public. However, the boards also hold public hearings once or twice per year to collect information. In contrast, as we noted earlier, meetings of federal advisory committees must generally be open to the public. The joint boards have no charter or bylaws guiding their operations. 
Federal advisory committees, on the other hand, operate with a charter for a 2-year term, and must operate according to a set of specific procedural requirements. FCC must respond to the recommendations of the Joint Board on Universal Service. In contrast, FCC is not required to respond to the advice or recommendations of a federal advisory committee. The Commission can implement a recommendation of a federal advisory committee or reject it without comment. FCC’s Joint Conferences Are Exempt from FACA and Function Differently from Federal Advisory Committees FCC also established two joint conferences that it considers exempt from FACA. Under the Communications Act, FCC is authorized to confer with state regulatory commissions on telecommunications accounting issues, as well as other issues, over which it has regulatory jurisdiction. As a result, FCC established the Joint Conference on Accounting Issues in 2002 to review the possible need for changes to FCC’s regulatory accounting rules. FCC created the Joint Conference on Advanced Telecommunications Services in 1999 to assist in the deployment of advanced telecommunications capability, such as high-speed Internet, to all Americans. Similar to the joint boards, the joint conferences convene in different locations across the country at the National Association for Regulatory Utility Commissioners meetings, and the Joint Conference on Accounting Issues receives a small budget from FCC to cover travel costs. As with the joint boards, the establishment and operations of the joint conferences are different from those of federal advisory committees that operate under FACA requirements. However, many of the operations of the joint conferences are similar to those of the joint boards. For example, as with the joint boards, the meetings of the joint conferences are closed to the public, and no charter or bylaws guide their operations. Several aspects distinguish the joint conferences from the joint boards. 
For example: FCC said that unlike the joint boards, there are no statutory guidelines determining nominations for joint conferences; FCC entirely chooses the membership. At this time, the Joint Conference on Accounting Issues has two federal regulatory commissioners and five state regulatory commissioners who serve as members. For the Joint Conference on Advanced Telecommunications Services, membership currently includes all five FCC commissioners and six state commissioners. FCC does not believe it is required to respond to or implement the recommendations of the joint conferences; rather, it can implement a recommendation or reject it without comment. This is similar to FCC’s use of federal advisory committee advice or recommendations, but contrasts with FCC’s responsibilities regarding the advice and recommendations of the Joint Board on Universal Service. FCC’s Intergovernmental Advisory Committee Is Exempt from FACA, but It Has Some Similarities in Function to Federal Advisory Committees FCC formed the Intergovernmental Advisory Committee in 1997 to advise the agency on issues of concern to state, local, and tribal governments. This committee provides ongoing advice and information to FCC on a broad range of telecommunications issues including, but not limited to, rural issues, homeland security, facilities siting, broadband access, barriers to competitive entry, and public safety communications for which FCC explicitly or inherently shares responsibility or administration with local, county, state, or tribal governments. The Intergovernmental Advisory Committee holds meetings in Washington, D.C., four times per year, but unlike the federal advisory committees, its meetings are closed to the public. In addition, FCC allocates no funds to the Intergovernmental Advisory Committee in support of its activities. 
Despite these differences in the Intergovernmental Advisory Committee’s operations relative to federal advisory committees, other aspects of its establishment and operations closely mirror those of federal advisory committees, even though it is not considered to be subject to FACA requirements. For example: The committee has a charter guiding its objectives and operations for each 2-year term. Its membership is determined at the discretion of FCC. FCC solicits members through a public notice for nominations and then selects members. As specified in its charter, the committee has 15 members: 5 state government representatives, 7 local government representatives, and 3 representatives from tribal governments. FCC is not required to respond to or implement the advice or recommendations of the committee. FCC can implement a recommendation or reject it without comment. Conclusions FCC’s federal advisory committees address important telecommunications issues, and stakeholders generally view the committees’ work as beneficial and useful to the Commission. The advisory committees generally follow the rules and requirements prescribed by FACA, which ensures the committees’ activities are transparent and accessible to the public. However, because FCC does not have a formal process for determining and documenting committee member designations, it appears that some of FCC’s advisory committee members are not clear about the type of advice the Commission expects them to contribute to their committees. Despite being designated as representatives, some members responded to our survey that they do not contribute the opinions of the organization, company, or institution they represent, but rather contribute their own expert advice, a role closer to what the Office of Government Ethics and GSA describe as that of a member who would typically be appointed as a special government employee. 
The confusion about the role of committee members may be particularly at issue for members who do not directly work within the telecommunications industry, such as those affiliated with a university, a law firm, or in private consulting. When members are designated as special government employees, they are subject to the Office of Government Ethics rules that apply to special government employees, which include conflict-of-interest reviews. Recommendation for Executive Action To better ensure that FCC’s federal advisory committee members are fully informed about the advice they are being asked to provide, we recommend that FCC establish a process for determining and documenting the type of advice federal advisory committee members are expected to contribute. FCC should appoint advisory committee members to serve as representatives only after making a clear determination of what interests those members are expected to represent on the committee. Committee members who are not representing a specific interest or viewpoint may be more appropriately appointed as special government employees. For representative members, FCC should specifically state in their appointment letters what particular interest those members are appointed to represent. This statement will be especially important for the committee members affiliated with universities, law firms, or private consulting firms, since it is not always clear or transparent what interests FCC would like them to represent. Agency Comments We provided a draft of this report to FCC, GSA, and the Office of Government Ethics for their review and comment. In its response, FCC agreed with our recommendation and noted that future appointment letters for representative committee members would make clear the specific underlying viewpoint, interest group, or segment of the community that the member is expected to represent. FCC also provided technical comments that we incorporated into the report as appropriate. 
GSA did not provide written comments but agency officials told us they agree with our recommendation, saying it would be helpful to the federal advisory committee designation process for agencies to clearly identify for representative members who, organizationally for example, they are expected to represent on the committee. In its comments to us, the Office of Government Ethics agreed with our findings and recommendation. Written comments from FCC and the Office of Government Ethics are provided in appendixes IV and V, respectively. We will provide copies of this report to interested congressional committees; the Chairman, FCC; and other interested parties. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov or Amy Abramowitz at (202) 512-2834. Major contributors to this report include Bert Japikse, Jean McSween, Erica Miles, Sally Moino, and Tina Sherman. Objectives, Scope, and Methodology As requested by the House of Representatives Committee on Government Reform, the objectives of this report are to provide information on (1) the Federal Communications Commission’s (FCC) current federal advisory committees and how they operate, (2) the extent to which FCC’s advisory committees follow applicable laws and regulations, (3) how FCC makes use of the advisory committees’ advice or recommendations, and (4) other advisory groups established at FCC that are not characterized as Federal Advisory Committee Act (FACA) committees and how they operate. To respond to the first objective on FCC’s current federal advisory committees and how they operate, we obtained the charters and other documents on FCC’s active advisory committees to determine the committees’ missions, charter dates, frequency of meetings and estimated operating costs. 
We gathered additional information from the FACA database maintained by the General Services Administration (GSA), such as committee member lists and FCC statements regarding how the committees achieved balance. Based on audit work completed for a prior GAO report, we determined that the data from the FACA database were sufficiently reliable for our purposes. We reviewed information on FCC’s Web site relating to the advisory committees, as well as the Web sites established by the advisory committees. We discussed committee operations with FCC officials, including the committees’ designated federal officers and the advisory committee liaison, as well as with the current committee chairmen. To further document how advisory committees operate, we attended one committee meeting in person and viewed another meeting via the Internet. To obtain committee members’ perspectives regarding advisory committee operations and effectiveness, we developed and administered a Web-based survey. From January 20, 2004, through February 6, 2004, we conducted a series of pretests with FCC’s advisory committee chairmen and members to help further refine our questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. Upon completion of the pretests and development of the final survey questions and format, we sent an announcement of the upcoming survey to 282 FCC advisory committee members, including the committee chairmen, on March 17, 2004. They were notified that the survey was available online on March 24, 2004. We sent follow-up e-mail messages to nonrespondents as of April 15, 2004, and then attempted to contact those who had not completed the survey. The survey was available online until June 11, 2004. Of the population of 282 members who were asked to complete the survey, we received 200 completed surveys for an overall response rate of 71 percent. 
The practical difficulties of conducting surveys may introduce errors commonly referred to as “nonsampling error.” For example, questions may be misinterpreted, and respondents’ answers may differ from those of committee members who did not respond to the survey. To minimize nonsampling error, we pretested the survey and conducted numerous follow-up contacts with nonrespondents. In addition, steps were taken during data analysis to further minimize error, such as performing computer analyses to identify inconsistencies and completing a review of the data analysis by an independent reviewer. The survey and its results can be found in appendix III. In addition to the survey, we interviewed 13 members from FCC’s federal advisory committees that operated under the preceding presidential administration to obtain their perspectives on the advisory committee process and to determine if operations had changed over time. We asked them to respond to a set of questions very similar to those asked of current members on the Web-based survey. The members that we interviewed had served on the following FCC advisory committees: two members from the National Advisory Committee; two members from the North American Numbering Council; two members from the Public Safety National Coordination Committee; four members from the Technological Advisory Council; one member from the Advisory Committee for the 1999/2000 World Radiocommunication Conference; and two members from the Network Reliability and Interoperability Council. To respond to the second objective on whether FCC’s advisory committees are following applicable laws and regulations, we reviewed the FACA legislation as well as GSA’s regulations regarding federal advisory committee management and identified the key requirements that FCC must follow. 
To determine if FCC complied with these requirements, we reviewed relevant documentation relating to FCC’s efforts to meet the requirements, such as Federal Register notices announcing the committee meetings, information reported to GSA, and the committee charters. Through our survey, we also obtained committee member views on aspects of FCC’s implementation of FACA and GSA requirements. To determine what management controls FCC established over the advisory committee process, we interviewed the deputy committee management officer and obtained a copy of FCC’s advisory committee guidelines. We interviewed FCC’s designated agency ethics official to determine FCC’s process for designating its committee members as representatives. We also interviewed officials with the Office of Government Ethics and GSA to understand their role in the oversight of advisory committees at FCC, especially regarding membership designation. To respond to the third objective on how FCC makes use of the advisory committees’ advice and recommendations, we analyzed survey responses from current committee members and responses from FCC officials to obtain their views on FCC’s use of committees’ advice and recommendations. We requested that FCC’s designated federal officers verify information found on the FACA database relating to committee recommendations in fiscal year 2003. Specifically, we asked them to verify that the data found on the database were correct and to describe in general terms the reasons why the recommendations were or were not implemented by FCC. We asked FCC’s five commissioners and the bureau and office chiefs who have responsibility over the advisory committees to comment in writing about (1) the quality and usefulness of advisory committee advice or recommendations, (2) the extent to which the committees’ advice or recommendations are independent of FCC, and (3) their views on how FCC implements the advice or recommendations from the advisory committees. 
We received responses from the five FCC commissioners and the chiefs of the following FCC bureaus and office: Consumer and Governmental Affairs Bureau, which is responsible for the Consumer Advisory Committee; International Bureau, responsible for the Advisory Committee for the World Radiocommunication Conference; Wireline Competition Bureau, responsible for the North American Numbering Council; Media Bureau, responsible for the Media Security and Reliability Council; and Office of Engineering and Technology, responsible for the Network Reliability and Interoperability Council and the Technological Advisory Council. Also in response to the third objective, we interviewed 12 trade and interest group representatives to obtain the perspectives of stakeholders outside of federal government regarding the quality and usefulness of the advisory committees’ work. The following groups responded to our questions about FCC’s federal advisory committees: (1) the Cellular Telecommunications and Internet Association, (2) the National Association of Broadcasters, (3) Consumer Federation of America, (4) Consumers Union, (5) American Council for the Blind, (6) Media Access Project, (7) the National Association of Regulatory Utility Commissioners, (8) the National Association of Telecommunications Officers and Advisors, (9) the National Cable and Telecommunications Association, (10) the National Indian Telecommunications Institute, (11) the Satellite Broadcasting and Communications Association, and (12) the Telecommunications Industry Association. To respond to the fourth objective on other advisory groups established by FCC that are exempt from FACA, we interviewed five FCC officials assigned to the non-FACA advisory groups as well as two state public service commissioners’ staff that are affiliated with the joint boards. 
We obtained documentation from FCC’s Office of General Counsel concerning the formation of the joint boards, joint conferences, and the Intergovernmental Advisory Committee, and the basis for their exemption from FACA requirements. We also contacted the following 10 trade and interest group representatives to determine if they had any issues or concerns with the operations of FCC’s non-FACA advisory groups: (1) the Cellular Telecommunications and Internet Association, (2) the National Association of Broadcasters, (3) Consumers Union, (4) Media Access Project, (5) the National Association of Regulatory Utility Commissioners, (6) the National Association of Telecommunications Officers and Advisors, (7) the National Cable and Telecommunications Association, (8) the National Indian Telecommunications Institute, (9) the Satellite Broadcasting and Communications Association, and (10) the Telecommunications Industry Association. We conducted our review from November 2003 through September 2004 in accordance with generally accepted government auditing standards. Information on FCC’s Federal Advisory Committees and Advisory Groups Exempt from FACA This appendix provides information on FCC’s seven federal advisory committees and on the five advisory groups that FCC considers exempt from FACA, including two joint boards, two joint conferences, and the Intergovernmental Advisory Committee. Advisory Committee for the 2007 World Radiocommunication Conference Purpose of the committee: To provide FCC with advice, technical support, and recommended proposals for the 2007 World Radiocommunication Conference. Effective date of charter: May 31, 2004 (2-year charter). Committee meetings: Held in Washington, D.C., at least 4 times per year, open to the public. Number of members: 69 (all representing members). 
Steps taken to select members for the committee: According to the committee’s designated federal officer, membership on the committee is open, and FCC issues a public notice asking all interested parties to be a part of the committee. The designated federal officer also stated that FCC’s Chairman makes the final determination on the committee chairman and the leadership of the subcommittees. How the committee achieves balanced membership: According to FCC, the committee has an open membership and includes representatives of competing industry sectors as well as government agencies and scientific and technical organizations. See figure 3 for the primary employment sectors of committee members who responded to our survey. Type of output: According to the designated federal officer, the committee develops preliminary views and proposals to assist in drafting the U.S. position for the World Radiocommunication Conference. The FCC bureaus review the preliminary views to determine if they agree or disagree with the position. Fiscal year 2004 estimated annual operating costs for staff and overhead: $105,500. Current subcommittees: The committee has five informal working groups addressing (1) issues related to the terrestrial and space science services; (2) issues involving the satellite services, including those related to high altitude platform stations; (3) international mobile telephone and 2.5 gigahertz sharing issues; (4) issues concerning the broadcasting and amateur services; and (5) regulatory issues. Advisory Committee on Diversity for Communications in the Digital Age Purpose of the committee: To make recommendations to FCC regarding policies and practices that will further enhance the ability of minorities and women to participate in the telecommunications and related industries. Effective date of charter: September 2, 2003 (2-year charter). Committee meetings: Held in Washington, D.C., a minimum of 2 times per year, open to the public. 
Number of members: 26 (all representing members). Steps taken to select members for the committee: According to the designated federal officer, when FCC’s Chairman announced the formation of the committee, interested individuals volunteered to serve on the committee. The designated federal officer said that FCC had an idea of people it wanted to serve on the committee and also contacted congressional staff to obtain their input on people who were qualified to serve. The designated federal officer said that FCC had a large list of potential members, but decided to limit the number of members to around 25. How the committee achieves balanced membership: According to FCC, membership is solicited from all facets of the telecommunications industry, including representation from the industry's financial and technical sectors. See figure 4 for the primary employment sectors of committee members who responded to our survey. Type of output: The committee will make recommendations to the Commission. For example, on June 14, 2004, the committee made recommendations on the use of tax policy to promote opportunity and on the expansion of FCC’s rule-based incentives to promote opportunity for socially disadvantaged persons. Fiscal year 2004 estimated annual operating costs for staff and overhead: $10,000; according to the designated federal officer, this amount does not cover FCC staff’s cost and will be adjusted in the future to include those costs. Current subcommittees: (1) Career Advancement, (2) Financial Issues, (3) New Technologies, and (4) Transactional Transparency and Related Outreach. Consumer Advisory Committee Purpose of the committee: To make recommendations to FCC regarding consumer issues within the jurisdiction of the Commission and to facilitate the participation of consumers (including people with disabilities and underserved populations, such as Native Americans and persons living in rural areas) in proceedings before the Commission. 
This committee was formerly called the Consumer/Disability Telecommunications Advisory Committee. Effective date of charter: November 20, 2002 (2-year charter). Committee meetings: Held in Washington, D.C., a minimum of 2 times per year, open to the public. Number of members: 35 (all representing members). Steps taken to select members for the committee: According to the committee’s designated federal officer, FCC released a public notice soliciting nominations and received over 100 responses to the public notice. To determine the representation of the nominations received, the designated federal officer stated that FCC prepared a spreadsheet showing all nominations. Also, FCC legal staff, the chief of the Consumer and Governmental Affairs Bureau, the committee chairman, and the designated federal officer reviewed all the nominations and forwarded names to the FCC chairman, who made the final decisions about membership. How the committee achieves balanced membership: According to FCC, the committee is composed of members from both the private and public sectors, including nonprofit consumer and disability advocacy organizations, industry, underserved populations, Native Americans, and private citizens serving in a representative capacity. Members were selected to represent a broad and balanced viewpoint so that the voices of the Commission’s many constituencies can be heard. See figure 5 for the primary employment sectors of committee members who responded to our survey. Type of output: The committee will make recommendations to the Commission. For example, the Consumer Advisory Committee made recommendations in fiscal year 2003 (1) supporting the creation of a national “do not call” list that is easily accessible to consumers; (2) urging the Commission to promote consistency and uniformity in federal and state regulations of telemarketing practices; and (3) urging the Commission to increase enforcement actions against deceptive practices in telemarketing. 
Fiscal year 2004 estimated annual operating costs for staff and overhead: $307,775. Current subcommittees: (1) Consumer Education, Outreach, and Complaints; (2) Broadband; (3) Ancillary Services; and (4) Telecommunications Relay Services. Media Security and Reliability Council Purpose of the committee: To provide members of the broadcast and multichannel video programming distribution industries the opportunity to make recommendations to FCC and their industries that, when implemented, will ensure optimal reliability, robustness, and security of broadcast and multichannel video programming distribution industries’ facilities. These recommendations will be based on, among other things, homeland defense and security considerations and will take into account all reasonably foreseeable circumstances. This will encompass ensuring the security and sustainability of broadcast and multichannel video programming distributor facilities throughout the United States; ensuring the availability of adequate transmission capability during events or periods of exceptional stress due to natural disaster, man-made attacks, or similar occurrences; and facilitating the rapid restoration of broadcast and multichannel video programming distributor services in the event of disruptions. Effective date of charter: March 26, 2004 (2-year charter). Committee meetings: Held in Washington, D.C., a minimum of 2 times per year (estimated total meetings: 4), open to the public. Number of members: 40 (all representing members). Steps taken to select members for the committee: According to the committee’s designated federal officer, FCC approached all major players in broad-based media, such as satellite providers, cable companies, and television networks, as well as smaller companies and public interest groups, to serve as committee members. The designated federal officer told us that to be effective, the committee’s membership needed to reflect a public-private partnership. 
This FCC official further stated that the FCC chairman’s office and FCC’s Media Bureau were part of the selection process. How the committee achieves balanced membership: According to FCC, committee membership includes senior representatives from mass media companies, cable television and satellite service providers, trade associations, public safety representatives, manufacturers, and other related entities. The members were selected for their different areas of expertise and to represent a balanced viewpoint. See figure 6 for the primary employment sectors of committee members who responded to our survey. Type of output: The committee will have the opportunity to make recommendations to FCC and to the broadcast and multichannel video programming distribution industries. According to FCC, the committee developed best practices recommendations for media companies that help to ensure the continued operation of service in times of crisis and the effective communication of emergency information to the public. Fiscal year 2004 estimated annual operating costs for staff and overhead: $152,000. Current subcommittees: (1) Communications Infrastructure Security, Access and Restoration; and (2) Public Communications and Safety. Network Reliability and Interoperability Council Purpose of the committee: To partner with FCC, the communications industry, and public safety to facilitate enhancement of emergency communications networks, homeland security, and best practices across the burgeoning telecommunications industry. Effective date of charter: December 29, 2003 (2-year charter). Committee meetings: Held in Washington, D.C., a minimum of 3 times per year, open to the public. Number of members: 55 (all representing members). Steps taken to select members for the committee: According to the designated federal officer, FCC solicited certain firms and wanted participation from chief executive officers. 
The designated federal officer stated that a list of potential members is sent to the FCC chairman for approval. How the committee achieves balanced membership: According to FCC, the committee includes representatives of all segments of the telecommunications industry. Its members represent large and small telecommunications consumers, local and interstate carriers, state regulators, equipment and software manufacturers, satellite companies, cable companies, Internet service providers, wireless companies, and research organizations, among others. See figure 7 for the primary employment sectors of committee members who responded to our survey. Type of output: The committee will make recommendations to FCC and to the communications industry intended to improve telecommunications network robustness and reliability. Fiscal year 2004 estimated annual operating costs for staff and overhead: $202,000. Current subcommittees: (1) Enhanced 911, (2) Homeland security, (3) Network best practices, and (4) Broadband. North American Numbering Council Purpose of the committee: To advise FCC and to make recommendations that foster efficient and impartial administration of the North American Numbering Plan. The Council advises the Commission on numbering policy and technical issues, initially resolves disputes as directed by the Commission, and provides guidance to the North American Numbering Plan Administrator, the Local Number Portability Administrator, and the Pooling Administrator, as directed by the Commission. Effective date of charter: October 4, 2003 (2-year charter). Committee meetings: Held in Washington, D.C., approximately six meetings per year, open to the public. Number of members: 28 voting members, 27 alternate members (all representing). 
Steps taken to select members for the committee: According to the designated federal officer, members are invited from each sector of the telecommunications market, including wireless, trade, state representatives, carriers, incumbent local exchange carriers and competitive local exchange carriers. The designated federal officer said that members are asked to respond regarding their expertise and experience in the telecommunications world. Also, according to the official, members serving on the council from the previous charter were asked if they wanted to continue serving. How the committee achieves balanced membership: According to FCC, the committee balances membership by including representatives from every sector of the telecommunications industry, as well as members representing the North American Numbering Plan member countries, state regulators, and consumers. See figure 8 for the primary employment sectors of committee members who responded to our survey. Type of output: The committee will make recommendations to FCC that foster efficient and impartial administration of the North American Numbering Plan, and advise FCC on numbering policy and technical issues. Fiscal year 2004 estimated annual operating costs for staff and overhead: $234,000. Current subcommittees: (1) steering group, (2) number oversight working group, (3) legal expertise working group, (4) local number portability working group, (5) cost recovery working group, (6) industry numbering committee, (7) North American Numbering Plan expansion/numbering optimization, (8) abbreviated dialing for one call notification issues management group, (9) North American Portability Management limited liability corporation, (10) intermediate numbering/soft dial tone issue management group, (11) contamination threshold issues management group, and (12) universal service fund issues management group. 
Technological Advisory Council Purpose of the committee: To provide technical advice to FCC and address questions referred to it by the FCC chairman, the chief of the Office of Engineering and Technology, or the committee’s designated federal officer. Effective date of charter: November 25, 2002 (2-year charter). Committee meetings: Held in Washington, D.C., 3 to 5 times per year, open to the public. Number of members: 33 (all representing members). Steps taken to select members for the committee: According to the committee’s designated federal officer, FCC sought individuals with expertise but also accepted outside nominations. With the selection process narrowly focused, the designated federal officer and the committee chairman made the membership decisions. How the committee achieves balanced membership: According to FCC, members have been selected to balance the expertise and viewpoints that are necessary to effectively address the new technology issues that will be directed to the committee. Members are recognized experts in their fields and, for private sector companies, individuals who hold technical executive positions such as Chief Technical Officer or Senior Technical Manager. See figure 9 for the primary employment sectors of committee members who responded to our survey. Type of output: According to the committee’s designated federal officer, the committee does not make formal recommendations. Rather, its deliverables are in the form of presentations on emerging technologies that the chairman of FCC hears during the committee’s meetings. Fiscal year 2004 estimated annual operating costs for staff and overhead: $201,000. Current subcommittees: None. Joint Board on Jurisdictional Separations Purpose of the joint board: To make recommendations on apportioning regulated costs between the interstate and intrastate jurisdictions. Year of establishment: 1980. 
Meetings: Held at the National Association for Regulatory Utility Commissioners meetings in varying locations 3 times per year, closed to the public; occasional en banc meetings are held. Number of members: 7 (three federal commissioners and four state commissioners). Steps taken to select members for the joint board: FCC nominates the federal commissioners and the National Association for Regulatory Utility Commissioners nominates the state commissioners. FCC makes the final selections of joint board members. Type of output: The joint board makes recommendations to the Commission. One recommendation resulted in FCC establishing an interim “freeze” on the jurisdictional separations process. Budget for fiscal year 2003: FCC allocated $25,000 that is applied toward travel and other meeting costs. Joint Board on Universal Service Purpose of the joint board: To make recommendations to implement the universal service provisions of the Telecommunications Act. Year of establishment: 1996. Meetings: Held at the National Association for Regulatory Utility Commissioners meetings in varying locations 3 times per year, closed to the public; occasional en banc meetings are held. Number of members: 8 (three federal commissioners, four state commissioners, and one state consumer public advocate). Steps taken to select members for the joint board: FCC nominates the federal commissioners, the National Association for Regulatory Utility Commissioners nominates the state commissioners, and the National Association of State Utility Consumer Advocates nominates a state consumer public advocate. FCC makes the final selections of joint board members. Type of output: The joint board makes recommendations to the Commission. For example, a recommendation to FCC proposed modifications to the Lifeline/Link-Up program. Budget for fiscal year 2003: FCC allocated $50,000 that is applied toward travel and other meeting costs. 
Joint Conference on Accounting Issues Purpose of the joint conference: To review the possible need for changes to FCC’s regulatory accounting rules. Year of establishment: 2002. Meetings: Held at the National Association for Regulatory Utility Commissioners meetings in varying locations 3 times per year, closed to the public. Number of members: 7 (two federal commissioners and five state commissioners). Steps taken to select members for the joint conference: FCC nominates the federal commissioners and the National Association for Regulatory Utility Commissioners nominates the state commissioners. FCC makes the final selections of joint conference members. Type of output: The joint conference makes recommendations to the Commission. For example, a recommendation to FCC proposed revisions to Part 32 rules to include the reinstatement of certain accounts and the addition of several new accounts. Budget for fiscal year 2003: FCC allocated funds from joint board allocations for this conference. A total of $4,881 was applied toward travel and other meeting costs. Joint Conference on Advanced Telecommunications Services Purpose of the joint conference: To fulfill the promise of Section 706 of the Telecommunications Act of 1996. The joint conference shares ideas, gathers real-life stories from across the country, and assists the FCC in its reports to Congress on the deployment of advanced telecommunications services. Year of establishment: 1999. Meetings: Held at the National Association for Regulatory Utility Commissioners meetings in varying locations 3 times per year, closed to the public. Number of members: 11 (five federal commissioners and six state commissioners). Steps taken to select members for the joint conference: FCC nominates the federal commissioners and the National Association for Regulatory Utility Commissioners nominates the state commissioners. FCC makes the final selections of joint conference members. 
Type of output: The joint conference provides a forum for ongoing dialogue. The conference has held field hearings across the country to learn about the deployment of advanced telecommunications services. It also developed a report on broadband deployment in cooperation with the Florida Public Service Commission. Budget for fiscal year 2003: No FCC funds were specifically allocated to this joint conference. Intergovernmental Advisory Committee Purpose of the committee: To provide guidance to the Commission on issues of importance to state, local, and tribal governments. The Committee provides ongoing advice and information to the Commission on a broad range of telecommunications issues of interest to state, local, and tribal governments, including cable and local franchising, public rights-of-way, facilities siting, universal service, broadband access, barriers to competitive entry, and public safety communications, for which the Commission explicitly or inherently shares responsibility or administration with local, county, state, or tribal governments. Year of establishment: 1997 (the committee's original name was the Local and State Government Advisory Committee). Meetings: Held in Washington, D.C., 4 times per year, closed to the public. Number of members: 15 (five state government representatives, seven local representatives, and three representatives from tribal governments). Steps taken to select members for the committee: FCC released a public notice soliciting nominations and selected committee members from among the nominations. Type of output: Recommendations to the Commission. The committee recently filed comments as part of an FCC proceeding on Voice over Internet Protocol. Budget for fiscal year 2003: No FCC funds were allocated.

GAO Survey of FCC Federal Advisory Committee Members

The survey asked committee members the following questions. [Response-distribution tables, showing the percentage of respondents selecting each answer category, accompanied each question but could not be recovered from the source.]

Q1. How long have you been a member of the committee?
Q2. To the best of your knowledge, did you attain membership to the committee through any of the following circumstances?
Q3. In which of the following sectors do you primarily work?
Q5. Approximately how many people does your company employ?
Q6. Since your appointment, approximately how many committee meetings have you attended?
Q7. How important are the following factors in your decision to attend or not attend your committee's meetings?
Q8. How much time on a yearly basis do you devote to committee membership activities (including research and preparation for meetings, travel, and attending meetings)?
Q9. As a member, what type of advice do you contribute to the committee?
Q10. Would you agree or disagree with the following statements as they apply to the composition of your committee?
Q11. Who sets the agenda for your committee's meetings?
Q12. Do you believe the appropriate party or parties sets the committee's agenda?
Q13. As a committee member, do you generally have access to the information you need to make an informed decision on an issue?
Q14. Overall, how satisfied or dissatisfied are you with the following aspects of the operations and procedures of your committee?
Q15. In terms of formulating committee advice or recommendations, how independent do you believe the committee is of FCC?
Q16. In terms of formulating committee advice or recommendations, to what extent do you believe the committee maintains a balance of influence among various interest groups (such as industry, trade or consumer groups)?
Q17. Which of the following methods does your committee use to convey its advice or recommendations to FCC?
Q18. In your opinion, to what extent is the public provided opportunity to express its views to your committee?
Q19. To your knowledge, have members of the public (excluding FCC staff) ever expressed their views to the committee?
Q20. Does your committee have any subcommittees?
Q21. Were members selected to serve on your committee's subcommittees through any of the following methods?
Q22. What was the basis for selecting members to serve on subcommittees?
Q23. Have you been a member of any subcommittees?
Q24. How is the work of the subcommittees completed?
Q25. In your opinion, to what extent is the public provided an opportunity to express its views to your subcommittees?
Q26. To your knowledge, have members of the public ever expressed their views to the subcommittees?
Q27. Overall, how satisfied or dissatisfied have you been with the operation of your subcommittees?
Q28. In your experience—given the understanding that the full committee approves all subcommittee advice and recommendations—what is the balance of work between the full committee and subcommittees with regard to output?
Q29. To what extent does the committee deliberate the proposals of the subcommittees before they are voted upon?
Q30. Does your committee's work influence FCC policy or operations through any of the following mechanisms?
Q31. How satisfied or dissatisfied are you with the extent to which FCC takes your committee's advice and recommendations into account when developing policy or making changes in operations?
Q33. Are setting or changing voluntary industry standards an output of your committee? (Voluntary industry standards are those not mandated by FCC.)
Q34. How satisfied or dissatisfied are you with the effectiveness and impact of your committee to set or change voluntary industry standards?
Q36. Thinking over your entire tenure on the committee, to what extent would you characterize FCC as receptive to the advice and recommendations of your committee?
Q37. Thinking over your entire tenure on the committee, to what extent would you characterize industry as receptive to the advice and recommendations of your committee?
Q38. Overall, how satisfied or dissatisfied are you with your experience serving on the committee?
Q39. If invited, would you be interested in serving on this committee again?

Appendix V: Comments from the Federal Communications Commission
FCC has regulatory authority over many complex telecommunications issues. To obtain expert advice on these issues, FCC often calls upon its federal advisory committees, composed mostly of members from industry, private consulting, advocacy groups, and government. These committees must follow the Federal Advisory Committee Act (FACA), which sets requirements on the formation and operation of such committees. Because of congressional interest in how FCC receives advice from outside experts, this report provides information on (1) FCC's current advisory committees, (2) the extent to which the committees follow applicable laws, (3) how FCC makes use of the committees' advice, and (4) the non-FACA advisory groups that FCC has established. The Federal Communications Commission (FCC) has seven federal advisory committees established at its discretion that address various telecommunications issues. FCC officials, committee members, and other stakeholders we contacted generally believed FCC's advisory committees operated effectively. In forming and operating advisory committees, FCC must follow FACA and related regulations, which require, among other things, that committee membership be balanced in terms of points of view represented and that committee activities be transparent to the public. While FCC follows applicable requirements, GAO found that committee members are not always clear about their expected role on the committees, that is, the type of advice that FCC expects them to provide. FCC designates all of its committee members as "representatives," meaning they are appointed with an expectation that they will provide advice reflecting the views of a company, organization, or other group. However, approximately 22 percent of responding committee members did not say they provided representative advice. Further, some committee members are affiliated with universities or consulting firms that may not have an obvious telecommunications viewpoint. 
If committee members are expected to primarily provide their own expert opinion, they are expected to be impartial and may be more appropriately appointed as special government employees. Such members are subject to ethics rules administered by the Office of Government Ethics, including conflict-of-interest reviews. While FCC is not required to implement the advice or recommendations of its advisory committees, FCC has taken actions based on these committees' recommendations. Overall, GAO found FCC officials tended to be more satisfied with how FCC implements the committees' recommendations than other stakeholders, including committee members themselves. For example, of the committee members who responded to a GAO survey, only 54 percent were satisfied with the extent to which FCC takes the committees' advice into account when developing policy. Further, three trade groups we contacted said that the advisory committees' advice and recommendations have little influence on FCC actions. In addition to its seven federal advisory committees, FCC considers five advisory groups as exempt from FACA requirements, including two "joint boards," two "joint conferences," and the Intergovernmental Advisory Committee. FCC was mandated to establish the joint boards and created the joint conferences at its discretion. Since the joint boards and joint conferences are considered exempt from FACA, they function differently from FCC's federal advisory committees. FCC created the Intergovernmental Advisory Committee, which it also considers exempt from FACA, to address telecommunications issues affecting state, local, and tribal governments.
Background The Davis-Bacon Act, enacted in 1931, and related legislation require employers on federally funded construction projects valued at more than $2,000, or on federally assisted projects, to pay their workers, at a minimum, wages that the Secretary of Labor has determined to be “prevailing” for corresponding classes of workers on similar projects in the same locality. To carry out this mission, Labor administers surveys to construction contractors and third parties, such as representatives of unions and contractor associations, and asks them to provide wage and fringe benefit data on a form called the WD-10. Labor sets wages for four types of construction—building, residential, heavy, and highway—that it finds reflect current categories in the construction industry as well as the act’s requirement that wages for Davis-Bacon workers be commensurate with workers on “similar” projects. Labor’s survey coverage ranges from a county to an entire state, reflecting its implementation of the act’s requirement that prevailing wages represent those paid in the same locality. For example, surveys are typically conducted on a countywide basis for all construction types except highway, for which surveys are often conducted on a statewide basis. Labor generally issues area wage rates for specific job classifications or occupations, such as electricians, carpenters, and drywallers, to meet the act’s requirement that it set wages for “corresponding classes” of workers. Labor has implemented procedures to verify wage data submitted on the surveys to address problems related to data accuracy. In 1999, we reviewed these procedures and recommended specific changes to increase their impact on the accuracy of the wage determinations while reducing the time and cost to collect this information. See appendix I for a more detailed description of the wage determination process. 
BLS, the Labor component responsible for collecting, analyzing, and disseminating labor statistics, is providing data to WHD from its existing survey programs to allow WHD to evaluate whether those data can be used to set prevailing wages under the Davis-Bacon Act. BLS seeks to produce nationally representative employment and economic statistics that are timely and accurate. To do so, BLS has established key priorities, such as drawing representative samples, ensuring high response rates, and guaranteeing the confidentiality of survey respondents. In fiscal year 1997, BLS began collecting wage data through its Occupational Employment Statistics (OES) survey, which had until then collected only employment data. This mail survey, which comprises a sample of 1.2 million establishments, covers approximately 400,000 establishments each year and thus takes three yearly cycles to obtain data from the entire sample. BLS is also in the process of combining several surveys that produced local and national employment, wage, fringe benefit, and employment cost data into a single survey: the National Compensation Survey (NCS). By April 2001, BLS expects to survey over 30,000 establishments in 154 metropolitan and nonmetropolitan areas that represent all such areas in the United States. Initial data collection will involve BLS staff conducting on-site interviews and reviewing various payroll documents. According to BLS officials, although this sample will be sufficient to produce national estimates, BLS will be able to publish detailed data for only about half of the areas surveyed. Labor Has Initiated, but Not Completed, Efforts to Improve the Wage Determination Process In response to the conference report directive, Labor is currently testing a number of efforts under two tracks that it believes will improve the wage determination process. 
It expects that wage determinations would more accurately reflect prevailing wages if the wage survey process was improved through efforts that would, for example, increase survey participation and the timeliness of data collection and analysis. The earliest of these efforts began in 1996, with most scheduled for completion in fiscal year 2000. Labor will evaluate the results of these efforts and decide in fiscal year 2001 which track, or combination of efforts under both tracks, to implement. Labor informed the House Education and Workforce Committee in 1997 that it had selected these two tracks to test simultaneously: one track focuses on ways to redesign the current process WHD uses to collect and analyze survey data to set prevailing wage rates, while the other explores the use of BLS survey data as the basis for setting prevailing wages. Table 1 highlights selected major efforts under the redesign track; table 2 describes the efforts under the BLS track. Efforts under the redesign track seek to (1) improve survey data collection by, for example, redesigning the WD-10 survey form, making the form more accessible through a specially designated Internet web site, and using alternative methods to identify contractors and distribute surveys; and (2) enhance data analysis through such means as verifying wage data and developing technology to help identify inaccuracies in the data. WHD has tested or plans to test some of these efforts in two comprehensive surveys covering entire states and all four types of construction, which WHD traditionally has not done. For example, in the first survey, conducted in Oregon in 1998, WHD used state unemployment insurance (UI) data to identify additional construction establishments to survey. In the second survey, scheduled to begin in Colorado in June 1999, WHD plans to test technology, such as the use of imaging and scanning software, to facilitate data entry and analysis. 
Efforts under the BLS track have focused on using existing BLS surveys to obtain data on wage rates, fringe benefits, and the union affiliation of construction employees. According to WHD and BLS officials, BLS was selected as a possible alternative data source for a number of reasons, including BLS’ more comprehensive approach and expertise in collecting wage data compared with other potential sources, and its history of providing statistical information to others. Also, BLS already provides wage and fringe benefit data to WHD for the determination of prevailing wage rates under the Service Contract Act (SCA), which requires that individuals working in service occupations (such as janitors, security guards, or data processors) under contract to a federal employer be paid prevailing wages. SCA, however, has a more flexible concept of locality than Davis-Bacon, and many of these service contracts are nationwide in scope. As a result, under SCA, WHD uses a single national rate for several types of fringe benefits to determine prevailing wage rates, unlike Davis-Bacon, for which it must use fringe benefits paid in a given locality. BLS has undertaken three distinct efforts to collect or tabulate data on wage rates, fringe benefits, and union affiliation of construction employees for WHD. In regard to wage data, BLS is using its existing survey procedures and sampling frame to produce data for construction industries in local areas to allow WHD to evaluate the data’s usefulness in setting wage rate determinations. To collect data on fringe benefits and union affiliation, BLS conducted pilot surveys using existing survey procedures and sampling frames to test whether NCS and OES could obtain the necessary information. 
WHD and BLS officials agreed that no significant changes would be made to OES or NCS during this initial period, as these surveys had been recently revised (for example, adding the wage variable to the OES) or developed (for example, the NCS), and BLS did not know how additional changes to the surveys would affect their viability. As shown in tables 1 and 2, the first of these efforts—telephone verification of contractor and third-party wage data submissions—began in 1996, and some of the efforts have been completed or implemented, such as on-site verification, the use of automated printing and mailing operations, and the use of state UI data to identify construction establishments. However, most efforts are still being tested or are ongoing and not scheduled to be completed before fiscal year 2000. For example, results from the Oregon survey, which tested several of these efforts, will not be available until September 1999. Additionally, WHD does not expect to select a knowledge management software package before the end of fiscal year 1999. The development of one effort, computer-assisted telephone interviewing (CATI), which would facilitate the clarification of data through follow-up telephone calls, will not begin until fiscal year 2000. Moreover, even though BLS has provided some data to WHD from the initial OES union affiliation test and two of the NCS fringe benefit studies, the full results will not be final until 2000. Although the conference report did not set a deadline for Labor to complete these efforts, Labor officials said they will decide which track—or combination of efforts under both tracks—to select in fiscal year 2001. Officials said this schedule is necessary given the time frames of individual efforts and the need to evaluate and analyze all of the results when the efforts are completed. 
For example, according to officials, because final results representing the full OES sample will not be available until 2000, an assessment of the OES data’s usefulness cannot be done until the entire 3-year cycle of data collection is completed. Officials will not be able to determine until then whether the wage data collected by the survey will meet BLS standards for issuance and be sufficient to meet WHD’s needs in determining wage rates. However, officials said that although they would discontinue efforts at any time that did not appear to be working, in the absence of a clear “stop light,” they believed they needed to see these efforts through, evaluate them, and make an informed decision. Since fiscal year 1997, Labor has allocated over $11 million for these improvement efforts. It spent $7.4 million in fiscal years 1997 and 1998 and allocated $3.75 million in fiscal year 1999. In its fiscal year 2000 budget, Labor plans to obligate another $3.75 million to continue funding these activities. To date, WHD has primarily used these funds to (1) procure the services of private sector contractors to redesign the wage survey process and conduct on-site verification; (2) purchase computer hardware and software and telecommunications equipment; and (3) reimburse BLS (about $3.7 million) for its survey activities, including the salaries and expenses of about 11 full-time-equivalent staff at BLS to conduct the NCS surveys. These funding amounts do not include salaries for WHD staff working on improvement activities. Labor’s Efforts Have Potential to Improve Accuracy and Timeliness of Wage Determinations On the basis of our review of Labor’s efforts and our past work on the Davis-Bacon Act, we believe that a number of Labor’s efforts under both tracks, if successfully implemented, have the potential to improve the accuracy and timeliness of wage determinations. 
To achieve more accurate and timely wage determinations under either track, Labor officials said the process must promote greater survey participation, improve the accuracy of data submissions and Labor’s ability to verify them, and increase the efficiency of data collection and analysis. Labor must ensure that the data are collected, analyzed, and published in a timely manner so that when wage rates are issued, they still reflect current local conditions. As summarized in table 3, a number of WHD’s efforts seek to improve the accuracy of the incoming wage data, such as making wage survey forms easier to complete, and to promote greater participation, such as using BLS’ OES survey with its large sample of construction establishments. However, Labor officials said they will need to address a number of unresolved issues in both tracks that could limit the potential of these efforts to achieve the desired results. Furthermore, they said they would need to do a number of things to ensure the track or efforts they select are the best options for improving the accuracy and timeliness of wage determinations. To achieve these potential results, Labor officials said that they need to address a number of unresolved issues: Efforts to redesign the current wage determination process or conduct statewide surveys for all four construction types could significantly increase the volume of data received by WHD analysts. WHD estimates these changes would result in a tenfold increase in the number of WD-10s wage analysts would have to process before they begin data analysis. Although WHD plans to use technology to facilitate data handling and analysis, such a significant increase in the volume of data could affect the timeliness of wage determinations and raise questions about the adequacy of WHD resources and technology to deal with this work load. 
The use of alternative databases such as UI to identify additional construction establishments may not result in sufficient data that would adequately represent the current universe of construction establishments. The use of Oregon’s UI database provided names of additional construction establishments to survey; however, according to BLS officials, UI databases may not accurately represent all construction establishments because of the high rate at which they are created and disbanded. As a result, WHD officials said they will need to evaluate the advantages and disadvantages of alternative data sources to ensure that survey participation accurately reflects the current universe. This would also be the case for any states, such as Colorado, that do not allow WHD to use their UI databases. Using BLS’ OES data as the basis for wage determinations presents WHD with a number of operational issues about setting wage rates. For example, WHD officials said they need to evaluate whether the level of data provided through OES by occupation or construction type would be sufficient to comply with wage determination rate requirements. Also, because OES provides no information on fringe benefits, WHD officials said they would have to link OES wage data with other data sources that include fringe benefit data to set wage rates that incorporate all relevant wage data and accurately reflect local conditions. WHD officials believe that the only adequate source of fringe benefit data is NCS, but because NCS data are available only at the national level or for limited geographic areas, their usefulness may be limited. Labor officials also said that they need to develop clear plans about how to ensure that the track or efforts they choose are the best options to improve the timeliness and accuracy of wage determinations.
Accordingly, they have established general performance measures that the officials said will be used to gauge Labor’s process improvements and guide the final decision about which track to select. The measures seek to ensure that, by fiscal year 2002, Labor will be able to survey each area of the country for all four types of construction at least every 3 years, and issue 90 percent of all wage determinations within 60 days of Labor’s national WHD office receiving wage survey data from regional offices. Regarding the first measure, WHD officials believe that conducting surveys and issuing the resulting wage determinations every 3 years will lead to wage determinations that validly represent locally prevailing wage rates. Regarding the second measure, WHD officials reported that WHD currently issues almost all wage determinations within 60 days of receiving the information from regional offices and they would seek to maintain this level of timeliness at least 90 percent of the time despite the potentially significant increase in data volume resulting from more frequent, larger surveys. According to WHD officials, the first measure represents an improvement in timeliness in the wage determination process given that wage determinations are based on survey data that are, on average, 7 years old. Officials recognized, however, that they would have to consider other indicators to ensure that more frequent, larger surveys result in more accurate data and greater survey participation, especially if efforts under both tracks enable them to conduct surveys every 3 years. Nevertheless, they believe it is too soon to define these other indicators before the results of the individual efforts are available. The second measure provides some indication of timeliness but does not reflect improved accuracy or participation.
In addition, WHD officials said they are not sure how this measure would help assess efforts under the BLS track, since under this scenario, BLS—not WHD’s regional offices—would be providing the wage data to WHD’s national office. To develop baseline data that will be used to assess the progress individual efforts achieve, WHD has also recently started to model the process; this involves tracking segments of the current WHD wage determination process to identify and address bottlenecks. For example, WHD is collecting data from its Oregon and Colorado surveys to estimate the time it takes WHD wage analysts to conduct various survey activities and the percentage of employers submitting usable wage data. However, these data may not be appropriate baseline data because they include a mix of traditional and new practices, and represent data from only two surveys. Also, given that WHD has little useful information on the time needed to issue a wage determination, the accuracy of wage determinations, or survey participation rates, it is not clear how this information will allow WHD to assess the extent to which the tracks improve the process. Finally, Labor has begun to identify other key factors, such as cost, that will need to be addressed as part of its decision-making process, but it has not yet set priorities or assigned weights to these factors. These factors are important if both tracks demonstrate some improvements in timeliness and accuracy, which they likely will, or if WHD must consider certain trade-offs—for example, if one track achieves greater levels of accuracy, but is significantly more expensive or resource-intensive. However, Labor believes it is premature to do so until it has seen the results of all of the individual efforts.

Agency Comments

We provided a draft of this report to the Department of Labor for its review and comment.
In its comments, Labor stated that our report provided an excellent summary of its recent efforts to improve the accuracy and timeliness of Davis-Bacon wage determinations. Labor also reiterated that it must first establish whether both approaches it is undertaking, or some combination of the two, will be feasible to meet the needs of the Davis-Bacon wage determination program before it can assess the relative merits of each. Labor also noted that it had initiated improvements to the Davis-Bacon wage determination process before the congressional conference report directive. We acknowledge that Labor initiated prior efforts to improve the process; however, the scope of this report focuses only on the status of Labor’s efforts to respond to the congressional directive. Labor officials also provided technical comments and corrections, which we incorporated as appropriate. Labor’s comments are included in their entirety in appendix II.

We are sending copies of this report to the Honorable Alexis M. Herman, Secretary of Labor; the Honorable Bernard E. Anderson, Assistant Secretary for Employment Standards; the Honorable Katherine G. Abraham, Commissioner of the Bureau of Labor Statistics; appropriate congressional committees; and other interested parties. Please call me or Larry Horinko, Assistant Director, at (202) 512-7014 if you or your staffs have any questions about this report. Other major contributors to this report were Lori Rectanus, Ronni Schwartz, and Robert C. Crystal.

Labor’s Wage Determination Process Under the Davis-Bacon Act

The Davis-Bacon Act requires that workers employed on federal construction contracts valued in excess of $2,000 be paid, at a minimum, wages and fringe benefits that the Secretary of Labor determines to be prevailing for corresponding classes of workers employed on projects that are similar in character to the contract work in the geographic area where the construction takes place.
To determine the prevailing wages and fringe benefits in various areas throughout the United States, Labor’s Wage and Hour Division (WHD) periodically surveys wages and fringe benefits paid to workers in four basic types of construction (building, residential, highway, and heavy). Labor has designated the county as the basic geographic unit for data collection, although Labor also conducts some surveys setting prevailing wage rates for groups of counties. Wage rates are issued for a series of job classifications in the four basic types of construction, so each wage determination requires the calculation of prevailing wages for many different trades, such as electrician, plumber, and carpenter. For example, one heavy construction survey in Louisiana identified wage rates for 89 different construction trade occupations. Because there are over 3,000 counties, WHD would need to conduct more than 12,000 surveys each year if every county in the United States were to be surveyed. In fiscal year 1997, Labor issued 1,860 individual rates in wage determinations based on 43 area wage surveys. Labor’s wage determination process consists of four basic stages: planning and scheduling surveys of employers’ wages and fringe benefits in similar job classifications on comparable construction projects; conducting surveys of employers and third parties, such as representatives of unions or industry associations, on construction projects; clarifying and analyzing respondents’ data; and issuing the wage determinations.

Stage 1: Planning and Scheduling Survey Activity

Labor annually identifies the geographic areas that it plans to survey. Because it has limited resources, a key task of Labor’s staff is to identify those counties and types of construction most in need of a new survey.
In selecting areas for inclusion in planned surveys, the regional offices establish priorities based on criteria that include the need for a new survey according to the volume of federal construction in the area; the age of the most recent survey; and requests or complaints from interested parties, such as state and county agencies, unions, and contractors’ associations. If a type of construction in a particular county is covered by a wage determination based on collective bargaining agreements (CBA) and Labor has no indication that the situation has changed such that a wage determination should now reflect nonunion rates, an updated wage determination may be based on updated CBAs. The unions submit their updated CBAs directly to the national office. Planning begins in the third quarter of each fiscal year when the national office provides regional offices with the Regional Survey Planning Report (RSPR). The RSPR provides data obtained under contract with the F.W. Dodge Division of McGraw-Hill Information Systems that show the number and value of active construction projects by region, state, county, and type of construction, and the percentage of total construction that is federally financed. Labor uses the F.W. Dodge data because F.W. Dodge has the only continuous nationwide database on construction projects. Labor supplements these data with additional information provided to the national office by federal agencies regarding their planned construction projects. The RSPR also includes the date of the most recent survey for each county and whether the existing wage determinations for each county are union, nonunion, or a combination of both. Using this information, the regional offices, in consultation with the national office, designate the counties and type of construction to be included in the upcoming regional surveys. 
Although Labor usually designates the county as the geographic unit for data collection, in some cases more than one county is included in a specific data-gathering effort. The regional offices determine the resources required to conduct each of the priority surveys. When all available resources have been allocated, the regional offices transmit to the national office for review their schedules of the surveys they plan to do: the types of construction, geographic area, and time frames of when they plan to survey each defined area. When Labor’s national office has approved all regional offices’ preliminary survey schedules, it assembles them in a national survey schedule that it transmits to interested parties, such as major national contractor and labor organizations, for their review and comment. The national office transmits any comments or suggestions received from interested parties to its affected regional offices. Organizations proposing modifications of the schedule are asked to support their perceived need for alternative survey locations by providing sufficient evidence of the wages paid to workers in the type of construction in question in the area where they want a survey conducted. The target date for establishing the final fiscal year survey schedule is September 15. Once the national office has established the final schedule, each regional office starts to obtain the information needed to generate lists of survey participants for each of the surveys it plans to conduct. Each regional office then contacts Construction Resources Analysis (CRA) at the University of Tennessee. CRA applies a model to the F.W. Dodge data to identify all construction projects in the start-up phase (within the parameters specified in the regional office’s request) and produces a file of projects that were active during a given time period. 
The time period may be 3 months or longer, depending on whether the number of projects active during the period is adequate for a particular survey. The information CRA solicits from F.W. Dodge is provided directly to the regional offices and includes data on construction projects such as the location, type of construction, and cost; the name and address of the contractor or other key firm associated with the project; and if available, the subcontractors. When the regional offices receive this information, Labor analysts screen the data to make sure the projects meet four basic criteria for each survey. The project must be of the correct construction type, be in the correct geographic area, fall within the survey time frame, and have a value of at least $2,000. In addition to obtaining files of active projects, Labor’s regional analysts are encouraged to research files of unsolicited information that may contain payment evidence submitted in the past that is within the scope of a current survey.

Stage 2: Conducting Surveys of Participants

When the regional offices are ready to conduct the new surveys, they send a WD-10 wage reporting form to each contractor (or employer) identified by the F.W. Dodge reports as being in charge of one of the projects to be surveyed, together with a transmittal letter that requests information on the projects listed on the enclosed WD-10, a list of subcontractors that may have worked on each project, and information on any additional projects the contractor may have. Every WD-10 that goes out for a particular project has on it a unique project code, the location of the project, and a description of the project. Data requested on the WD-10 include a description of the project and its location, in order to assure the regional office that each project for which it receives data is the same as the one it intended to have in the survey.
The WD-10 also requests the contractor’s name and address; the value of the project; the starting and completion date; the wage rate, including fringe benefits, paid to each worker; and the number of workers employed in each classification during the week of peak activity for that classification. The week of peak or highest activity for each job classification is the week when the most workers were employed in that particular classification. The survey respondent is also asked to indicate which of four categories of construction the project belongs in. In addition, about 2 weeks before a survey is scheduled to begin, regional offices send transmittal letters to congressional representatives and a list of third parties, such as national and local unions and industry associations, to encourage participation. Labor encourages the submission of wage information from third parties, including unions and contractors’ associations that are not the direct employers of the workers in question, in an effort to collect as much data as possible. Third parties may obtain wage data for their own purposes, such as for union officials that need wage information to correctly assess workers’ contributions toward fringe benefits. Third-party data generally serve as a check on data submitted by contractors if both submit data on the same project. Regional offices also organize local meetings with members of interested organizations to explain the purpose of the surveys and how to fill out the WD-10. Because the F.W. Dodge reports do not identify all the subcontractors, both the WD-10 and the transmittal letter ask for a list of subcontractors on each project. Subcontractors generally employ the largest portion of on-site workers, so their identification is considered critical to the success of the wage survey. Analysts send WD-10s and transmittal letters to subcontractors as subcontractor lists are received. 
Transmittal letters also state that survey respondents will receive an acknowledgment of data submitted and that the respondent should contact the regional office if one is not received. Providing an acknowledgment is intended to reduce the number of complaints that data furnished were not considered in the survey. Labor analysts send contractors who do not respond to the survey a second WD-10 and a follow-up letter. If they still do not respond, analysts attempt to contact them by telephone to encourage them to participate.

Stage 3: Clarifying and Analyzing Respondents’ Data

As Labor’s wage analysts receive the completed WD-10s in the regional offices, they review and analyze the data. Labor’s training manual guides the analyst through each block of the WD-10, pointing out problems to look for in data received for each one. Analysts are instructed to write the information they received by telephone directly on the WD-10 in a contrasting color of ink, indicating the source and the date received. They are instructed to draw one line through the old information so it is still legible. Labor’s wage analysts review the WD-10 to identify missing information, ambiguities, and inconsistencies that they then attempt to clarify or verify by telephone. For example, an analyst may call a contractor for a description of the work done on a project in order to confirm that a particular project has been classified according to the correct construction type. An analyst may also call a contractor to ask about the specific type of work that was performed by an employee in a classification that is reported in generic terms, such as a mechanic. In that situation, the analyst would specify on the WD-10 whether the employee is a plumber mechanic or some other type of mechanic to make sure that the wages reported are appropriately matched to the occupations that are paid those rates.
Similarly, because of variations in area practice, analysts may routinely call to find out what type of work the employees in certain classifications are doing. This is necessary because in some areas of the country, some contractors have established particular duties within a traditional general craft as a specialty craft (for example, drywall finishers as a specialty craft under the general craft of painters). Specialty crafts are usually paid at lower rates than general crafts. Labor verifies wage data from a sample of wage data forms submitted by contractors and third parties by both telephone and on-site review. For telephone verification, Labor selects a 10-percent sample of wage data submissions from third parties and a 2-percent sample of submissions from contractors. They verify wage data by telephone and, where appropriate, ask that supporting payroll documents be mailed to Labor. For on-site verification, Labor selects at least a 10-percent sample of wage data forms submitted by contractors and third parties. A private accounting firm was hired to conduct on-site reviews. Auditors from the firm conduct an on-site review of payroll records at the contractor’s work site to verify wage survey data. For both telephone and on-site verification, Labor’s procedures require that the data be verified only with the contractors, not with the third parties. Any discrepancies between the original WD-10 submitted and the payroll records or contractor’s testimony are recorded by the wage analyst and auditor. WHD reviews the discrepancies and makes changes, as necessary.

Data Are Recorded

When an analyst is satisfied that all issues with respect to the data on the WD-10 for a particular project have been resolved, the data are recorded and tabulated. The analyst enters them into a computer that generates a Project Wage Summary, Form WD-22a, for reporting survey information on a project-by-project basis.
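The verification sampling described above (a 10-percent telephone sample of third-party submissions, a 2-percent telephone sample of contractor submissions, and an on-site sample of at least 10 percent of all submissions) can be sketched roughly as follows. This is an illustrative sketch only: the function name and data layout are assumptions, not Labor's actual procedure, and the on-site minimum is treated here as a fixed rate.

```python
import random

# Sampling rates described in the text. The on-site figure is a minimum
# ("at least a 10-percent sample"); it is treated as a fixed rate here.
PHONE_RATES = {"third_party": 0.10, "contractor": 0.02}
ONSITE_RATE = 0.10

def select_verification_samples(submissions, seed=0):
    """Pick WD-10 submissions for telephone and on-site verification.

    `submissions` is a list of dicts, each with a 'source' key of either
    'contractor' or 'third_party'. Returns (phone_sample, onsite_sample).
    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)
    phone_sample = []
    for source, rate in PHONE_RATES.items():
        pool = [s for s in submissions if s["source"] == source]
        if pool:
            # Sample at the rate for this source, taking at least one form.
            phone_sample.extend(rng.sample(pool, max(1, round(len(pool) * rate))))
    # On-site verification draws from contractor and third-party
    # submissions alike.
    if submissions:
        onsite_sample = rng.sample(submissions, max(1, round(len(submissions) * ONSITE_RATE)))
    else:
        onsite_sample = []
    return phone_sample, onsite_sample
```

Note that, per the text, either sample would then be verified only with the contractors themselves, never with the third parties who submitted the data.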
The WD-22a has a section for reporting the name, location, and value of each project; the number of employees who were in each classification; and their hourly wage and fringe benefits. It also has a section for reporting the date of completion or percentage of the project completed, whichever is applicable. At least 2 weeks before the survey cutoff date, the response rate for the survey is calculated to allow time to take follow-up action if the response rate is determined to be inadequate. For example, WHD operational procedures specify that if data gathered for building or residential surveys provide less than a 25-percent usable response rate or less than one-half of the required key classes of workers, the analyst will need to obtain data from comparable federally financed projects in the same locality. If an analyst has no data on occupations identified by Labor as key classifications of workers for the type of construction being surveyed, Labor’s procedures require him or her to call all the subcontractors included in the survey who do that type of work and from whom data are missing, to try to get data. If the analyst still cannot obtain sufficient data on at least one-half of the required key classes, consideration must be given to expanding the scope of the survey geographically to have more crafts represented. If the overall usable response rate for the survey is 25 percent or more, data on three workers from two contractors are considered sufficient to establish a wage rate for a key occupation. After the survey cutoff date, when all valid data have been recorded and tabulated, the final survey response rate is generated by computer. Typically, a WHD analyst takes 4 months to conduct a survey. Once all the valid project data have been entered, the prevailing wage rate for each classification of worker can be generated by computer. If a majority of workers is paid at a single rate in a job classification, that rate prevails for the classification.
The wage rate needs to be the same, to the penny, to constitute a single rate. Lacking such a majority, a weighted average wage rate for that occupation is calculated. The prevailing wage rate for each occupation is compiled in a computer-generated comprehensive report for each survey, called the Wage Compilation Report, Form WD-22. The WD-22 lists each occupation and the wage rate recommended for that occupation by the regional office. The form indicates whether the rate is based on a majority or a weighted average, and provides the number of workers for which data were used to compute each wage rate. The regional offices transmit survey results to the national office, which reviews the results and recommends further action if needed.

Stage 4: Issuing the Wage Determinations

The national office issues final wage determinations after reviewing recommended wage rates submitted by the regions. There is no review or comment period provided to interested parties before the determinations go into effect. Access to wage determinations is provided both in printed reports available from the U.S. Superintendent of Documents and on an electronic bulletin board. Notices of modifications to general wage determinations are published in the Federal Register.

Labor’s Appeals Process

An interested party may seek review and reconsideration of Labor’s final wage determinations. The national office and the regional offices accept protests and inquiries relating to wage determinations at any time after a wage determination has been issued. The national office refers all the complaints it receives to the relevant regional offices for resolution. Most inquiries are received informally by telephone, although some are written complaints. Regional office staff said that a majority of those with concerns appear to have their problems resolved after examining the information (collected on form WD-22a) for the survey at issue, because they do not pursue the matter further.
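The data-sufficiency thresholds and the prevailing-rate rule described above (a single rate, identical to the penny, paid to a majority of workers prevails; otherwise a weighted average is used) can be sketched as follows. The function names are hypothetical and the sketch omits many details of WHD's actual procedures.

```python
from collections import Counter

def sufficient_for_key_class(num_workers, num_contractors, usable_response_rate):
    """Data on three workers from two contractors are considered sufficient
    to establish a rate for a key occupation, provided the overall usable
    response rate for the survey is 25 percent or more."""
    return usable_response_rate >= 0.25 and num_workers >= 3 and num_contractors >= 2

def prevailing_wage(worker_rates):
    """Compute the prevailing wage for one job classification.

    `worker_rates` holds one hourly rate per worker. If a single rate
    (identical to the penny) is paid to a majority of workers, that rate
    prevails; otherwise the worker-weighted average is used (a plain mean
    here, since each list entry represents one worker).
    """
    counts = Counter(round(r, 2) for r in worker_rates)
    rate, n = counts.most_common(1)[0]
    if n > len(worker_rates) / 2:
        return rate  # majority rule
    return round(sum(worker_rates) / len(worker_rates), 2)  # weighted average
```

For example, five workers paid $20.00, $20.00, $20.00, $18.50, and $19.00 yield a prevailing rate of $20.00 under the majority rule, while four workers paid four distinct rates yield the average of those rates.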
If an examination of the forms does not satisfy the complainant’s concerns, the complainant is required to provide information to support the claim that a wage determination needs to be revised. The national office modifies published wage determinations in cases where regional offices, on the basis of evidence provided, recommend that it do so, such as when it has been shown that a wage determination was the result of an error by the regional office. Some of those who seek to have wage rates revised are told that a new survey will be necessary to resolve the particular issue that they raised. For example, if the wage rates of one segment of the construction industry are not adequately reflected in survey results because of a low rate of participation in the survey by that segment of the industry, a new survey would be necessary to resolve this issue. Those who are not satisfied with the decision of the regional office may write to the national office to request a ruling by Labor’s WHD administrator. If the revision of a wage rate has been sought and denied by a ruling of Labor’s WHD administrator, an interested party has 30 days to appeal to the Administrative Review Board for review of the wage determination. The board consists of three members appointed by the Secretary of Labor. The Solicitor of Labor represents WHD in cases involving wage determinations before the Administrative Review Board. A petition to the board for review of a wage determination must be in writing and accompanied by supporting data, views, or arguments. All decisions by the Administrative Review Board are final.

Related GAO Products

Davis-Bacon Act: Labor Now Verifies Wage Data, but Verification Process Needs Improvement (GAO/HEHS-99-21, Jan. 11, 1999).
Davis-Bacon Act: Process Changes Could Address Vulnerability to Use of Inaccurate Data in Setting Prevailing Wage Rates (GAO/T-HEHS-96-166, June 20, 1996).
Davis-Bacon Job Targeting (GAO/HEHS-96-151R, June 3, 1996).
Davis-Bacon Act: Process Changes Could Raise Confidence That Wage Rates Are Based on Accurate Data (GAO/HEHS-96-130, May 31, 1996).
Davis-Bacon Act (GAO/HEHS-94-95R, Feb. 7, 1994).
The Davis-Bacon Act Should Be Repealed (GAO/HRD-79-18, Apr. 17, 1979).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO provided information on: (1) the status of the Department of Labor's efforts to improve the Davis-Bacon Act wage determination process; and (2) whether the changes Labor is making are likely to address the timeliness and accuracy of wage determinations. GAO noted that: (1) in response to the conference report directive, Labor is testing a number of efforts that are aimed at improving the process for determining prevailing wage rates; (2) the alternatives being tested fall under two tracks: (a) redesigning Labor's Wage and Hour Division's (WHD) existing survey process, including revising survey forms to obtain data more efficiently and using technology to more quickly and accurately analyze the survey data obtained; and (b) using data from surveys conducted by the Bureau of Labor Statistics (BLS) to determine prevailing wage rates; (3) the earliest efforts began in 1996 and most efforts under both tracks are scheduled for completion by fiscal year (FY) 2000; (4) given these timeframes and the need to analyze the results, Labor officials said they will decide in FY 2001 which track best promotes a wage determination process that will result in accurate, timely wage determinations; (5) efforts under either track, if successfully implemented, have the potential to improve the timeliness and accuracy of wage determinations; (6) redesigning the survey form and making it more accessible and understandable to survey participants could increase survey participation and improve the timeliness of data submitted, potentially leading to more accurate and timely wage determinations; (7) however, Labor officials identified several key issues that they will need to address for efforts under either track to achieve the intended results; (8) these issues include concerns about: (a) WHD's ability to deal with potentially significant increases in the volume of survey data collected under a revised process; and (b) limitations of BLS data as a tool in setting prevailing wage rates; (9) Labor officials also acknowledged that they need to develop a clear plan to make an informed decision about which track, or combination of efforts under both tracks, to implement; (10) Labor has established general performance measures that officials say will guide Labor's efforts; (11) additionally, it has started to collect limited baseline data to assess progress made under both tracks but such data may be of limited use; and (12) Labor has also recognized that other factors, such as cost, will need to be considered when officials decide which efforts would most improve the accuracy and timeliness of wage determinations, but officials have not yet specified how these other factors will be analyzed.
Background

SSA administers two programs under the Social Security Act that provide benefits to people with disabilities who are unable to work: Disability Insurance (DI) and Supplemental Security Income (SSI). According to SSA policy, to be eligible for either DI or SSI, an adult must be unable to engage in “substantial gainful activity”—typically work that results in earnings above a monthly threshold established each year by SSA—because of a medically determinable physical or mental impairment that is expected to last at least 12 months or result in death. Established in 1954, the DI program provides monthly benefits to workers (and their spouses and dependents) whose work history qualifies them for disability benefits and whose impairment is disabling. In 2007, SSA paid about $99 billion in DI benefits to about 8.1 million workers, spouses, and dependents. The average monthly benefit was $1,004 for disabled workers. SSI is a means-tested income assistance program created in 1972 that provides a financial safety net for people who are aged, blind, or disabled, and have low incomes and limited assets. Unlike the DI program, SSI has no prior work requirements. In 2007, SSA paid about $37 billion in SSI benefits. As of December 2007 about 7.4 million recipients received an average monthly benefit of $468. Some individuals with disabilities receive both DI and SSI benefits if they meet both DI’s work history requirements and SSI’s income and asset limits.

Disability Determination Process

The process to determine a claimant’s eligibility for SSA disability benefits is complex, involving several state and federal offices. The disability determination process, which is the same for DI and SSI claimants, involves an initial determination of disability and provides up to two levels of administrative review within SSA.
A claimant first completes an application, or claim, for DI or SSI benefits, which includes information regarding illnesses, injuries, or conditions and a signature giving SSA permission to request medical records from medical care providers. Once the SSA field office staff verify that nonmedical eligibility requirements are met, the claim is sent to the state’s DDS office for determination of medical disability. If the claim is approved, a claimant will be notified and will receive benefits, including limited retroactive benefits for some DI claimants. Additionally, if the claim is approved, a claimant may become eligible for Medicaid or Medicare health coverage. If the claim is rejected, a claimant has 60 days to request that the DDS reconsider its decision. If the DDS reconsideration determination concurs with the initial denial of benefits, the claimant has 60 days to appeal and request a hearing before an SSA administrative law judge (ALJ). A claimant may appeal an unfavorable administrative law judge decision to SSA’s appeals council, which includes administrative appeals judges and appeals officers and, finally, to federal court. SSA and DDS officials (examiners and ALJs) determine disability using a five-step sequential process based on evidence such as medical findings and statements of functional capacity obtained during the initial determination process and updated as necessary at each appeal level. (See fig. 1.) Development of Medical Evidence for Initial Determinations Generally, SSA requires DDSs to develop a complete medical history for each claimant for at least a 12-month period prior to the application. SSA guidance directs DDSs to request records from all providers who have treated or evaluated the claimant during this time period, except those who treated only ailments clearly unrelated to the claimed impairment. DDSs generally pay providers for records and SSA pays the DDSs to cover these expenses. 
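The 60-day windows in the appeal sequence described above can be sketched as simple date arithmetic. This is an illustrative sketch only, not SSA software; actual SSA rules include presumptions about when a notice is received and good-cause extensions, which are omitted here, and the function name and dates are ours.

```python
from datetime import date, timedelta

# Illustrative only: the 60-day request window described in the text,
# counted from the date of the denial notice.
APPEAL_WINDOW_DAYS = 60

def appeal_deadline(notice_date: date) -> date:
    """Last day to request the next level of review after a denial notice."""
    return notice_date + timedelta(days=APPEAL_WINDOW_DAYS)

# Example: a denial dated March 1, 2024, would need to be appealed by April 30.
appeal_deadline(date(2024, 3, 1))  # date(2024, 4, 30)
```

The same calculation applies at each level, since the reconsideration request and the hearing request each carry a 60-day window.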
Each DDS determines its payment rates for medical and other services necessary to make determinations, subject to certain limits. DDSs request laboratory reports, X-rays, doctors’ notes, and other information used in assessing the claimant’s health and functional capability from many types of providers including: physicians or psychologists; hospitals; community health centers; schools (for child claimants); and Department of Veterans Affairs (VA), military, or prison health care facilities. In addition to medical evidence, DDSs review statements from the claimant or others about the claimant’s impairment and ability to perform daily activities. SSA directs DDSs to make “every reasonable effort” to help the claimant obtain medical reports, which SSA defines as one initial medical records request and, if needed, one follow-up request within 10 to 20 days, when providers have not responded, unless experience with a particular provider warrants more time. DDSs allow a minimum of 10 days after the follow-up request for the provider to reply. When records indicate the claimant has been to other medical providers, DDSs also contact those providers for records. Generally records are placed in the claimant’s case record. SSA regulations require that disability determinations place more, and in some cases controlling, weight on the opinions of a claimant’s treating providers. For example, a treating provider’s opinion about the nature and severity of the claimant’s impairment should generally be given controlling weight where their opinion is well supported by other substantial evidence in a claimant’s case record. In claims where the gathered medical and nonmedical evidence is insufficient to support a disability determination, DDSs may order consultative exams or tests. DDSs pay providers to perform these examinations and SSA pays them to cover these costs. 
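The "every reasonable effort" timeline above (one initial request, one follow-up within 10 to 20 days, then at least 10 more days for the provider to reply) can be sketched as date arithmetic. A minimal sketch under those stated rules; the function names are ours, and SSA guidance allows more time for providers known to respond slowly.

```python
from datetime import date, timedelta

# Timing parameters from the guidance described in the text.
FOLLOWUP_MIN_DAYS = 10
FOLLOWUP_MAX_DAYS = 20
POST_FOLLOWUP_WAIT_DAYS = 10

def followup_window(initial_request: date) -> tuple[date, date]:
    """Window in which the single follow-up request should be sent."""
    return (initial_request + timedelta(days=FOLLOWUP_MIN_DAYS),
            initial_request + timedelta(days=FOLLOWUP_MAX_DAYS))

def earliest_next_step(followup_sent: date) -> date:
    """Providers get at least 10 more days after the follow-up to reply
    before the DDS may move on (for example, to a consultative exam)."""
    return followup_sent + timedelta(days=POST_FOLLOWUP_WAIT_DAYS)
```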
SSA regulations require that payments to providers for consultative exams not exceed the highest rate paid by federal or other state agencies for the same or similar services. The regulation allows states to determine the rates of payment and, as a result, DDS rates of payment for consultative exams vary nationwide. SSA regulations specify the types of providers who may perform these exams or tests, and require DDSs to recruit, train, and oversee them. SSA regulations also state that the claimant’s own provider is generally the preferred source for consultative exams if qualified, equipped, and willing to perform the exams. (See fig. 2.) To support DDSs’ efforts to process claims quickly, SSA has established an expedited process for claims in which a determination of disability is likely. In September 2007, SSA implemented its Quick Disability Determination process nationwide after testing it in the Boston region. This process uses a computer model that screens for certain key terms in the claim file to identify claims for which a decision of disability is likely and medical evidence establishing disability can be easily obtained. DDSs can use expedited processes for these claims; for example, DDS staff in a couple of states we visited explained how they request and receive medical records for Quick Disability Determination cases by fax. SSA reported, for fiscal year 2007, that the national average processing time for all initial claims was 83 days. By comparison, during the pilot, the Boston region decided Quick Disability Determination claims in an average of 11 days. SSA also has policies to expedite claims involving diseases such as certain types of cancer that are terminal or otherwise so severe that they clearly meet SSA’s definition of disability. SSA performs a quality assurance review of a sample of more than 30,000 DDS decisions each year. 
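The Quick Disability Determination screening idea can be illustrated with a toy example. To be clear, SSA's actual tool is a predictive computer model; the key terms, threshold, and function name below are invented for illustration only.

```python
# Toy illustration only: SSA's actual Quick Disability Determination model
# is a predictive computer model; these key terms and the threshold are
# invented, not SSA's.
QDD_TERMS = {"hospice", "metastatic", "als", "transplant"}

def flag_for_expedited_review(claim_text: str, threshold: int = 1) -> bool:
    """Flag a claim for possible expedited handling when enough key terms
    appear in the claim file text."""
    words = set(claim_text.lower().split())
    return len(words & QDD_TERMS) >= threshold

flag_for_expedited_review("claimant diagnosed with metastatic cancer")  # True
```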
SSA assesses the accuracy of the DDSs’ determinations and the sufficiency of their documentation, including compliance with requirements for medical records collection and the consultative exam process. Decisional deficiencies occur when a different determination should have been made, and documentation deficiencies occur when additional documentation is necessary in order to make the correct determination. SSA also collects extensive data on spending for consultative exams and requires DDSs to routinely report substantial budget, program operations, and management data to SSA. Electronic Medical Record Collection In 2004, President Bush called for widespread adoption of interoperable electronic health records within 10 years and issued an executive order assigning the coordination of the effort to the Department of Health and Human Services. Under the department’s leadership, volunteer organizations designated to develop standards for the health care industry have prepared initial certification criteria for health information technology such as electronic patient records and records management systems. As businesses, providers decide when and whether to invest in these certified systems. Another executive order in 2006 directs certain federal agencies to “utilize, where available, health information technology systems and products that meet recognized interoperability standards.” HHS also has awarded several contracts related to health information technology to address issues such as standardization, networking, and privacy and security. SSA’s collection of medical evidence is affected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which defines the circumstances in which an individual’s health information may be used or disclosed. 
In addition, HIPAA’s security provisions require entities that hold or transmit health information to maintain reasonable safeguards to protect the information against unauthorized use or disclosure and ensure its integrity and confidentiality. DDSs Face Challenges Obtaining Medical Records from Claimants’ Providers Determining eligibility for disability benefits is a complex, challenging task. DDS officials identified obtaining records from claimants’ medical providers as a major challenge to DDS examiners’ ability to quickly compile the necessary evidence for disability determinations. DDSs cited problems with the consistency of provider response to record requests, both in timeliness and completeness of records submitted. DDSs have responded to these challenges by conducting additional follow-up contacts with medical providers and claimants, and more than half of the 51 DDSs we surveyed reported adjusting their payment methods. Although SSA routinely reviews DDSs’ compliance with medical records collection requirements, SSA does not systematically identify and review the effectiveness of promising DDS medical evidence collection practices. Medical Providers Do Not Respond Consistently to DDS Requests for Records DDS officials identified provider response to medical records requests as a challenge in our survey of 51 DDSs. One DDS director reported in our survey that more than 300 providers in the state were considered “nonproductive” so that the DDS must send claimants who are patients of those providers to consultative exams when evidence from other sources is insufficient. One DDS director noted that public health clinics and hospitals are overburdened providing patient care and that medical records programs get short shrift. According to both DDS officials and providers we interviewed, generating records for disability claims takes lower priority than patient care and costs money for medical records staff time and contracted copy services, for example. 
One DDS official told us that some providers do not bill the DDS for records because the state’s centralized payment system is slow and generates payments that are hard to reconcile with invoices. Examiners in another state told us that some providers refuse to submit requested records for claimants with unpaid bills, or charge the claimants instead of the DDS. DDSs also can have difficulty obtaining medical records when medical records are purged or moved to another location, or when facilities close or are destroyed. DDSs request records from all providers who have treated the claimant for at least the 12 months preceding the application for benefits, except those who treated only minor ailments clearly unrelated to the claimed impairment or when the claimed disability began more recently. As a result, the volume of records requested is high: 13 DDSs reported sending over 200,000 requests in fiscal year 2007. Provider response to these requests for medical records is inconsistent; some submit records to the DDSs within 10 days, others never respond at all. Timeliness of medical record receipt is a central concern because SSA tracks how long it takes to process initial claims, and measures DDSs against regulatory performance standards. SSA reported that the national average processing time for initial claims was 83 days in fiscal year 2007. Although not all DDSs were able to complete our survey question on the volume of medical record requests and timeliness of provider responses, 32 of the 37 DDSs who did provide numbers reported receiving responses for up to 40 percent of their requests for medical records within 10 days. However, a substantial number of requests for medical records go unfulfilled. As shown in figure 3, 14 DDSs received less than 80 percent of requested records. Another 14 DDSs did not provide sufficient data in response to our survey to calculate the percentage of requests for which they received medical records. 
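The timeliness measures discussed above (the share of requests fulfilled at all, and the share fulfilled within 10 days) amount to simple proportions over request-level records. A sketch of that calculation; the data and field layout are hypothetical, not actual DDS data.

```python
# Sketch of the response-rate measures discussed in the text; the sample
# data are invented. Each entry is the number of days until the provider
# responded to a records request, or None if no records were ever received.
def response_rates(received_after_days):
    """Return (percent fulfilled, percent fulfilled within 10 days)."""
    total = len(received_after_days)
    fulfilled = [d for d in received_after_days if d is not None]
    within_10 = [d for d in fulfilled if d <= 10]
    return 100 * len(fulfilled) / total, 100 * len(within_10) / total

# Three requests: one answered in 8 days, one in 25 days, one never.
pct_fulfilled, pct_within_10 = response_rates([8, 25, None])
```

A DDS tracking every request this way could report the figures SSA would need to compare collection practices across states.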
DDS examiners request records from various types of providers including physicians or psychologists in individual or group practices; hospitals; community health centers; schools (for child claimants); and VA, military, or prison health care facilities. As shown in table 1, DDS directors we surveyed reported that some types of providers are more responsive to medical records requests than others. The task of obtaining a complete medical history is further complicated when claimants do not identify all their medical providers when applying for benefits. Almost all of the 51 DDS directors (48) we surveyed reported that examiners at least sometimes identify providers who had not been listed on the claimant’s application. Examiners may find out about additional medical providers as they review the records in the file, for example, and must generally request records from those providers. In our review of 100 initial claim files, we identified 19 in which DDS examiners requested records from providers who had treated the claimant but had not been identified on the application. In addition to contacting multiple providers, DDS examiners must develop evidence for all of the claimed impairments, which can be numerous and include both mental and physical conditions. During our site visits, DDS claims examiners told us that claims involving mental impairments posed particular documentation challenges, noting that some claimants with mental impairments may have difficulty obtaining treatment or accurately describing their medical histories. Furthermore, SSA regulations include some specific requirements for collecting evidence of mental impairments. 
For example, generally where there is an indication of a possible mental impairment, SSA regulations establish a special technique to be used when evaluating the severity of mental impairments, which includes rating the claimant’s degree of functional limitation in four broad functional areas and recording the results of this evaluation on a standard document. The opinions of providers with an ongoing treatment relationship with the claimant are a particularly important source of evidence for disability determinations. Treating providers’ opinions about the nature and severity of the claimant’s impairment often are given great deference in SSA regulations. Examiners must give controlling weight to treating providers’ opinions if they are not inconsistent with the other substantial evidence in the case record and are well supported by medically acceptable clinical and laboratory diagnostic techniques. Yet, of the 51 DDSs we surveyed, none reported that half or more of treating providers were willing to provide such opinion statements, and 15 indicated that none or almost none were willing to provide the statements. Almost all DDSs (48 of 51) reported asking for treating sources’ opinion statements in their initial medical records requests, but as table 2 shows, DDSs are not always successful at obtaining those statements, even after multiple requests, and the statements they receive are not always helpful in making their determinations. A good and useful MSS both states a quantification of the effects of the condition on the claimant’s ability to function and an explanation as to how the assessment is supported by the evidence. These are rare. More often we receive “less useful” MSS’s that only do the first part. 
Treating sources are generally OK with just sending records or including a statement such as “the patient has severe rheumatoid arthritis, remains under my care, and can’t return to work for the foreseeable future.” When we get such an MSS, we either are left to refute it or return it to the TS for a better underlying analysis. This annoys them and usually does not come to a beneficial or happy result. DDS officials and providers described various reasons why treating providers may be reluctant to submit medical source statements. Treating providers may be concerned that submitting their medical opinion to the DDS might interfere with the doctor-patient relationship, and they also typically focus on diagnosis and treatment rather than evaluation of functional ability. Providers also may have limited knowledge of SSA standards or the physical or mental requirements for different types of work. Almost All DDSs Engage in Additional Follow-up Contacts to Encourage Provider Response; about Half Have Modified Their Payments SSA regulations and guidance specify the timing of DDS requests for medical records but leave the methods of contact up to each DDS. If it does not receive records after one request, the DDS must make one follow-up request within 10 to 20 days unless the provider is known to take longer to respond. After that, the DDS must generally give the provider an additional 10 days and then may send the claimant for a consultative exam if needed. Requests by mail remain the most prevalent method for requesting medical records, used at least very often by 42 of the 51 DDSs surveyed. All use fax to some extent, with slightly more (27) reporting they use fax at least often and 24 reporting using fax sometimes. During our site visits, 6 of the 28 DDS examiners we interviewed told us that some providers raise concerns about privacy or compliance with HIPAA, for example, by insisting on a hard copy of the claimant’s signed authorization to release medical records. 
According to SSA, hard-copy, fax, or electronically transmitted versions of its official authorization form, signed and dated by the claimant, all comply with relevant state and federal laws and regulations, including HIPAA. Once records are received, the DDS may need further contact with providers to clarify ambiguities or request additional information. SSA guidance requires examiners to recontact a provider whose medical report contains ambiguities, conflicts either internally or with other evidence, is incomplete, or is not based on medically acceptable clinical and laboratory diagnostic techniques. In addition, SSA guidance directs the DDS’s examiners to recontact a treating provider if the report contains an opinion on an issue reserved for SSA, such as whether the claimant is disabled or has a condition that meets one of the medical listings, without identifying the basis for that opinion. If the initial recontact SSA requires is not successful, DDSs report pursuing additional approaches to encourage providers to submit or clarify records. These include making additional follow-up calls to providers, their assistants, or medical records staff and asking claimants to get in touch with their providers about sending in the records. In addition, DDSs conduct outreach to emphasize the importance of submitting medical records and contact providers to resolve questions about privacy. Privacy of medical records came up frequently in our discussions of the medical evidence collection process: DDS officials in each of the five states we visited indicated that some providers raised concerns about patient privacy and compliance with applicable protections. DDS professional relations officers also supplement the examiners’ contacts via provider education and outreach to medical societies. If information in the medical records requires clarification, DDS medical consultants, such as physicians or psychologists, also may contact providers directly. 
SSA guidance permits DDSs to obtain verbal statements from treating providers, then send summaries of those statements to the providers for their signatures to expedite the DDS determination process. In addition to following up with providers and claimants, more than half of the 51 DDSs we surveyed reported modifying their payment methods for medical records. To encourage provider response, 34 of the 51 DDS directors surveyed reported taking steps to improve the timeliness of their payments and 6 reported increasing their payment amounts. While 30 DDS directors reported in our survey that their payment rates were high enough to ensure adequate medical records collection, some commented that payments were adequate for some types of providers but not others; they had heard from psychologists or other specialty providers, for example, that the rates were not adequate. Asked in the survey how their payment rates compare with prevailing rates for medical records in their states, 3 of the 51 DDSs reported that their payment rates were above prevailing rates in their states, 19 reported that the rates were about the same, and 20 reported that their payment rates were below prevailing rates. Vermont’s DDS instituted an incentive payment for prompt response because that state prohibits providers from charging for providing copies of health care records requested to support a claim or appeal under any provision of the Social Security Act or any other federal or state needs-based program. SSA Conducts Quality Assurance Reviews, but Does Not Gather Some Key Data on Varied DDS Approaches to Collecting Medical Records While SSA conducts quality assurance reviews and collects data on program operations from DDSs, it has not systematically evaluated the effectiveness of the DDSs’ varied approaches to collecting medical records. 
SSA regularly reviews DDSs’ compliance with requirements for medical records collection as part of its quality assurance review of a sample of more than 30,000 DDS decisions each year. These reviews take place before the DDS determination is communicated to the claimant, and SSA returns the claim to the DDS for additional work if SSA reviewers find that additional medical evidence or analysis is needed. These reviews assess both the accuracy of the DDSs’ determinations and the sufficiency of the documentation the DDSs obtained. Decisional deficiencies occur when the DDS should have made a different determination, and documentation deficiencies occur when additional documentation is necessary in order to make the correct determination. Errors related to the collection of medical evidence include cases in which insufficient medical evidence was obtained to support the DDS determination, for example, to establish that the claimant’s impairment is severe or expected to last at least 12 months or result in death. SSA also requires DDSs to routinely report substantial budget, program operations, and management data to SSA. While these data help SSA oversee the DDSs, they may lack some key measures that SSA could use to evaluate the effectiveness of different DDSs’ medical records collection practices. For example, not all DDSs’ computer systems routinely track the total number of requests they send and the timeliness of provider responses. Of the 51 DDS directors we surveyed, 14 did not provide complete responses on the number of medical record requests they sent and received responses to, and others were able to provide only estimates. The lack of consistent data on receipts of medical records from providers limits SSA’s ability to evaluate the effectiveness of different DDSs’ medical records collection activities—evaluations which could lead to wider adoption of practices that are found to be successful and cost effective. 
Nationally consistent data could help SSA assess whether some DDSs’ approaches are more effective than others or whether adoption of new approaches, such as incentive payments for prompt provider response, yields faster submission of records. DDSs Face Challenges Recruiting and Retaining Qualified Consultative Exam Providers Recruiting and retaining enough medical providers to conduct consultative exams was frequently cited by DDS representatives as one of the main challenges to medical evidence collection, in part because of provider concerns about missed appointments or DDS payment rates for consultative exams. Responses to these challenges include scheduling consultative exams with medical providers whose practices focus primarily on performing disability evaluations and adjusting payments, for example, by paying providers for the time they spend preparing for a consultative exam that a claimant fails to attend. Recruitment and Retention of Consultative Exam Providers Is Difficult We frequently heard from DDS directors, both during our site visits and in response to our survey, about their difficulty finding medical providers to conduct consultative exams. It is even difficult for DDSs to obtain consultative exams from claimants’ treating physicians—the preferred source for consultative exams according to SSA guidance and regulations. For example, 41 of the 51 DDS directors we surveyed reported that their offices routinely ask claimants’ treating providers if they are willing to perform a consultative exam if needed, but 34 of these directors reported that claimants’ treating providers are never or almost never willing to perform these exams. According to DDS officials and providers, reasons for this reluctance may include concern about disrupting the doctor-patient relationship through involvement in the disability claim and dissatisfaction with DDS payment rates. 
These inquiries often are included in the requests for medical records sent by the DDSs to claimants’ treating providers. For example, in our review of 100 claim files for initial disability determinations, 45 files contained one or more requests for medical records that included an inquiry about the providers’ willingness to perform a consultative exam. However, only 2 claimants’ files had records of consultative exams conducted by the treating provider. In many cases, DDSs make this request in the form of a yes or no question that accompanies their requests for medical records or by asking providers to contact them if they would be interested in performing a consultative exam. Often providers either indicate they are not willing to perform a consultative exam or leave the question blank. In some cases, the requests for records indicate that the absence of a response will be interpreted as an indication that they are not interested. One reason why the DDSs may face difficulty recruiting and retaining consultative exam providers is the frequency with which disability claimants miss their consultative exam appointments. DDS directors reported in our survey that claimants fail to attend approximately 16 percent of consultative exam appointments on average, with 40 of the 51 directors providing this information. When asked the reason why claimants fail to attend these appointments, DDS directors reported that claimants sometimes miss appointments for reasons including transportation challenges, unmet needs for someone to accompany the claimant to the appointment, reluctance to take part in the exam, or inability to attend due to a mental or physical health condition. Regardless of the reason for claimants’ failure to attend scheduled exams, several DDS examiners we spoke with identified missed consultative exams as a major problem which may affect providers’ willingness to participate. 
If a claimant misses an appointment, providers lose revenue if they are unable to substitute another patient and cannot bill the DDSs for the missed exam. When asked to what extent provider concerns about missed consultative exam appointments posed challenges, almost half of DDS directors (24 of 51) reported that such concerns posed challenges to a great or very great extent, although some DDSs (20) reimburse providers for time spent preparing for missed consultative exams. Current payment rates also may contribute to the DDSs’ challenges recruiting and retaining consultative exam providers who submit high-quality reports. Almost all DDS directors (50 of 51) reported that DDS fee schedules posed a challenge, at least to some extent, to recruiting and retaining a panel of highly qualified consultative exam providers. Several DDS officials told us current consultative exam payment rates affect their ability to recruit and retain consultative exam providers in their states. For example, California DDS officials commented that current consultative exam payment rates are below prevailing payment rates in the state. Wyoming DDS officials also told us that payment rates pose challenges to the recruitment of providers for Wyoming’s consultative exam provider pool. Consultative exam payment rates vary among DDSs nationwide. SSA regulations require that payments to providers for consultative exams not exceed the highest rate paid by federal or other agencies in the state for the same or similar services. Within those parameters, DDSs vary in the type of payment rates they use as benchmarks for consultative exams. (See fig. 4.) Many DDS directors (17 of 51) also indicated that in their opinion current payment amounts in their states are not high enough to ensure that the DDS receives timely, high-quality consultative exam reports. 
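The regulatory ceiling described above, under which consultative exam payments may not exceed the highest rate any federal or other agency in the state pays for the same or similar service, amounts to a simple cap. A minimal sketch under that stated rule; the rates and function names are invented for illustration.

```python
# Minimal sketch of the payment ceiling described in the text; the sample
# rates are invented.
def ce_rate_ceiling(agency_rates):
    """Highest rate any federal or other agency in the state pays for the
    same or similar service."""
    return max(agency_rates)

def capped_rate(proposed, agency_rates):
    """A DDS may set any rate at or below the ceiling."""
    return min(proposed, ce_rate_ceiling(agency_rates))

capped_rate(250.0, [180.0, 210.0, 195.0])  # capped to 210.0
```

Within this cap, states remain free to benchmark against different schedules, which is why exam payment rates vary so widely across DDSs.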
Of those DDSs, seven also reported that consultative exam reports only sometimes demonstrated sufficient familiarity with the claimants’ medical records and history to support the assessment. Some DDSs Rely on High-Volume Consultative Exam Providers or Pay Providers for Preparing for Missed Appointments Some DDSs have responded to the challenge of recruiting and retaining consultative exam providers by (1) relying on high-volume providers whose practices focus primarily on performing disability evaluations and (2) adjusting consultative exam payments. As shown in figure 5, most DDSs (32 of 51) report they often use high-volume providers to conduct consultative exams for claimants in their state. Twenty-nine indicated that using these providers has a moderately positive or very positive effect on the quality of the consultative exam reports they receive. At least one DDS has taken the concept of high-volume consultative exam providers one step further. The New York DDS expanded its use of high-volume consultative exam providers by hiring contractors to recruit consultative exam providers and manage claimants’ appointments. New York DDS officials reported that the majority of consultative examinations now are conducted through these contractors in areas of the state covered by contracts. As described to us by New York DDS officials, these contracts provide for extensive training of new consultative exam providers that can last several months, content and timeliness requirements for exam reports, and quality assurance including surveys of claimants and inspection of providers’ facilities. Some DDSs have adjusted their payments for consultative exams to address recruitment challenges in their states. For example, Wyoming currently pays usual and customary rates that providers receive for similar exams throughout the state. 
Wyoming DDS officials reported that they use this payment structure because of the state’s sparse population and small number of medical providers, approximately 1,000. According to Wyoming DDS officials, a relatively small portion of these providers are willing to perform consultative exams for the DDS, and they believe that without usual and customary payment, even fewer would be willing to conduct them. In addition, many DDSs (20 of 51) pay consultative exam providers for the time they spend preparing for exams that claimants fail to attend, which may help DDSs retain their consultative exam provider pool. Among those 20 DDSs reporting that they offer such payments, the average payment provided was about $44. Finally, DDSs engage in various activities to facilitate claimant attendance at consultative exams. The most common activities reported are reminder letters and telephone calls and reimbursement for travel costs (see table 3). Examiners at two of the DDSs we visited described arranging for consultative exam providers to perform in-home evaluations for claimants whose impairments kept them confined to their homes. Examiners noted that “third parties”—family members or social workers listed as contacts on the application for benefits—may help facilitate consultative exam appointments, especially for claimants who are homeless or who have mental or developmental impairments. SSA Reviews Consultative Exams and DDS Decisions, but Does Not Evaluate DDS Practices to Address Recruitment and Retention Challenges While SSA evaluates consultative exams as part of its quality assurance review process and collects data on spending for consultative exams, it has not evaluated the effectiveness of varied DDS responses to challenges related to recruiting and retaining consultative exam providers. SSA reviews consultative exams as part of its ongoing quality assurance reviews of more than 30,000 randomly sampled initial disability determinations. 
SSA reviewers assess the claim file for errors including unnecessary consultative exams; consultative exams from an improper source (such as failure to use a psychiatrist or psychologist to evaluate a mental disorder); or incomplete, inadequate, or unsigned consultative exam reports. Despite these overall quality reviews, SSA officials indicated they were unable to locate any studies SSA has conducted to evaluate the effectiveness of varied DDS practices. By undertaking such studies, SSA program managers could identify promising DDS practices for recruiting and retaining consultative exam providers, evaluate their effectiveness and potential for wider adoption, and thereby improve accountability by facilitating wider adoption of practices with the potential to help the agency achieve its service delivery goals, such as making the correct decision early in the process. SSA currently does not collect some information, such as nationally comparable data on missed consultative exams, that could help SSA evaluate DDS practices that may hold promise for improved recruitment and retention of consultative exam providers in other states. SSA Has Made Progress in Moving to Electronic Collection of Medical Records, but Faces Challenges Shifting to the Use of Electronic Medical Records SSA’s transition from paper medical records to the use of electronic images of medical records has increased opportunities for program efficiencies and agency collaboration. SSA prefers and encourages providers to submit medical records online, but it continues to receive a little more than half of these records in paper form. SSA has conducted only limited studies of the problems related to electronic submission of medical records and has not taken additional steps necessary to facilitate greater use of online submission options. 
In anticipation of the medical community’s replacement of paper with uniform electronic medical records, SSA is developing procedures to electronically request and receive electronic medical records and analyze them in ways that are expected to make the medical evidence collection process and disability decision making more efficient. Use of Electronic Images Enables SSA and DDSs to Collaborate More Efficiently As a step toward automating its disability process, SSA has successfully adopted the use of electronic images of medical records instead of paper copies for new claimants. Electronic images of medical records—records scanned, faxed, or uploaded into SSA’s computer database—are an important step in SSA’s transition to an automated process, as these images can be submitted, stored, and accessed electronically by authorized staff from distant locations. Electronic medical evidence—even in the form of electronic images—facilitates collaboration between SSA and DDSs. For example, electronic files have enabled SSA to implement a new process for resolving disagreements concerning DDS disability decisions reviewed by SSA before initial decisions are finalized. Rather than having SSA reviewers in each regional office review DDS decisions only in that region, electronic access to records enables staff in other regions and policy staff in SSA headquarters to review cases remotely. SSA introduced this process to promote more nationally consistent interpretations of SSA policy. Additionally, SSA and DDSs are able to shift workloads from office to office without mailing records, which takes time and increases the risk that records will be lost. However, SSA officials and DDS directors told us electronic image records have limitations in that they cannot be electronically analyzed and searched. 
Almost all surveyed DDS directors (50 of 51) reported that having medical records in electronic folders has increased productivity, but some noted frustrations, such as computer system usage problems. For example, several DDS examiners told us they were frustrated by occasional data system interruptions, due in part to performance problems with SSA’s computer system, which manages large amounts of data across multiple SSA and DDS computer systems. Over half of DDS directors (27 of 51) reported that one of the challenges to medical evidence collection was performance problems with SSA’s integrated computer system, and most (38 of 51) reported that improvement in the stability or responsiveness of the system would add great or moderate value to the DDSs’ medical evidence collection efforts. SSA Has Made Progress in Developing Options for Submitting Records Electronically, but More than Half Are Still Submitted on Paper One of SSA’s goals is to receive all medical records electronically. SSA maintains several avenues for providers to submit medical evidence online, and nearly all DDS directors (48 of 51) reported that DDS outreach to providers very often addressed options for electronic submission. Some providers, however, have told DDS officials that they find SSA’s online submission options inconvenient, difficult to use, or beyond their technical expertise. For example, many providers do not use SSA’s Electronic Records Express Web site to submit records, although it was designed to provide an efficient option for submitting medical records. This Web site limits the number of files that can be sent at one time, which is problematic for large providers such as big hospitals or medical centers. Additionally, infrequent users must call a designated DDS official to reset expired passwords if too much time has passed between submissions. 
SSA officials told us some providers opt to pay a commercial service to submit medical records, because the service provides for the submission of many files at once, which can be a more efficient option for providers of large volumes of medical records. SSA has recently deployed its own tool for submitting many files at once, called Webservices, but to use this option, medical providers must develop their own software interface to SSA’s Web site. Although SSA provides some technical support, some providers may still find this option beyond their technical expertise. As of November 2008, only two medical record providers were using Webservices. SSA officials noted that additional providers have expressed interest in using Webservices, but the agency temporarily limited its use to these two because of limits on the system’s capacity that it intends to resolve after a planned upgrade. DDS professional relations officers at a 2007 conference of the National Association of Disability Examiners noted various difficulties they face encouraging providers to use SSA’s Web site for submitting evidence online. To use online options for submitting medical records to SSA, some providers with electronic medical record systems may need to either convert files or print and scan them. In some cases, providers may find this too time consuming to be feasible. Although some providers have registered as Web site users, the difficulties they encountered were enough to make them stop using it. A DDS professional relations officer said that the DDS was getting so many calls from providers having problems with the Web site that it had to designate someone to handle the calls. On the other hand, the Mississippi DDS had early success encouraging providers to use the Web site by contracting with a former SSA official who provided detailed “start to finish” guidance on how to use the Web site. 
SSA held conferences in two cities in March 2008 to give its Web site users an opportunity to express their concerns, and it made some modifications to the Web site in July 2008. However, SSA has conducted only limited study of the problems with electronic submission of medical records, has not analyzed the barriers that various groups of providers (such as small- and medium-volume users) face in using the site, and has not developed a strategy for overcoming these barriers. The agency has made progress responding to some user concerns, for example, by enabling claimants’ representatives to view clients’ folders online, but SSA has not developed a strategy to address the concerns of other user groups. SSA’s efforts to realize its electronic submission goal also are hindered by the uneven pace of the medical community’s acceptance of electronic records. Despite a presidential call for widespread adoption of electronic health records by the year 2014, the Robert Wood Johnson Foundation estimated that less than one-fifth of responding U.S. physicians (17 percent) had at least basic electronic health records and only about 4 percent had fully functional electronic records systems. Nationwide, in September 2008, SSA received 52 percent of records for disability claims on paper, 21 percent through online submission, and 27 percent by fax. (See fig. 6.) One large provider accounts for most of the records SSA receives online: in September 2008, 57 percent of online submissions came from this large medical record copy service. We found variation among the DDSs in the percentages of records received online. In September 2008, 13 DDSs received more than 25 percent of records online while another 11 DDSs received less than 10 percent. DDSs also varied in the percentage of records received by electronic fax, with 10 DDSs receiving less than 15 percent of records by fax, and 5 DDSs receiving more than 50 percent. 
Although providers have submitted an increasing share of records via fax and online over the last few years, the growth in nationwide use of online submission options has slowed in recent months. SSA Is Beginning to Transform Its Process with Computer-to-Computer Requests and Receipts of Records in Uniform Formats While encouraging providers to submit medical records electronically speeds the collection of medical evidence, SSA is participating in preliminary tests of new computer processes that are expected to bring substantial additional efficiencies. With these new procedures, SSA computers request and receive electronic medical records directly from providers’ computers—records in uniform formats that SSA’s computer system can search and use to begin analysis of the claimant’s condition. The electronic images of medical records they currently use are not as suited for analysis as are electronic medical records in uniform formats. For example, currently, DDS examiners cannot electronically search a record or file for particular diagnoses and test results. Instead they must review all the medical records—hundreds of pages of records in some cases—in order to find the pertinent evidence. Most surveyed DDS directors (32 of 51) reported that options for submitting medical evidence in these new formats would be of great or very great value. In its strategic plan for fiscal years 2008 to 2013, SSA established a goal to transform its medical evidence collection process by automatically requesting and receiving electronic medical records through a nationwide health information network. This network is expected to enable medical providers to securely exchange electronic medical records in uniform formats. This will enable SSA to automatically search and analyze the records at the start of the disability determination process. 
Software will flag medical records that contain references to diagnoses and tests specified in SSA’s medical listings, and thus help examiners promptly determine whether claimants have impairments that qualify as disabilities. To help encourage the use of these processes, SSA is working with other agencies and health providers to develop electronic methods to request, receive, and analyze electronic medical records. For example, SSA and a Boston hospital have launched a prototype effort by which SSA electronically queries the hospital’s computer and retrieves the hospital’s electronic medical records for specific claimants. SSA plans to expand the Boston initiative to additional providers in the future. However, industry standards and protocols need to be further developed before this process can be replicated widely. For example, standards have only recently been developed for the document format used in the Boston initiative called the “continuity of care document.” This format is an electronic exchange standard for sharing patient summary information. In addition, challenges remain in electronic authorization procedures designed to protect the privacy of patients’ health records, as we have reported in previous reports and testimonies. Conclusions The collection of medical evidence in the disability determination process poses many challenges. The DDSs are operating in a high-volume environment and must balance reasonable efforts to obtain complete medical information with the need for timely determinations. Medical providers have constraints on their time and resources as well, and typically focus on diagnosis and treatment rather than assessment of functional ability. The difficulties some DDSs have in obtaining requested medical records and ensuring that claimants attend consultative exams suggest opportunities for continued improvement in the medical evidence collection process. 
Some DDSs have independently developed varied approaches to respond to these challenges, and all DDSs might benefit from learning from one another and testing and adopting some of these approaches, as appropriate. SSA, however, currently lacks some important data necessary to evaluate these approaches and identify promising practices, which might be shared to promote more timely and complete collection of relevant medical evidence by all DDSs. Meanwhile, SSA efforts to improve the use of consultative examinations and the collection of medical records proceed as the medical community undertakes a major transformation from paper to computer records. With a presidential goal of widespread adoption of electronic medical records by 2014, increasing numbers of providers may have certified electronic records systems capable of fulfilling DDS records requests in electronic formats. As a high-volume user of these records, SSA has incentives to keep pace with industry standards. As such, the electronic requesting and receiving of medical records being explored by SSA and a Boston hospital, and the development of the nationwide health information network, among other projects, hold promise for achieving even greater efficiencies in medical evidence collection for disability cases in the long run. In the near term, SSA has opportunities to realize greater efficiencies in the collection of medical evidence by encouraging providers to submit records online, saving both time and money by dispensing with inefficient copying and scanning. SSA has taken measures to improve its online submission options, but some providers continue to face difficulties using them and utilization remains limited. The reasons for this are unknown, even to SSA. 
An evaluation that studies the utilization of SSA’s online submission options, identifies barriers to wider usage, and develops strategies to address these barriers may help SSA identify cost-effective ways to encourage wider use of online submission methods, especially as more providers begin to use electronic medical records. Recommendations for Executive Action To foster timely and effective collection of medical evidence for disability determinations, we recommend that the Commissioner of SSA identify DDS medical evidence collection practices that may be promising, evaluate their effectiveness, and encourage other DDSs to adopt effective practices where appropriate. As a part of these evaluations, the Commissioner should work with the DDSs to find cost-effective ways to gather consistent data on the effectiveness of DDS medical evidence collection activities. Such data should include key indicators, such as the proportion of requests that yield medical records, the timeliness of medical record receipts, and how frequently claimants fail to attend consultative exams. To achieve more timely and efficient collection of medical records by encouraging medical evidence providers to submit records electronically until the nationwide health information network is in operation, we recommend that the Commissioner of SSA conduct an evaluation of the limited utilization of its online submission options. This evaluation should include an analysis of the needs of small, medium, and large providers; identify any barriers to expanded use; and develop strategies to address these barriers. Agency Comments We provided a draft of this report to officials at SSA for their review and comment. In its comments, SSA agreed with our findings and recommendations. Specifically, SSA noted the need for consistent nationwide data but indicated that this is complicated by the fact that each DDS uses one of five separate case processing systems. 
To address this limitation, SSA plans to include consistent management data in its common disability case processing system, currently in the planning stage with implementation to begin in 2011. The agency also described current and planned activities to identify and address barriers to electronic submission of data. SSA’s comments are reproduced in appendix IV. We are sending copies of this report to the Commissioner of SSA and others who are interested. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staffs have any questions about this report. Other major contributors to this report are listed in appendix V. Appendix I: Scope and Methodology To determine how Disability Determination Services (DDS) and the Social Security Administration (SSA) collect medical evidence, we used four primary sources of information: (1) a survey of the 51 DDSs, including those in all 50 states and the District of Columbia; (2) in-depth interviews and site visits with 5 states; (3) a review of 100 randomly selected initial claims files and 50 claim files at the appeals level; and (4) analysis of SSA data concerning disability determinations. To assess progress in moving from paper to electronic collection of medical evidence, we reviewed SSA documents concerning SSA’s and the health industry’s efforts, analyzed data compiled by SSA’s computer system regarding receipts of evidence, and discussed efforts to encourage electronic submission with SSA and DDS officials, as well as several medical providers. 
GAO Survey of DDS Directors on Collection of Medical Evidence for Initial DDS Disability Decisions Our survey of DDSs addressed the timeliness of provider responses to DDS requests for medical records, practices and challenges associated with collecting medical records, practices and challenges associated with obtaining consultative exams, outreach to the medical provider community, and SSA and DDS initiatives associated with medical evidence collection. We pretested the complete survey questionnaire at four of the five DDSs we visited during our site visits and tested selected questions during our fifth DDS site visit. We revised our questionnaire following these pretests, incorporating suggestions and feedback from DDS and SSA regional office officials who reviewed the draft questionnaire during these pretests. In May 2008, we sent confidential access information to each of the 51 DDS directors in the 50 states and the District of Columbia. We received a response from all 51 of these directors, for a 100 percent response rate. We analyzed the survey responses and present selected results in our report. In a few instances, we include results only from DDSs that submitted complete responses, and we computed national totals from DDS-supplied information. For example, we limited our analysis of DDS responses to questions about receipt of requested medical records to the 37 DDSs that provided the numbers of requested records received within 10 days, 11 to 20 days, 21 to 30 days, more than 30 days, and the number not received. Several DDSs responded to some, but not all, of these questions, and other DDSs did not respond to any of these questions. Some of the DDSs estimated their responses while others indicated they were able to compute the information about medical record requests and receipts from their databases. One DDS director indicated that the number of records not received included provider responses indicating that the requested records were not available. 
Another indicated that the number the DDS provided for records not received included instances in which the DDS received records for which no payment was due. Checking with DDS directors in our site visit states, we determined that some of these DDSs used these same approaches, but others did not. In addition, we enforced skip patterns that were published in the survey. State DDS Site Visits We visited DDSs in five states—California, Mississippi, New York, Vermont, and Wyoming—to gain a more detailed understanding of the medical evidence collection process, related challenges, and the availability of relevant data. At each of the DDSs we visited, we typically met with the DDS Director, Professional or Medical Relations Officer, and the Information Technology Specialist(s). SSA regional office representatives joined us for some meetings as well. We also met individually with several experienced claims examiners selected by the DDS directors in each state. In addition to describing their collection practices and challenges, DDS officials provided valuable feedback on the content and organization of our questionnaire on medical evidence collection in advance of its release to DDS directors in all 50 states and the District of Columbia. In California and New York, we visited two of those states’ multiple DDS branch offices: Sacramento and Oakland, California, and Albany and Manhattan, New York. During each of these branch office visits we also spoke with experienced claims examiners. The information we obtained from each DDS we visited provided useful context on DDS operations and detailed examples of DDS responses to challenges, but information from these site visits is not intended to describe the operations of all DDSs. We considered a variety of factors in determining which DDSs to visit, including geographic diversity, size, type of administrative computer processing system used, and SSA-provided performance data. 
These performance data included productivity, accuracy, percentage of claims with at least one invoiced medical record, percentage of all medical records received electronically, and percentage of claims with at least one consultative exam. We selected DDSs with both high and low indicators on these measures to illustrate examples of states with a variety of different medical evidence collection practices. The information we obtained at our site visits is illustrative and not intended to reflect the experiences of DDSs in other states. Table 4 presents some of the indicators we consulted in selecting the five DDSs to visit. Reviews of Random Samples of Claimants’ Folders To obtain more detailed information about the medical evidence collection process, we reviewed two sets of randomly selected, but not projectable, samples of case files: (1) 100 initial disability claims files—electronic folders containing documentation of the disability determination for individual disability claimants and (2) 50 folders for claims decided at the administrative law judge level (ALJ) or appeal. For results from these reviews, see appendixes II and III. To select these 100 initial disability claims folders, we reviewed all DDS decisions during fiscal year 2007 for Supplemental Security Income (SSI) and Disability Insurance (DI) disability benefits and excluded reconsiderations, continuing disability reviews, reopenings, and informal remands. For administrative purposes, we also excluded records that SSA maintained using paper records, rather than certified electronic folders. In order to avoid overrepresentation of claimants who filed for both SSI and DI simultaneously (30 percent of DDS initial decisions in fiscal year 2007), we eliminated duplicate listings of these claimants in our data set. We then randomly selected 100 cases from among the approximately 2.3 million cases in the selected data set. 
These folders contained copies of SSA and DDS forms used in the development of the case including documentation for both DI and SSI claims. These documents often included medical evidence received from physicians and other providers, claimant and third-party assessments of the claimant’s functional abilities, reports from providers of consultative exams of the claimant, forms providing evaluations of the evidence by DDS medical consultants, DDS forms for obtaining medical source statements from providers, forms and letters used to request medical and nonmedical evidence, evidence submitted by the claimant or his or her authorized representatives, and documents related to the disability determination such as SSA form 831, and Personal Decision Notices and similar notices for denied claims. Similarly, to select a sample of cases decided by SSA ALJ hearings offices, we obtained from SSA an extract of SSA’s Case Processing and Management System data set managed by SSA’s Office of Disability Adjudication and Review. We selected records for decisions by the ALJ hearing offices during the first 6 months of fiscal year 2008 concerning initial claims for SSI and DI disability benefits that had been denied at the DDS initial level. Some had been appealed to the DDS (a “reconsideration”) or to the federal reviewing official, while others were appealed directly to the SSA ALJ hearing office. We also excluded records for which SSA had paper records, rather than certified electronic folders. We randomly selected 50 of these records. SSA staff prepared a CD for each case folder. 
These electronic folders provided documents compiled by SSA and the DDS during the initial determination, as well as additional documents compiled subsequently, including those obtained during reconsideration of the initial decision by the DDS, documents provided by authorized representatives of the claimant, copies of medical evidence concerning treatment and examinations after the initial determination, medical source statements, an interrogatory, a deposition, and ALJ decision documents. Analysis of SSA Data To obtain more detailed data concerning DDS collection practices and to examine variations among DDSs, we obtained from SSA and analyzed a variety of computerized data. These included data on initial and reconsideration filings received, decided, and pending at year end; filings approved and denied; filings for which one or more medical evidence of record was purchased; filings for which one or more consultative exams was requested; expenditures for the purchase of medical records and consultative exams; errors in DDS initial determinations identified by SSA quality assurance reviewers; the results of evaluations of collected medical records and consultative exam reports by SSA quality assurance reviewers; and medical records obtained via various methods, including paper and faxed submissions and online submission options such as SSA’s Electronic Records Express Web site. We used these data to summarize and compare DDS practices and to display these data graphically. We also used these data to provide additional information concerning the initial claim case files described above. To conduct limited tests of the reliability of these data, we obtained copies of 831 data and Case Processing and Management System data from SSA and compared results provided by SSA with results from our analysis of these data sources. 
Appendix II: Selected Results from Analysis of 100 Randomly Selected Initial Disability Cases The following tables provide selected findings from our review of 100 randomly selected cases for claimants with initial DDS determinations in fiscal year 2007. Appendix III: Medical Evidence Collection Process at the Administrative Hearing Level The process for collecting medical evidence at the administrative hearing level typically differs from the process at the DDS level. If the claimant for disability benefits is dissatisfied with the DDS’s initial decision, he or she can appeal. In many cases the initial appeal is a request for reconsideration by the DDS. Then, if the claimant is not satisfied with the DDS decision, he or she can appeal and request a hearing before an administrative law judge (ALJ), who will review the case in light of the evidence gathered by the DDS as well as additional evidence obtained. The responsibility for providing evidence to support the appeal falls on the claimant. A claimant may be represented by an attorney or other representative, who may collect the additional evidence on his or her behalf. If necessary evidence is not provided, the ALJ must attempt to fully and fairly develop the evidence. Most claimants who appeal to an SSA hearings office are represented by attorneys or others who enter into agreements with SSA providing payment to the representative, which may be from a specified proportion of awarded retroactive disability benefits in cases where claimants win their appeal. SSA requires ALJs to conduct a prehearing review of all evidence and determine whether additional development is needed. Claimants’ representatives may submit updated medical records. If the ALJ is unable to obtain adequate evidence, the ALJ also can request consultative exams or tests. Similarly, if additional evidence is needed, the ALJ may have an independent medical expert review the file and answer written interrogatories, or testify at the hearing. 
Some ALJs ask the DDS to gather additional evidence on their behalf. Others have SSA hearings office staff gather evidence for the hearing. ALJs have additional options to obtain opinion evidence from claimants’ providers, including sending interrogatories or questionnaires, requesting testimony at the hearing, and, under certain circumstances, issuing administrative subpoenas. Claimants’ representatives told us that letters describing the possibility of such subpoenas are sometimes sent, but subpoenas are rare. As part of SSA’s continuing efforts to reduce the backlog of claims at the hearing level, it has implemented the Medical Expert Screening Initiative Business Process, a new pre-hearing initiative that uses interrogatories sent to medical experts to identify disability claimants whose impairments are most likely to meet the requirements for disability. If the medical experts’ responses to the interrogatories show that a fully favorable decision may be made on the record, without the need for additional evidence or a hearing, the case is referred to an attorney adjudicator in that hearing office to issue the decision, if warranted. ALJs and DDSs use the same definition of disability, but they follow different administrative guidance. SSA guidance for DDSs is included in SSA’s Program Operations Manual System; its counterpart for ALJs is called the Hearings, Appeals, and Litigation Law Manual. To obtain information on how medical evidence is collected at the ALJ hearing level, we reviewed electronic copies of 50 claims that were decided at the appeals level during the first half of fiscal year 2008. Claims were randomly selected from all decided initial disability claims nationwide that had a certified, fully electronic folder. 
The small sample size means that the information we obtained from these selected cases cannot be considered representative of all cases at the appeals level, but it provides examples of how medical evidence is collected at the appeals level. These included 34 fully favorable decisions, 1 partially favorable decision (a changed date for onset of the claimant’s disability), and 10 unfavorable decisions. In 4 cases, the case was dismissed or the claimant withdrew. The tables below summarize results from our review of these cases: ALJs often gather nonmedical as well as medical evidence to reach a decision. They typically observe the claimant during the hearing, in- person, or by video conference. One ALJ wrote, for example, “Furthermore, the state agency consultants did not adequately consider that the claimant’s statements concerning the intensity, persistence and limiting effects of these symptoms are generally credible.” Hearings also sometimes involve evidence from vocational experts—experts in assessing a claimant’s ability to perform various jobs. In 3 of the 50 cases reviewed, the ALJ cited medical-vocational rules as the basis of their decision. By the time the cases we reviewed were decided by the SSA hearings office, medical evidence had typically been added that was not available at the time of the initial DDS decision. In most of these cases, the claimant’s representative collected the new evidence and submitted it to SSA. Often this included evidence from sources that had not provided medical records at the initial DDS level. In several cases the representative obtained a medical source statement from a source that had not previously submitted one, but had provided medical records. In 12 cases, evidence indicated that the claimant’s condition proved more prolonged than the DDS expected. 
Appendix IV: Comments from the Social Security Administration

Appendix V: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, Michael J. Collins, Assistant Director; Benjamin P. Pfeiffer; Susan L. Aschoff; Alexander G. Galuten; Catherine M. Hurley; Karen A. Jarzynka; Katherine N. Laubacher; Jennifer R. Popovic; Suzanne C. Rubins; Meghan H. Squires; Vanessa R. Taylor; Rachael C. Valliere; and Walter K. Vance made key contributions to this report.

Related GAO Products

Social Security Disability: Management Controls Needed to Strengthen Demonstration Projects. GAO-07-331. Washington, D.C.: September 26, 2008.

Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008.

Social Security Disability: Better Planning, Management, and Evaluation Could Help Address Backlogs. GAO-08-40. Washington, D.C.: December 7, 2007.

High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.

Disability Programs: SSA Has Taken Steps to Address Conflicting Court Decisions, but Needs to Manage Data Better on the Increasing Number of Court Remands. GAO-07-331. Washington, D.C.: April 5, 2007.

Social Security Administration: Agency Is Positioning Itself to Implement Its New Disability Determination Process, but Key Facets Are Still in Development. GAO-06-779T. Washington, D.C.: June 15, 2006.

Electronic Disability Claims Processing: SSA Is Proceeding with Its Accelerated Systems Initiative but Needs to Address Operational Issues. GAO-05-97. Washington, D.C.: September 23, 2005.

Social Security Administration: More Effort Needed to Assess Consistency of Disability Decisions. GAO-04-656. Washington, D.C.: July 2, 2004.

Social Security Disability: Commissioner Proposes Strategy to Improve the Claims Process, but Faces Implementation Challenges. GAO-04-552T. Washington, D.C.: March 29, 2004.
Electronic Disability Claims Processing: SSA Needs to Address Risks Associated with Its Accelerated Systems Development Strategy. GAO-04-466. Washington, D.C.: March 26, 2004.

Social Security Administration: Strategic Workforce Planning Needed to Address Human Capital Challenges Facing the Disability Determination Services. GAO-04-121. Washington, D.C.: January 27, 2004.

SSA Disability Decision Making: Additional Steps Needed to Ensure Accuracy and Fairness of Decisions at the Hearings Level. GAO-04-14. Washington, D.C.: November 12, 2003.

Electronic Disability Claims Processing: Social Security Administration’s Accelerated Strategy Faces Significant Risks. GAO-03-984T. Washington, D.C.: July 24, 2003.

Social Security Disability: Efforts to Improve Claims Process Have Fallen Short and Further Action is Needed. GAO-02-826T. Washington, D.C.: June 11, 2002.

Social Security Disability: Disappointing Results from SSA’s Efforts to Improve the Disability Claims Process Warrant Immediate Attention. GAO-02-322. Washington, D.C.: February 27, 2002.

SSA Disability Redesign: Actions Needed to Enhance Future Progress. GAO/HEHS-99-25. Washington, D.C.: March 12, 1999.
The timely collection of relevant medical evidence from providers, such as physicians and psychologists, is key to the Social Security Administration (SSA) process for deciding whether an estimated 2.5 million new claimants each year have impairments that qualify them to receive disability benefits. The initial determinations are generally made by state agencies called Disability Determination Services (DDSs). We evaluated: (1) the challenges, if any, in collecting medical records from the claimants' own providers and ways SSA and the DDSs are responding to these challenges; (2) the challenges, if any, in obtaining high-quality consultative exams and ways SSA and the DDSs are responding to these challenges; and (3) the progress SSA has made in moving from paper to electronic collection of medical evidence. We surveyed 51 DDS directors, visited 5 state DDSs, reviewed sample case files, and interviewed officials with SSA, DDSs, and associations for claimants and providers. Obtaining timely and complete medical records is a challenge to DDSs in promptly deciding disability claims, and DDSs have responded with additional provider contacts and adjustments to their payment procedures. Although DDSs pay most medical providers for medical records and SSA pays the DDSs to cover these expenses, 14 of 51 DDSs reported the percentage of requests for which they did not receive records was 20 percent or more in fiscal year 2007. In response to this challenge, all DDSs conduct follow-up with providers and claimants to urge them to provide records. Two-thirds of the DDSs (34 of 51) have also implemented more timely payments for records, and 6 increased the amount they pay. Although SSA evaluates DDS collection of medical records, it does not compile key data necessary to identify and share promising collection practices. Recruiting and retaining qualified providers is a challenge to obtaining consultative exams needed to supplement insufficient medical records.
For example, 41 of 51 DDSs reported routinely asking claimants' own providers to perform these exams; yet 34 reported providers never or almost never agree to do so. DDS directors in our survey believe that current payment rates account for some of the difficulty recruiting and retaining consultative exam providers. In response to these challenges, 32 DDSs rely on medical providers who specialize in performing disability evaluations, and 20 pay providers for time spent preparing for appointments claimants fail to attend. SSA evaluates evidence from consultative exams, but these evaluations and the data they yield are too limited to identify and share promising DDS practices. SSA has made progress moving to electronic collection of medical records, but faces challenges in fully implementing electronic retrieval and analysis of medical evidence. SSA now uses electronic images instead of paper copies of new claimants' records. Though SSA seeks to obtain all records electronically and provides options for online submission of records, only one large provider accounts for most of the records submitted online, and about half of all records received are on paper. To date, SSA has taken only limited action to identify and analyze the barriers providers face in using current electronic record submission options, and has not developed a strategy to address them. In the long run, SSA is participating in an advanced prototype to collect medical records in formats that can be searched and analyzed by electronically querying a hospital's records database and directly retrieving the claimants' records.
Background

FDA is responsible for the safety and quality of domestic and imported pharmaceutical products under the Federal Food, Drug, and Cosmetic Act. Specifically, FDA’s Center for Drug Evaluation and Research (CDER) establishes standards for the safety, effectiveness, and manufacture of prescription pharmaceutical products and over-the-counter medications. CDER reviews the clinical tests and manufacture of new pharmaceutical products before they can be approved for the U.S. market, and it regulates the manufacture of pharmaceutical products already being sold to ensure that they comply with federal statutes and regulations, including current “good manufacturing practice” (GMP). GMP requirements are federal standards for ensuring that pharmaceutical products are of high quality and produced under sanitary conditions. In addition, CDER enforces the act’s prohibitions against the importation of adulterated, misbranded, and counterfeit pharmaceutical products. CDER regulates the manufacture of pharmaceutical products by requesting that FDA’s Office of Regulatory Affairs (ORA) inspect manufacturers both at home and abroad to ensure that pharmaceuticals are produced in conformance with GMPs. ORA manages investigators located in FDA’s 21 district offices. Approximately 375 investigators and 75 microbiologists and chemists conduct inspections of foreign pharmaceutical manufacturers. ORA’s investigators inspect manufacturers that produce pharmaceuticals in finished form as well as manufacturers that produce the active ingredients used in finished pharmaceutical products. Typically, ORA investigators travel abroad for about 3 weeks at a time, during which they inspect approximately three manufacturers. Each inspection ranges from 2 to 5 days in length, depending on the number and types of products inspected.
In fiscal year 1996, FDA reviewed the results of 287 inspections of foreign pharmaceutical manufacturers conducted by its investigators in 35 countries (see figure 1). About 70 percent of these inspections were performed in manufacturing facilities that produce the active ingredients used in finished pharmaceutical products. CDER requests that ORA’s investigators conduct inspections for three reasons. First, CDER requests pre-approval inspections to ensure that before a new drug application is approved, the manufacturer of the finished pharmaceutical product as well as each manufacturer supplying a bulk pharmaceutical chemical used in the finished pharmaceutical product comply with GMPs. Each step in the manufacture and processing of a new drug, from the sources of raw materials to final packaging, must be approved by FDA. Second, CDER requests postapproval or routine surveillance inspections to periodically assess the quality of marketed pharmaceutical products. During these inspections, investigators verify that manufacturers of finished pharmaceutical products and bulk pharmaceutical chemicals comply with GMPs. Third, CDER requests for-cause inspections when it receives information indicating problems in the manufacture of approved pharmaceutical products. In addition, CDER requests for-cause inspections of manufacturers that were not in compliance with GMPs during previous inspections. In for-cause inspections, FDA investigators determine whether the manufacturer has improved its production processes to comply with GMPs. During an inspection, the ORA investigator examines the pharmaceutical manufacturer’s production processes, product packaging and labeling processes, product contents, warehouse practices, quality control, laboratories, recordkeeping systems, and other manufacturing practices.
The investigator reports observations of significant objectionable conditions and practices that do not conform to GMPs on the list-of-observations form, commonly referred to as FDA form 483. At the end of the inspection, the investigator gives a copy of the form 483 to the highest ranking management official present at the manufacturing facility. The investigator also discusses the observations on the form 483 with the firm’s management to ensure that they are aware of any deviations from GMPs that were observed during the inspection and suggests that the manufacturer respond to FDA in writing concerning all actions taken as a result of the observations. Figure 2 shows FDA’s process for managing foreign pharmaceutical inspections. After returning to the district office, the investigator prepares an establishment inspection report that describes the manufacturing operations observed during the inspection and any conditions that may violate federal statutes and regulations. The investigator also recommends whether the manufacturer is acceptable to supply pharmaceutical products to the United States. The investigator’s district office formally endorses the recommendation after reviewing the inspection report to determine if it supports the proposed recommendation. The district office forwards its endorsement along with the investigator’s establishment inspection report and the form 483 to CDER. The foreign inspection team within CDER’s Office of Compliance reviews the documentation and the manufacturer’s written response to FDA about any corrective actions taken. CDER then decides whether the manufacturer complies with GMPs. Inspections of pharmaceutical manufacturers are classified in one of three categories. As table 1 shows, during fiscal year 1996, 238 inspections (or 83 percent) revealed deviations from GMPs. Of these, CDER determined that 46 inspections revealed deviations from GMPs that ranked in the most serious (or “official action indicated”) category. 
When CDER classifies a foreign pharmaceutical inspection as “official action indicated” (OAI), it sends the manufacturer an enforcement letter. CDER issues two types of enforcement letters: untitled letters and warning letters. CDER issues an untitled letter to a foreign manufacturer when the inspection was conducted as part of its review of a new drug application and the manufacturer has not previously been inspected and accepted to supply approved pharmaceutical products to the United States. The untitled letter notifies the manufacturer that its manufacturing process does not comply with federal statutes and regulations and that failure to take corrective action may result in the disapproval of any new drug application on which the manufacturer is listed. CDER issues a warning letter to a foreign manufacturer when a subsequent inspection of its facility is classified as OAI. Warning letters are issued to manufacturers that are already supplying approved pharmaceutical products to the United States. Warning letters indicate that serious manufacturing deficiencies can affect and are affecting commercially marketed products. The warning letter notifies the manufacturer of its violation of federal statutes and regulations and warns that failure to take corrective action may result in further FDA enforcement action. CDER issued 17 untitled letters and 19 warning letters to foreign pharmaceutical manufacturers in fiscal year 1996. If CDER classifies an inspection as OAI and believes the manufacturer’s product is adulterated because it was not produced in compliance with GMPs, CDER can instruct the district offices to cooperate with the U.S. Customs Service in detaining the manufacturer’s product when it is offered for entry into the United States. In such a situation, the warning letter may also threaten to detain the manufacturer’s products at U.S. entry points or notify the manufacturer that detention will occur.
Customs, which controls the points where foreign shipments enter the United States, ensures that adulterated pharmaceutical products are either exported from the United States or destroyed. In fiscal year 1996, CDER determined that the pharmaceutical products made by two foreign manufacturers should be detained.

Timeliness of Inspection Reports Has Improved, but Delays in Taking Prompt Enforcement Actions Continue

FDA’s 1988 internal evaluation found that delays in the submission of final inspection reports by investigators made it difficult for FDA to take prompt enforcement action against foreign manufacturers that did not comply with federal regulations that ensure the safety, purity, and quality of pharmaceutical products. Since then, FDA has taken several actions that have reduced the average time required by investigators to submit foreign inspection reports to headquarters. Despite this improvement, only about a quarter of the warning letters FDA issued in fiscal year 1996 to foreign pharmaceutical manufacturers found to have serious deficiencies met FDA’s timeliness standards. The lack of prompt enforcement action may impair FDA’s ability to prevent foreign manufacturers from exporting contaminated or adulterated pharmaceutical products to the United States.

FDA Has Acted to Improve the Timeliness of Enforcement Actions

FDA’s 1988 internal evaluation of its foreign inspection program reported that the average length of time required from the completion of an inspection to CDER’s receipt of a final report was slightly more than 3 months. Delays in submitting inspection reports may hinder CDER’s ability to initiate timely enforcement actions to prevent contaminated or adulterated products from entering the United States. To reduce these delays, the evaluation recommended that FDA explore new ways of processing inspection reports.
To strengthen its enforcement strategy, FDA revised its timeliness standards for new drug applications in October 1991 by requiring investigators and districts to submit all inspection reports classified as OAI or “voluntary action indicated” (VAI) to CDER within 30 work days of completing inspections. FDA also revised its enforcement policy to require CDER to review OAI inspection reports containing recommendations for warning letters and issue the letters within 15 work days. According to FDA officials, additional changes were made to help investigators submit more timely inspection reports on foreign manufacturers. In the early 1990s, FDA reduced the length of foreign inspection trips from about 6 weeks to 3, as well as the number of inspections an investigator conducted during a trip. The agency also revised inspection requirements for international travel to build time into foreign inspections for investigators to prepare their reports and provided investigators with notebook computers so that they could begin preparing their reports overseas.

Inspection Reports Are More Timely, but Many Miss FDA’s Reporting Deadline

Although FDA has reduced the average time it takes to submit reports after inspections are completed from slightly more than 3 months to 2 months, over half of the reports in fiscal year 1996 did not meet FDA’s timeliness standard. Our analysis of 287 foreign inspection reports CDER reviewed during fiscal year 1996 showed that about 42 percent (102) of the inspections that identified GMP deficiencies (either OAI or VAI) were submitted on time, that is, within 30 work days of completing inspections. However, 58 percent (141) of the inspection reports were not timely (see table 2). About half of the inspections with the most serious deficiencies (classified as OAI or requiring official action) were submitted on time and half were not. Most of the OAI inspection reports that were submitted to CDER after the 30-day deadline were submitted within 60 work days.
CDER received about one-third of the inspection reports with less serious deficiencies (classified as VAI, allowing foreign manufacturers to voluntarily make corrections) on time; two-thirds were late. FDA reported more recently that its analysis of fiscal year 1997 data showed a modest improvement in the submission times for OAI and VAI inspection reports. FDA reported that in its analysis of 230 foreign inspection reports reviewed during fiscal year 1997, about 47 percent (75) of the inspections that identified GMP deficiencies (either OAI or VAI) were submitted on time. However, 53 percent (85) of the inspection reports were not timely. Our review of inspection reports for China and India showed that regardless of the seriousness of the GMP deficiencies found, CDER did not receive the majority of the inspection reports within the 30-work-day requirement. Specifically, 22 of the 36 OAI and VAI inspection reports (61 percent) we reviewed for China and India were not submitted on time. Although there was no one reason for the late submissions, CDER officials told us that an investigator may return to the United States 3 weeks after conducting his or her first inspection, making it impossible to submit an inspection report within 30 work days. Some investigators told us that the paperwork, which includes preparing numerous documents and exhibits to support the deficiencies observed, is time-consuming. In addition, after returning to their district offices, some investigators stated that they are often confronted with competing demands on their time, such as responding to problems with domestic pharmaceutical manufacturers.

FDA Enforcement Actions Still Take Too Long

Although FDA established a 15-work-day standard for issuing warning letters, only about one out of four warning letters issued by CDER during fiscal year 1996 was issued on time. The extent of these delays can be significant.
For example, CDER took 4 months (80 work days) to issue a warning letter to one Chinese manufacturer inspected in September 1994. In the inspection report, received by CDER 2 months after the inspection, the investigator noted 20 significant deviations from U.S. GMPs and wrote that the manufacturer was incapable of producing the injectable pharmaceutical product for which it was seeking approval. The investigator wrote that “Virtually all of the processing equipment for the first phases of processing is filthy, in extreme state of disrepair, and was removed during this inspection.” Despite the severity of the inspection findings, it was not until March 1995 that CDER sent a warning letter to the manufacturer. As shown in table 3, it took more than 15 work days to issue 23 of the 30 warning letters sent to foreign pharmaceutical manufacturers. After receiving the inspection reports from investigators, CDER took between 21 and 148 work days to issue the 23 late warning letters, with an average of 57 work days. According to a CDER official, CDER experienced staffing shortages during the period we examined that delayed the review of incoming foreign inspection reports. More recently, FDA reported that its analysis of fiscal year 1997 data showed a substantial improvement in the time CDER spent in processing warning letters. FDA reported that 30 percent, or 3, of the 10 warning letters issued to foreign pharmaceutical manufacturers during fiscal year 1997 were sent within 15 work days. On average, FDA issued the 10 warning letters in about 24 work days. However, compared with the number of warning letters issued during fiscal year 1996, FDA issued two-thirds fewer warning letters during fiscal year 1997. Our analysis of inspections conducted in China and India between January 1, 1994, and May 15, 1996, showed that CDER did not issue any of the six warning letters within the agency’s 15-work-day standard.
The number of work days from CDER’s receipt of inspection reports to the issuance of these warning letters ranged from 24 to 86 days, with an average of 40 days. In one case, a February 1994 inspection of a plant in India making an antibacterial agent identified serious problems, including failure to ensure that the proper manufacturing process was followed and inadequate testing of impurities in the product and water used by the plant. The investigator also found that two deficiencies identified during a 1985 FDA inspection had not been fully corrected to meet U.S. quality standards. Given the significance of the deficiencies found during the 1994 inspection, the investigator and his district office recommended that CDER (1) not approve the new drug application, (2) advise FDA district offices to deny entry into the United States of any pharmaceutical products from this manufacturer, and (3) pursue additional enforcement actions against pharmaceutical products from the manufacturer that were already distributed in the United States. Notwithstanding the seriousness of the problems or the recommended enforcement action, it took CDER officials 2 years to discover that they had not taken any enforcement action against this foreign manufacturer. While CDER officials agreed with the district recommendation and planned to issue a warning letter, the letter was never sent to this foreign pharmaceutical manufacturer because CDER lost track of it during staffing changes. In March 1996, CDER officials determined that they had allowed this foreign manufacturer to continue shipping already approved bulk pharmaceutical products to the United States, even though the inspection had identified manufacturing problems such as unacceptable impurity testing procedures, no periodic review of the production process, and the failure to investigate product yields that were lower than the specified amount.
In another case, it took CDER about 3 months to issue a warning letter to a foreign pharmaceutical manufacturer operating with 17 serious GMP deficiencies. FDA inspected this foreign manufacturer in April 1995, after receiving several new drug applications listing the manufacturer as a supplier of bulk pharmaceutical chemicals for use in U.S. finished drug products. The investigator found that the manufacturer did not have an appropriate impurity testing system and identified questionable results from impurity testing. The investigator believed that these questionable results represented a deliberate attempt to conceal instances in which the pharmaceutical products contained higher levels of impurities than permitted by U.S. standards. As a result, the investigator and his district office recommended that CDER not approve the new drug applications and that it issue a warning letter to the manufacturer. Notwithstanding the serious nature of the investigator’s findings, it took ORA about 2 months to submit the inspection report to CDER and another month for CDER to review the report. On August 1, 1995, slightly more than 3 months after the inspection, CDER issued a warning letter stating that it would not approve any applications listing this foreign pharmaceutical manufacturer as a supplier. During the time it took CDER to act on the serious deficiencies and possible fraud identified by the investigator, a U.S. finished-drug manufacturer discovered that several containers labeled as a bulk pharmaceutical chemical product from the same foreign manufacturer contained an herbicide rather than a bulk chemical.

FDA Verifies Corrective Actions in Only About Half the Cases in Which Serious Deficiencies Are Identified

Members of the Congress and industry representatives have been concerned about the consistency of FDA inspections and subsequent enforcement actions taken against domestic and foreign pharmaceutical manufacturers.
In FDA’s 1993 internal evaluation, these concerns were attributed to differences in how field investigators and headquarters staff evaluated foreign inspection results and determined the appropriate follow-up activity. Moreover, the internal evaluation acknowledged that there was a perception that FDA relied on foreign facilities to correct manufacturing deficiencies because there were insufficient resources to conduct follow-up inspections to confirm that corrective actions had been implemented. Our analysis of the foreign inspection reports reviewed during fiscal year 1996 showed that in about half the instances in which field staff concluded that the severity of inspection findings warranted a reinspection, headquarters disagreed. For domestic manufacturers with a history of serious GMP manufacturing problems, FDA typically conducts a reinspection to verify that promised corrective actions have been implemented. However, current FDA policy does not address the need for verifying the corrective actions of foreign pharmaceutical manufacturers in instances in which FDA headquarters downgrades the severity of inspection findings. As a result of downgrading, FDA conducted far fewer reinspections of foreign manufacturers than was recommended by its investigators. Without reinspections, FDA cannot adequately verify that foreign manufacturers have corrected serious deficiencies that could affect the safety, purity, and quality of their pharmaceutical products.

FDA’s 1993 Internal Review Identified Differences in the Evaluation of Inspection Findings That Affected the Frequency of Reinspections

In the 1993 internal discussion paper, FDA managers found that agency headquarters’ personnel downgraded the severity of the manufacturing deficiencies identified in foreign inspections and the need for reinspecting violative foreign manufacturers.
However, they stated that FDA did not downgrade the severity of inspection findings for domestic manufacturers that had similar deficiencies. According to the review, this was caused by different FDA units being responsible for reviewing and evaluating inspection results and planning reinspections of foreign and domestic pharmaceutical manufacturers to verify corrective actions. The discussion paper identified several instances in which approval of new drug applications was withheld, based on significant GMP deficiencies discovered during domestic inspections, whereas similar deficiencies found at foreign manufacturing facilities resulted in the approval of applications. In the discussion paper, FDA managers stated that differences between the evaluations of foreign and domestic inspection results existed for two reasons. First, unlike for domestic inspections, decisions regarding the severity of the manufacturing deficiencies identified during foreign inspections are made by CDER staff rather than by the field investigators who actually conducted the inspections and their district office managers who endorse their recommendations. Second, they indicated that a perception existed that FDA has too few resources to conduct a reinspection of a foreign manufacturer to verify that corrections have been made. According to the review, this leads CDER staff to “trust” a foreign manufacturer to correct serious manufacturing deficiencies. The review described several instances in which significant GMP deficiencies at foreign facilities received little or no enforcement action, while similar deficiencies at domestic facilities resulted in product recalls or application denials. 
To correct this problem, the discussion paper recommended that district offices, where the investigators are located, rather than CDER be responsible for evaluating the results of foreign inspections and determining the appropriate enforcement action, including the need for reinspecting the manufacturer. FDA officials disagreed with the assertion that its inspection and enforcement programs were applied disparately to domestic and foreign pharmaceutical manufacturers. Further, they argued that district offices already had this responsibility.

CDER Often Downgrades Investigators’ Recommended Classifications of Inspection Findings

Our analysis of FDA computer data of foreign inspection reports reviewed during fiscal year 1996 showed that CDER and field investigators often disagree on the classification of inspection findings and the severity of the enforcement action that should be taken against foreign pharmaceutical manufacturers when GMP deficiencies are found. For 82 of the 287 foreign inspections reviewed during this period, field investigators concluded that the severity of the GMP deficiencies they observed warranted that CDER initiate official action against the manufacturers. The investigators’ district offices also endorsed their classifications of these inspections and their recommendations for enforcement action before these were forwarded along with the inspection reports and the form 483s to CDER. However, CDER officials downgraded the inspection classifications and recommendations for enforcement action in 41 of these inspections, based on foreign manufacturers’ promises to implement corrective actions. CDER officials decided that rather than OAI, 40 of these inspections should be classified as VAI and 1 should be classified as “no action indicated” (NAI). Conversely, CDER officials upgraded the field investigators’ classifications and recommendations for enforcement action in 11 foreign inspections and classified them OAI rather than VAI.
In instances in which inspections found serious GMP deficiencies but CDER downgraded the inspection classifications, FDA’s procedures allow foreign manufacturers to continue exporting pharmaceutical products to the United States without reinspections to evaluate whether they comply with U.S. quality standards. The classification of an inspection determines to a large degree whether a reinspection is conducted. The OAI classification is the most serious and requires FDA to reinspect the manufacturer to verify that it has improved its production processes to comply with GMPs. When CDER does not accept the investigators’ recommendations and classifies inspections as VAI rather than OAI, foreign manufacturers are allowed to voluntarily correct their deficiencies and respond in writing to FDA about the corrections made. FDA officials have acknowledged that they sometimes base their downgrades of inspection classifications and approvals of new drug applications on foreign manufacturers’ promises to implement corrective actions. They contend that during the next inspection, whenever it may be, FDA confirms that the corrections were made. Our analysis of FDA computer data of foreign inspection reports reviewed during fiscal year 1997 showed that CDER and field investigators continue to disagree on the classification of inspection findings and the severity of the enforcement action that should be taken against foreign pharmaceutical manufacturers when GMP deficiencies are found. For 49 of the 230 foreign inspections reviewed during this period, field investigators concluded that the severity of the GMP deficiencies they observed warranted that CDER initiate official action against the manufacturers. However, CDER officials downgraded the inspection classifications and recommendations for enforcement action in 32 of these 49 inspections (65 percent), deciding that they should be classified VAI rather than OAI.
CDER officials also upgraded the field investigators’ classifications and recommendations for enforcement action for two foreign inspections and classified them OAI rather than VAI. FDA officials believe that in some instances the agency can adequately verify that foreign manufacturers have corrected serious deficiencies without reinspecting them. They said that foreign pharmaceutical manufacturers nearly always respond in writing concerning corrective actions taken as a result of the observations listed on the FDA form 483. They said that these responses typically include copies of the manufacturer’s documentation of the corrective actions taken, such as photographs, laboratory test results, and corrected manufacturing procedures. Consequently, FDA officials said that, based on the deficiencies found, the documentation provided, and the manufacturer’s history of implementing corrective action, they can evaluate a manufacturer’s corrective actions to ensure the safety, purity, and quality of its pharmaceutical products without conducting a reinspection. While we recognize that there may be instances in which documentation could suffice to verify the correction of manufacturing deficiencies, the inspections of facilities in China and India that we reviewed provide examples in which such documentation may not have been sufficient. A pre-approval inspection of a bulk drug manufacturer in India found several deficiencies in the procedures used to test impurity levels in the product being manufactured. Although ORA personnel recommended withholding approval of the new drug application until corrective actions had been implemented, CDER changed the final inspection classification based on its review of the manufacturer’s written explanation of the actions it was taking to correct the deficiencies identified during the inspection.
CDER did not request a reinspection to verify that the corrective actions had been taken, even though FDA documents raised questions about the trustworthiness of the manufacturer. According to these documents, FDA had been notified several years earlier that this manufacturer had informed the U.S. Department of Commerce that it was no longer making a particular pharmaceutical product, despite evidence that the manufacturer was still shipping the product to the United States. In another case, FDA conducted a for-cause inspection of a bulk pharmaceutical manufacturer in India to investigate reports that the manufacturer was using chloroform in its manufacturing process (a substance that had been found at higher than acceptable levels in the bulk pharmaceutical chemical). While the investigators found that the manufacturer was no longer using chloroform, they identified other deficiencies in how the company was measuring the impurities present in other bulk drug products, deficiencies that an FDA chemist characterized as “incompetence bordering on fraud.” On the basis of these deficiencies, the investigators recommended that the manufacturer be considered an unacceptable source of bulk pharmaceutical chemicals. CDER disagreed with this recommendation after reviewing the manufacturer’s response to the investigators’ findings and accepted the manufacturer as a supplier of bulk pharmaceutical chemicals without verifying that it had corrected deficiencies in its impurity testing procedures.

FDA Conducts Infrequent Routine Inspections of Foreign Pharmaceutical Manufacturers

FDA’s 1988 and 1993 internal evaluations found that while FDA routinely conducted surveillance inspections of domestic pharmaceutical manufacturers, foreign manufacturers were typically inspected only when they were listed in new drug applications. The evaluations concluded that this practice, which FDA attributed to limited resources, was unreasonable and unfair to domestic manufacturers.
In addition, FDA’s 1993 evaluation concluded that in the absence of reinspections, FDA could not adequately verify that foreign manufacturers corrected deviations from GMPs that had been observed during prior FDA inspections. Both evaluations recommended that FDA increase the frequency of its inspections of foreign manufacturers that supply approved pharmaceutical products to the United States. FDA has authority to inspect foreign pharmaceutical manufacturers exporting their products to the United States under the Food, Drug, and Cosmetic Act. The purpose of the foreign inspection program is to ensure that internationally manufactured pharmaceutical products meet the same GMP standards for quality, safety, and efficacy that are required of domestic manufacturers. However, FDA is not required to inspect foreign pharmaceutical manufacturing facilities every 2 years as it is required by statute to do for domestic pharmaceutical manufacturers that must be registered with the agency. Enforcing GMP compliance through routine surveillance inspections is FDA’s most comprehensive program for monitoring the quality of marketed pharmaceutical products. FDA also uses routine surveillance inspections to verify that manufacturers have corrected all less-serious GMP deficiencies that were observed in prior FDA inspections. Each year, FDA classifies about 65 percent of its foreign pharmaceutical inspections as VAI, which means that deviations from GMPs were found but they were not serious enough to warrant FDA intervention to ensure that corrections were made. In such instances, manufacturers agree to voluntarily correct any manufacturing procedures that do not comply with U.S. GMPs. FDA’s foreign inspection program has been predominantly a pre-approval inspection program—that is, most inspections of foreign manufacturers occur only when they are listed in new drug applications, with no routine follow-up thereafter. 
We found that the majority of FDA’s foreign inspections of pharmaceutical manufacturers were conducted to ensure that before a new drug application was approved, each manufacturer listed as a supplier of a bulk pharmaceutical chemical used in the manufacture of the finished pharmaceutical product had been inspected within the previous 2 years and found to comply with GMPs. During fiscal year 1995, about 80 percent of FDA’s foreign inspections were of pharmaceutical manufacturers listed in new drug applications. The remaining 20 percent consisted of routine surveillance inspections of accepted foreign pharmaceutical manufacturers. Consequently, FDA had few opportunities to verify that foreign pharmaceutical manufacturers had implemented prescribed corrective actions in response to prior inspections where less-serious GMP deviations were observed and were producing pharmaceutical products in compliance with GMPs. FDA officials could not tell us how often accepted foreign manufacturers are inspected. FDA has inspected about 1,100 pharmaceutical manufacturers since the foreign inspection program began in 1955. For each fiscal year from 1990 through 1996, FDA conducted about 100 routine surveillance inspections of accepted foreign pharmaceutical manufacturers annually. At this rate, assuming that resources for the program remain constant, FDA will inspect each accepted foreign pharmaceutical manufacturer only once every 11 years, provided it is not listed on a new drug application. Of the 39 inspections we reviewed for pharmaceutical manufacturers in China and India from January 1, 1994, through May 15, 1996, 11 (28 percent) were routine inspections of manufacturers producing approved pharmaceutical products rather than inspections conducted as part of FDA’s review of new drug applications. On average, we found that approximately 4 to 5 years elapsed between routine inspections of manufacturers in China and India producing approved pharmaceutical products for the U.S. 
market, more than twice FDA’s 2-year inspection requirement for domestic pharmaceutical manufacturers.

FDA Plans to Conduct More Routine Inspections of Foreign Pharmaceutical Manufacturers

In June 1997, FDA’s foreign inspection working group proposed a strategy for scheduling more routine surveillance inspections of accepted foreign pharmaceutical manufacturers. Led by the Deputy Commissioner of Operations, the group was asked to review the program and identify areas for improvement. The working group found that serious deviations from GMPs were identified in 42 percent of foreign pre-approval inspections, compared with 18 percent of inspections at U.S. manufacturers. They concluded that by relying primarily on pre-approval inspections, FDA did not provide the necessary assurance that imported pharmaceutical products were manufactured in compliance with GMPs. The foreign inspection working group proposed that FDA’s foreign inspection program include more routine surveillance inspections and fewer pre-approval inspections. To accomplish this, they suggested that instead of conducting pre-approval inspections of accepted foreign manufacturers, FDA use information from routine surveillance inspections in approving new drug applications in which those manufacturers are listed. Recognizing that FDA does not have sufficient resources for frequent inspections of all foreign manufacturers of pharmaceutical products imported into the United States, the working group proposed using risk-based criteria to prioritize the foreign manufacturers that FDA inspects. FDA’s four-tier surveillance inspection strategy would vary the frequency of routine surveillance inspections depending on the public health risk associated with an accepted foreign manufacturer of an approved pharmaceutical product. Foreign pharmaceutical manufacturers whose prior inspections found serious deviations from GMPs would be placed in tier 1 and inspected annually.
Routine surveillance inspections of all other foreign pharmaceutical manufacturers would occur every 3 to 6 years. Foreign manufacturers of pharmaceutical products that pose higher public health risks, such as sterile pharmaceutical products, would be placed in tier 2 and inspected every 3 years. Foreign manufacturers producing 10 or more pharmaceutical products for the U.S. market and those producing nonsterile bulk ingredients used in sterile finished pharmaceutical products would be placed in tier 3 and inspected every 5 years. All other foreign pharmaceutical manufacturers would be placed in tier 4 and inspected every 6 years (see table 4). The working group estimated that when the strategy is fully implemented, 60 percent of FDA’s foreign inspections will be routine surveillance inspections. The remaining 40 percent will be inspections of foreign pharmaceutical manufacturers listed in new drug applications. FDA began implementing its four-tier surveillance inspection strategy in fiscal year 1997 by combining routine surveillance inspections with its pre-approval inspections. FDA reported that 151 of the 230 foreign pharmaceutical inspections conducted during fiscal year 1997 (66 percent) were classified as combined pre-approval and routine surveillance inspections. In addition, FDA planned to conduct routine surveillance inspections of about 150 accepted foreign pharmaceutical manufacturers placed in tiers 1 and 2. This group includes manufacturers that produce sterile pharmaceutical products and manufacturers whose prior inspections revealed serious deviations from GMPs. FDA reported, however, that it conducted only 60 inspections of these manufacturers. As a result, although FDA conducted more routine surveillance inspections, most foreign pharmaceutical inspections are still limited predominantly to manufacturers listed in new drug applications rather than those considered high risk.
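The four-tier criteria amount to a simple decision rule, which the following sketch makes concrete. The Manufacturer record, its field names, and the assign_tier function are our own illustrative constructs for expository purposes; they do not represent any actual FDA system or data.

```python
# Illustrative sketch of the proposed four-tier surveillance inspection
# strategy described above. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Manufacturer:
    prior_serious_gmp_deviations: bool       # serious GMP deviations in a prior inspection
    makes_sterile_products: bool             # higher public health risk products
    products_for_us_market: int              # number of products shipped to the U.S.
    makes_nonsterile_bulk_for_sterile: bool  # nonsterile bulk used in sterile finished drugs

# Routine surveillance inspection interval, in years, for each tier
INSPECTION_INTERVAL_YEARS = {1: 1, 2: 3, 3: 5, 4: 6}

def assign_tier(m: Manufacturer) -> int:
    """Apply the tier criteria in priority order, highest risk first."""
    if m.prior_serious_gmp_deviations:
        return 1  # inspected annually
    if m.makes_sterile_products:
        return 2  # inspected every 3 years
    if m.products_for_us_market >= 10 or m.makes_nonsterile_bulk_for_sterile:
        return 3  # inspected every 5 years
    return 4      # all others: inspected every 6 years
```

The sketch also shows why the criteria depend on complete inspection histories: a manufacturer with an unknown prior-inspection record cannot be reliably placed in tier 1.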
In developing its new four-tier surveillance inspection strategy, however, FDA did not include all foreign pharmaceutical manufacturers that it should consider for a routine surveillance inspection. According to FDA data, about 3,200 foreign manufacturers have submitted information to FDA listing the pharmaceutical products that they intend to export to the United States. However, FDA prioritized for routine surveillance inspections only the 1,100 foreign pharmaceutical manufacturers that it had previously inspected. Consequently, FDA’s scheduling strategy does not account for almost two-thirds of the foreign manufacturers that may be exporting pharmaceutical products to the United States. Moreover, according to the FDA official in charge of developing the surveillance inspection strategy, FDA may never inspect the majority of foreign manufacturers placed in tiers 3 and 4. Thus, while FDA has recognized that it does not have sufficient resources to routinely inspect all foreign manufacturers of pharmaceutical products imported into the United States, its strategy does not ensure that every foreign manufacturer exporting pharmaceutical products to the United States complies with U.S. quality standards.

Serious Problems Persist in Managing Foreign Inspection Data

Although both FDA’s 1988 and 1993 internal evaluations identified serious problems in its foreign inspection data systems, the agency still lacks a comprehensive, automated system for managing its foreign inspection program. Instead, the information FDA needs to identify the foreign pharmaceutical manufacturers it is responsible for inspecting, manage its foreign inspection workload, and monitor inspection results and enforcement actions is contained in 15 different computer systems, very few of which are integrated.
As a result, essential foreign inspection information is not readily accessible to the different FDA units that are responsible for planning, conducting, and reviewing inspections and taking enforcement actions against foreign manufacturers. While FDA’s working group recently proposed several actions that FDA officials hope will correct these data system problems, they have not been implemented.

Lack of Comprehensive Automated Information System Inhibits Effective Management of Foreign Inspection Data

FDA’s 1988 internal evaluation found that its automated field management information system did not contain complete information for 37 percent of the foreign inspections that FDA conducted during fiscal years 1982 through 1987. Specifically, the Program Oriented Data System (PODS) did not contain the results of 673 of the 1,813 foreign inspections that FDA investigators had conducted during this period. Moreover, the system did not contain any data for 251 of these inspections (14 percent). The evaluation attributed the missing inspection results to PODS not being updated after CDER’s review and classification of the inspection reports. The evaluation recommended that FDA revise its procedures for entering foreign inspection data in PODS. FDA’s 1993 internal evaluation found that essential data on foreign pharmaceutical manufacturers were not readily accessible to agency personnel. The evaluation indicated that comprehensive data for a foreign pharmaceutical manufacturer should include (1) its inspection history, (2) the results of its last FDA inspection, (3) the identification of responsible company personnel, (4) its U.S. agent or representative, (5) the products that it supplied to the United States, and (6) the domestic manufacturers and distributors that it supplied. The evaluation found that comprehensive foreign inspection information could be obtained only by searching multiple computerized databases and FDA headquarters’ files.
For example, the evaluation noted several instances in which ORA investigators conducting domestic inspections suspected that U.S. manufacturers had received adulterated bulk pharmaceutical chemicals from foreign manufacturers. However, the investigators’ efforts to substantiate these suspicions were hampered because they could not readily gain access to comprehensive data for foreign pharmaceutical manufacturers. The evaluation recommended that FDA use its field management information system to provide agencywide access to complete data for all foreign manufacturers shipping pharmaceutical products to the United States. In 1994, FDA began using a new information system to support the foreign inspection program. The Travel and Inspection Planning System (TRIPS) was specifically developed to assist FDA’s foreign inspection planning staff in managing foreign inspection assignments and the program’s budget. TRIPS is also used to monitor whether the inspection report has been completed as well as the results of the inspection. However, TRIPS is accessible only to ORA headquarters staff. As a result, foreign inspection data are not readily accessible to the different FDA units responsible for conducting foreign inspections and reviewing inspection results. FDA plans to make data from TRIPS more broadly available within the agency when it upgrades its field management information system in fiscal year 1998. TRIPS and PODS have not significantly improved the quality of FDA’s foreign inspection data. Our analysis of data recorded in TRIPS and PODS disclosed that these systems did not contain the results of 111 of the 759 inspections (15 percent) FDA conducted of foreign pharmaceutical manufacturers between January 1, 1994, and May 15, 1996. For 68 of the 111 inspections, the database did not identify the foreign manufacturer that was inspected.
TRIPS and PODS also did not include the correct inspection results for 10 of the 39 pharmaceutical manufacturers FDA inspected in China and India during this period. Specifically, the inspection results were missing for two of these manufacturers and were incorrect for eight others. The database errors in recording the results of inspections conducted in China and India occurred because the systems were not updated after CDER staff reviewed and classified the inspection reports. Without complete and accurate data, FDA cannot ensure that all “high-risk” foreign pharmaceutical manufacturers are targeted for more frequent routine surveillance inspections. We also found that essential foreign inspection data are not readily accessible to the different FDA units responsible for planning and conducting domestic and foreign inspections and for conducting import operations. The information that FDA needs for identifying foreign pharmaceutical manufacturers, verifying their compliance with federal laws and regulations, and screening foreign-produced pharmaceutical products for importation is dispersed among 15 automated databases, most of which do not interface. FDA’s multiple and unlinked databases inhibit the effective management of the foreign inspection program by impeding the flow of foreign inspection data to agency personnel for use in screening foreign pharmaceutical products offered for entry into the United States. For example, table 5 illustrates how the lack of linkage among 8 of FDA’s 15 databases impedes the flow of essential foreign inspection data. The first four databases described in the table are used by FDA’s district offices to support import operations. The four other databases described in the table are used by FDA headquarters staff for monitoring foreign pharmaceutical manufacturers’ compliance with federal statutes and regulations.
However, because these systems do not interface, comprehensive data about foreign manufacturers are not readily available to FDA district personnel screening imported pharmaceutical products. Consequently, much of the same data must be retrieved from one automated system to be manually entered into others. Moreover, staff must search multiple data systems to obtain a comprehensive profile of a foreign pharmaceutical manufacturer. FDA also cannot easily match foreign manufacturers that have listed with the agency with their compliance status and the pharmaceutical products that are imported into the United States. FDA’s foreign inspection working group concluded in June 1997 that the agency continues to be plagued by having too many databases that do not automatically interface. FDA is relying on a new automated field management information system to provide agencywide accessibility to comprehensive foreign inspection data. The Field Accomplishments and Compliance Tracking System (FACTS) is expected to replace approximately 22 computerized databases and support automated interfaces with several existing databases. The first installment of FACTS, which is to include an inventory of foreign and domestic pharmaceutical manufacturers, is scheduled to go on line during fiscal year 1998. FDA also plans to develop additional FACTS components to assist the agency in managing its foreign inspection workload and compliance activities. These components will be included in the second installment of FACTS, which is scheduled for fiscal year 1999.

Incomplete List of Foreign Manufacturers Shipping Drugs to the United States Hinders Inspection Planning

FDA’s 1988 internal evaluation found that the agency did not maintain an inventory of all foreign pharmaceutical manufacturers that were subject to FDA regulation.
At that time, the only computerized file of foreign manufacturers shipping pharmaceutical products to the United States was maintained on a personal computer that could be accessed only from within one FDA unit. The file listed the foreign pharmaceutical manufacturers that FDA had inspected and the results of the last inspection. The internal evaluation concluded that this file was inadequate because it did not contain an inspection history for each foreign pharmaceutical manufacturer that had advised FDA that it intended to ship pharmaceutical products to the United States. As a result, FDA could not ensure that it was aware of, and therefore inspecting, all foreign pharmaceutical manufacturers that were under its jurisdiction. FDA’s 1988 evaluation recommended that the agency develop a comprehensive inventory of all foreign manufacturers shipping pharmaceutical products to the United States that could be used to improve long-range inspection planning and scheduling. To use resources better and increase knowledge agencywide, the evaluation also recommended that this inventory be available on FDA’s automated field information system. FDA’s 1993 internal evaluation found the same problem. According to the evaluation, the lack of an inventory of the foreign manufacturers that were shipping pharmaceutical products to the United States made it virtually impossible for FDA to inspect foreign manufacturers as frequently as domestic pharmaceutical manufacturers. The evaluation detailed several instances in which a database with a comprehensive history of each establishment’s previous inspections would have assisted in identifying problems in foreign pharmaceutical manufacturers. FDA’s 1993 evaluation recommended that the agency use its automated field information system to develop an accurate and comprehensive inventory of all foreign manufacturers shipping pharmaceutical products to the United States. 
It remains difficult for FDA to determine the number of foreign manufacturers shipping pharmaceutical products to the United States that should be considered for periodic inspections. Recently, FDA officials told us that the agency had to search four data systems just to determine the number of foreign manufacturers that should be considered for routine postapproval surveillance inspections. They found that the systems did not include a common data element to permit them to easily identify a foreign manufacturer from system to system. Because the names and addresses of foreign manufacturers are sometimes incomplete or inaccurate, FDA officials found that matching data among the systems was an arduous, manual, and inconclusive effort. The June 1997 report by FDA’s foreign inspection working group acknowledged that the agency still lacked a complete list of foreign manufacturers that were shipping pharmaceutical products to the United States. According to the report, about 3,200 foreign pharmaceutical firms were listed with FDA as indicating their intent to ship products to the United States. However, FDA internal databases indicated that only about 1,100 pharmaceutical firms had been inspected by the agency. FDA officials could not explain why the remaining 2,100 firms had not been inspected. The foreign inspection working group proposed two options for developing an official inventory of all foreign manufacturers that ship pharmaceutical products to the United States. One option would be for FDA to seek authority to require foreign pharmaceutical manufacturers to register and update their registration information annually. The other would use data from existing information systems to develop an official establishment inventory of foreign pharmaceutical manufacturers.
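The matching difficulty FDA officials described, inconsistent names and addresses with no shared identifier across systems, is a classic record-linkage problem. The sketch below, a minimal illustration using invented records rather than FDA data, shows why exact string comparison fails and how even crude name normalization can recover a match.

```python
# Minimal record-linkage illustration: the same firm recorded differently
# in two unlinked systems. Firm names here are invented examples.
import re

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation, drop common suffixes."""
    name = name.lower()
    name = re.sub(r"[^\w\s]", " ", name)                        # strip punctuation
    name = re.sub(r"\b(ltd|limited|inc|co|pvt)\b", " ", name)   # drop legal suffixes
    return " ".join(name.split())                               # collapse whitespace

# The same hypothetical firm as it might appear in two systems
registration_listing = "Acme Pharma Pvt. Ltd."
inspection_record = "ACME PHARMA LIMITED"

# Exact comparison fails to link the records...
naive_match = registration_listing == inspection_record
# ...while comparison of normalized forms links them
normalized_match = normalize(registration_listing) == normalize(inspection_record)
```

Even with normalization, real-world matching remains "arduous and inconclusive" when addresses are incomplete, which is why the working group's options both center on establishing a single authoritative inventory rather than after-the-fact matching.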
FDA is reconciling data from several of its databases to more accurately estimate the number of manufacturers it should consider for inspection under its four-tier inspection strategy; this effort should identify all foreign manufacturers that are shipping pharmaceutical products to the United States. When the effort is completed in April 1998, FDA should have a comprehensive inventory of all foreign manufacturers shipping pharmaceutical products to the United States. This information could then be used to improve FDA’s planning and scheduling of foreign pharmaceutical inspections.

Conclusions

Since 1955, FDA has inspected foreign pharmaceutical manufacturing facilities to ensure that drug products exported to the United States meet the same standards of safety, purity, and quality required of domestic manufacturers. However, two internal FDA evaluations in the past 10 years identified serious problems with the foreign inspection program that raised questions about FDA’s ability to ensure that American consumers are protected from contaminated or adulterated drug products. FDA has taken some action to address these problems. However, we found indications that certain aspects of the foreign inspection program still need improvement. FDA continues to experience problems in ensuring that inspection reports are submitted in a timely manner and that necessary enforcement actions are promptly initiated to prevent contaminated and adulterated pharmaceutical products from entering the United States. In addition, when FDA headquarters downgrades the severity of the inspection classifications recommended by field investigators, FDA is not verifying corrective actions that foreign manufacturers have promised to take to resolve serious manufacturing deficiencies. This impairs FDA’s ability to ensure that American consumers are protected from potentially serious health risks posed by adulterated drug products.
FDA’s risk-based inspection strategy recognizes that the agency does not have sufficient resources to routinely inspect all foreign manufacturers of pharmaceutical products imported into the United States. However, even though the strategy is intended to direct inspection resources according to risk, FDA’s foreign inspection program continues to be driven by new drug applications, and the agency acknowledges that it may never inspect most foreign manufacturers exporting pharmaceutical products to the United States.

Recommendations to the Commissioner of the Food and Drug Administration

To improve the effectiveness of FDA’s foreign inspection program and to ensure that only safe, pure, and high-quality drugs are imported into the United States, we recommend that the Commissioner of FDA (1) ensure that serious manufacturing deficiencies are promptly identified and enforcement actions are initiated, by requiring investigators to prepare inspection reports and CDER to issue warning letters within established time periods, and (2) reexamine and revise FDA’s foreign inspection strategy to provide adequate assurance that all foreign manufacturers exporting approved pharmaceutical products to the United States comply with U.S. standards. At a minimum, the strategy should include (1) timely follow-up inspections of all foreign manufacturers that have been identified as having serious manufacturing deficiencies and that promised to take corrective action and (2) periodic surveillance inspections of all foreign pharmaceutical manufacturers, not just high-risk manufacturers.

Agency Comments and Our Response

In commenting on a draft of this report, FDA took issue with a number of our findings and recommendations. As discussed earlier, FDA believes it has made substantial improvement in the timeliness of inspection reports and enforcement actions. While we recognize FDA’s progress, we note that the agency is still falling short of its standards for timeliness.
As a result, we believe that FDA needs to monitor its investigators and CDER to ensure that they comply with established time periods in preparing inspection reports and issuing warning letters. FDA was critical of our draft on several counts. FDA said we had accepted the recommendations in the 1993 discussion paper without verifying their validity or feasibility. FDA claimed that the findings and recommendations in the 1993 discussion paper were flawed in significant ways that limited its usefulness to the agency. We note, however, that subsequent to the discussion paper, in a 1995 memorandum to the agency’s Assistant Inspector General, FDA officials reported that they had thoroughly reviewed the discussion paper, investigated the issues raised, verified program weaknesses, and had either begun or agreed to implement 10 of the 13 recommendations contained in the discussion paper. FDA also took issue with how our report described the processes followed by its district and headquarters offices for classifying domestic and foreign inspection reports. Specifically, FDA stated that the review performed by the supervisor or team leader in the district office is not considered to be a district endorsement of the investigator’s recommendation. However, our review of FDA documents that describe the process for classifying domestic and foreign inspection reports supports our characterization. FDA issued guidance to its district offices in September 1996 indicating that beginning in fiscal year 1997, before inspection reports are forwarded to CDER, they “will be reviewed and endorsed by district management consistent with local procedures and timeframes for domestic reports.” Also, in its memorandum to the Assistant Inspector General, FDA officials reported that district offices had begun endorsing foreign drug inspection reports before the 1993 discussion paper was issued.
FDA did not concur with our recommendation for conducting more frequent inspections of all foreign manufacturers that have been identified as having serious manufacturing deficiencies and have promised to take corrective action. FDA incorrectly suggests that our recommendation was based on the premise that a final classification that is lower than the recommended classification is always wrong if it results in a less-serious classification. Rather, our report questions FDA’s ability to verify the adequacy of some corrective actions that foreign manufacturers promised to take to resolve serious manufacturing deficiencies without reinspecting them. FDA also did not concur with our recommendation regarding the implementation of its routine surveillance inspection strategy. Given further clarification of the strategy, we have modified our recommendation. FDA’s written comments on a draft of this report are reproduced in appendix I. FDA also provided technical comments, which we considered and incorporated where appropriate. As we arranged with your office, unless you publicly announce the report’s contents earlier, we plan no further distribution until 30 days after its issue date. We will then send copies of this report to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, the Director of the Office of Management and Budget, and others who are interested. We will also make the report available to others upon request. Please contact me on (202) 512-7119 or John Hansen, Assistant Director, on (202) 512-7105, if you or your staff have any questions. Others who contributed to this report are Gloria E. Taylor, Brenda R. James, and David Bieritz.

Comments From the Food and Drug Administration
Pursuant to a congressional request, GAO reviewed the Food and Drug Administration's (FDA) efforts to correct problems identified in earlier evaluations of its foreign drug inspection program, focusing on FDA's efforts to: (1) prepare inspection reports and take enforcement actions against foreign pharmaceutical manufacturers in a timely manner; (2) improve the consistency with which FDA evaluates the results of foreign inspections and conducts reinspections to verify that foreign pharmaceutical manufacturers have corrected serious deficiencies; (3) conduct routine inspections of foreign pharmaceutical manufacturers to monitor their compliance with U.S. quality standards; and (4) improve the management of data needed for planning inspections, monitoring inspection results, and taking enforcement actions. GAO noted that: (1) FDA has taken several actions to address problems with its foreign inspection program that were identified in two previous internal evaluations; (2) although FDA has improved the timeliness with which investigators submit inspection reports, in fiscal year (FY) 1996, almost 60 percent were still submitted later than called for by agency standards, including half the reports that identified the most serious deficiencies in manufacturing quality; (3) during FY 1996 and FY 1997, headquarters review personnel continued to downgrade the classifications of inspections recommended by its field investigators who conducted the inspections; (4) most of the decisions to downgrade the classifications were based on foreign manufacturers' promises to implement corrective actions; (5) as a result, FDA conducted fewer reinspections of these facilities to verify that foreign manufacturers had corrected serious manufacturing deficiencies; (6) FDA conducts infrequent routine inspections of foreign manufacturers to ensure that they continue to comply with U.S. 
quality standards, although routine surveillance inspections constitute FDA's most comprehensive program for monitoring the quality of marketed pharmaceutical products; (7) most inspections of foreign pharmaceutical manufacturers are performed to approve the marketing of new products; (8) routine surveillance inspections of manufacturers producing approved pharmaceutical products already marketed in the United States accounted for only 20 percent of FDA's foreign inspections during FY 1995; (9) as a result, routine inspections of foreign pharmaceutical manufacturers occur with far less frequency than the 2-year interval required for domestic manufacturers; (10) FDA has been striving to improve its management of data needed for planning inspections, monitoring inspection results, and taking enforcement actions; (11) at present, FDA relies on 15 separate systems to identify foreign pharmaceutical manufacturers, plan foreign inspection travel, track inspection results, and monitor enforcement actions; (12) as a result, essential foreign inspection data are not readily accessible to the different FDA units that are responsible for planning, conducting, and reviewing inspections and taking enforcement actions against foreign manufacturers; and (13) FDA is developing a comprehensive, agencywide automated system to provide better data for managing its foreign inspection program.
Background The NIPP provides the framework for developing, implementing, and maintaining a coordinated national effort to bring together government at all levels, the private sector, nongovernmental organizations, and international partners to manage the risks to CIKR. In addition to the Homeland Security Act, various statutes provide legal authority for both cross-sector and sector-specific protection and resiliency programs. For example, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 was intended to improve the ability of the United States to prevent, prepare for, and respond to acts of bioterrorism and other public health emergencies, and the Pandemic and All-Hazards Preparedness Act addresses public health security and all-hazards preparedness and response. Also, the Cyber Security Research and Development Act of 2002 authorized funding for the National Institute of Standards and Technology (NIST) and the National Science Foundation to facilitate increased research and development for computer and network security and to support research fellowships and training. CIKR protection issues are also covered under various presidential directives, including HSPD-5 and HSPD-8. HSPD-5 calls for coordination among all levels of government as well as between the government and the private sector for domestic incident management, and HSPD-8 establishes policies to strengthen national preparedness to prevent, detect, respond to, and recover from threatened domestic terrorist attacks and other emergencies. These separate authorities and directives are tied together as part of the national approach for CIKR protection through the unifying framework established in HSPD-7. The NIPP outlines the roles and responsibilities of DHS and other security partners—including other federal agencies, state, territorial, local, and tribal governments, and private companies. 
Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection across the 18 CIKR sectors. The NIPP is prepared by the NIPP Program Management Office (PMO) within the Infrastructure Protection office of the National Preparedness and Protection Directorate of DHS. The NIPP PMO has the responsibility for coordinating and ensuring development, implementation, and maintenance of the NIPP and the associated sector-specific plans. HSPD-7 and the NIPP assign responsibility for CIKR sectors to SSAs. As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 11 CIKR sectors. The remaining sectors are coordinated by eight other federal agencies. The NIPP depends on supporting SSPs for full implementation of this framework within and across CIKR sectors. SSPs are developed by the SSAs designated in HSPD-7 in close collaboration with sector partners, including sector and government coordinating councils. These SSPs contain the plan to identify and address the risks to CIKR specific to each sector and are reviewed by DHS for adherence to DHS guidance, which follows the format of the NIPP. Table 1 lists the SSAs and their sectors. The concept of resilience has gained particular importance and application in a number of areas of federal CIKR planning. Both Congress and executive branch agencies have addressed resilience in relation to the importance of the recovery of the nation’s critical infrastructure from damage. 
In February 2006, the Task Force of the Homeland Security Advisory Council defined resiliency as “the capability of a system to maintain its functions and structure in the face of internal and external change and to degrade gracefully when it must.” Later in 2006, the Department of Homeland Security’s National Infrastructure Protection Plan defined resilience as “the capability of an asset, system, or network to maintain its function during or to recover from a terrorist attack or other incident.” In May 2007 the President issued Homeland Security Presidential Directive 20—National Continuity Policy. This directive establishes a comprehensive national policy on the continuity of federal government structures and operations and a single National Continuity Coordinator responsible for coordinating the development and implementation of federal continuity policies. It also directs executive departments and agencies to integrate continuity requirements into operations, and provides guidance for state, local, territorial, and tribal governments, and private sector organizations in order to ensure a comprehensive and integrated national continuity program that will enhance the credibility of our national security posture and enable a more rapid and effective response to and recovery from a national emergency. As part of Homeland Security Presidential Directive 20, the Secretary of Homeland Security is directed to, among other things, coordinate the implementation, execution, and assessment of continuity operations and activities; develop, lead, and conduct a federal continuity training and exercise program, which shall be incorporated into the National Exercise Program; and develop and promulgate continuity planning guidance to state, local, territorial, and tribal governments, and private sector critical infrastructure owners and operators. For additional discussion of the concept of resiliency, see appendix 1. 
DHS Has Incorporated Changes into the 2009 NIPP that Reflect Stakeholder Input and Sectors’ Experience Protecting Critical Infrastructure and an Increased Emphasis on Risk Management DHS incorporated changes in the 2009 NIPP—including a greater emphasis on CIKR regional planning and updates to DHS’s overall risk management framework—that NIPP PMO officials said are based on stakeholder input and sectors’ experiences performing critical infrastructure protection. Based on DHS guidance, SSAs are expected to address many of the changes to the NIPP in their SSPs, based on consultation with sector partners. Table 2 provides an overview of key changes to the NIPP. DHS Changes to the 2009 NIPP Include Increased Emphasis on Regional Planning and Risk Management DHS changes to the 2009 NIPP include increased emphasis on regional planning and risk management and, according to PMO officials, these changes are based on stakeholder input and sectors’ experiences performing critical infrastructure protection. The changes we identified in the 2009 NIPP were generally foreshadowed in the 2007/2008 NIPP Update provided to SSAs in 2008. While most of the changes in the 2009 NIPP were minor and related to changes in programs or activities that have occurred since the publication of the 2006 NIPP, several could have an impact on the sector planning process and the development of SSPs. These included changes that placed a greater emphasis on regional planning, coordination and information-sharing across sectors; changes in how critical infrastructures are identified and prioritized; developments in risk management to include how threats, vulnerabilities, and consequences are assessed; and a greater emphasis on cyber security and international interdependencies. In contrast to the 2006 NIPP, DHS increased its emphasis on regional planning, coordination and information-sharing in the 2009 NIPP. 
DHS discussed the need for regional coordination in the 2006 NIPP and encouraged stakeholders to address CIKR protection across sectors within and across geographic regions. In the 2006 NIPP, regional bodies were to be formed on an “as needed” basis. By contrast, the 2009 NIPP called for regional coordination through the formation of a consortium of representatives from multiple regional organizations. The 2009 NIPP states that this was done to help enhance the engagement of regionally based partners and to leverage the CIKR protection activities and resiliency strategies that they lead. In comparison to the 2006 NIPP, DHS included a discussion of changes in how critical infrastructures are identified and prioritized in the 2009 NIPP. Both the 2006 NIPP and the 2009 NIPP stated that CIKR inventory lists were developed from multiple sources, including sector inventories maintained by SSAs and government coordinating councils, voluntary submissions from CIKR partners in the public or private sector, and the results of studies conducted by various trade associations, advocacy groups, and regulatory agencies. While the 2006 NIPP briefly discusses its efforts to determine which assets are nationally critical, the 2009 NIPP includes a more detailed discussion of the national CIKR prioritization program that places CIKR into categories according to their importance, nationally or regionally. Specifically, DHS prioritized assets using a tiered approach. Tier 1 or Tier 2 assets are those that if destroyed or disrupted could cause significant casualties, major economic losses, or widespread and long-term disruptions to national well-being and governance capacity. According to DHS, the overwhelming majority of the assets and systems identified through this effort are classified as Tier 2. 
Only a small subset of assets meets the Tier 1 consequence threshold—those whose loss or damage could result in major national or regional impacts similar to the impacts of Hurricane Katrina or the September 11, 2001, attacks. DHS also provided a detailed discussion of risk management methodologies in the 2009 NIPP, as compared to the 2006 NIPP. Whereas the 2006 NIPP listed the baseline criteria—minimum requirements—for conducting risk analyses to ensure they are credible and comparable, the 2009 NIPP includes the use of a common risk assessment approach, including the core criteria—updated requirements—for threat, vulnerability, and consequence analyses designed to allow the comparison of risk across sectors. For example, regarding consequence assessments, the 2006 NIPP discusses the use of consequence screening to help CIKR owners and operators determine whether it is necessary to provide additional information to DHS or the SSA. Consequence screening is an approach that allows CIKR owners and operators to identify their projected level of consequence based on the nature of their business, proximity to significant populations or other CIKR, relative importance to the national economy or military capability, and other similar factors. In contrast, the 2009 NIPP includes a discussion of consequence uncertainty where a range of outcomes is possible. The 2009 NIPP states that where the range of outcomes is large, greater detail may be required to calculate consequence and inform decisionmaking. As part of this risk management discussion, DHS has also made changes regarding how sectors are to measure the performance of their CIKR programs. While both the 2006 and 2009 NIPPs discuss descriptive and process or output data, the 2009 NIPP included additional discussion regarding the development of metrics that assess how well programs reduce the risk to the sector. The 2009 NIPP also discusses changes made to the approach for conducting these assessments. 
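The core criteria described above treat risk as a function of threat, vulnerability, and consequence, and the national prioritization program sorts assets into tiers by consequence. The sketch below illustrates those two ideas in outline only: the multiplicative risk convention, the 0.9 consequence threshold, and the asset names are illustrative assumptions, not DHS’s actual (more detailed, and partly restricted) methodology.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A CIKR asset with risk factors normalized to the 0-1 range."""
    name: str
    threat: float         # relative likelihood of an attack or hazard
    vulnerability: float  # relative likelihood an incident succeeds
    consequence: float    # relative severity of loss or disruption

def risk_score(a: Asset) -> float:
    """Combine the three core criteria. A multiplicative convention is one
    common illustration, not the NIPP's actual formula."""
    return a.threat * a.vulnerability * a.consequence

def tier(a: Asset, tier1_threshold: float = 0.9) -> str:
    """Hypothetical tiering rule: only assets whose consequence reaches a
    national-impact threshold are Tier 1; all others are Tier 2."""
    return "Tier 1" if a.consequence >= tier1_threshold else "Tier 2"

assets = [
    Asset("regional water system", threat=0.4, vulnerability=0.5, consequence=0.95),
    Asset("local substation", threat=0.3, vulnerability=0.6, consequence=0.20),
]
for a in assets:
    print(f"{a.name}: {tier(a)}, risk={risk_score(a):.3f}")
```

A consequence-driven assignment like this is consistent with the NIPP’s observation that only a small subset of assets reaches the Tier 1 threshold, while risk scores remain comparable across assets because all three criteria are assessed on a common scale.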
For example, whereas the 2006 NIPP focused on facility vulnerability assessments, the 2009 NIPP discusses broader assessment efforts, including DHS’s efforts to conduct a systemwide vulnerability assessment. To illustrate a systemwide vulnerability assessment, DHS used the example of the California Water System Comprehensive Review, a DHS-led assessment effort to identify critical water system assets, analyze and track the gaps in protection, and identify potential enhancements. In addition, DHS included a greater emphasis on cyber security in the 2009 NIPP than it did in the 2006 NIPP. The 2006 NIPP identified cross-sector cyber security as an area worthy of special consideration by the sectors. In comparison, the 2009 NIPP lists the progress made and new initiatives related to cyber security, including the development of cross-sector cyber methodologies to identify systems or networks of national significance; the addition of a cross-sector cyber security working group and a public-private cross-sector program specifically for cyber security. The 2009 NIPP also lists new responsibilities for CIKR partners to conduct cyber security exercises to test the security of these systems as well as the development of cyber security-specific vulnerability assessments by DHS. Furthermore, in contrast to the 2006 NIPP, DHS also expanded its emphasis on international coordination and identified it as an area warranting special consideration in the 2009 NIPP. Whereas the 2006 NIPP highlighted the importance of international coordination, the 2009 NIPP instructs the SSAs to identify foreign critical infrastructure—whether American owned or foreign owned—of national importance, and lists the procedures for doing so. 
The 2009 NIPP discusses the importance of identifying and prioritizing infrastructure located outside the United States that if disrupted or destroyed would have a negative impact on the United States, lists additional SSA responsibilities for international coordination, and lists various international organizations that are assisting in the implementation of international CIKR agreements. The 2009 NIPP also highlights DHS’s role in a 15-nation effort specific to cyber security. NIPP PMO officials said that the changes highlighted in the 2009 NIPP were the result of knowledge gained and issues raised during regularly scheduled—bimonthly or quarterly—or specially called meetings with security partners such as the Federal Senior Leadership Council, the CIKR Cross-Sector Council, and other contacts with security partners such as SSAs, sector coordinating councils, and government coordinating councils. DHS said concerns on CIKR issues were also elevated to DHS based on their inclusion in Sector CIKR Protection Annual Reports, as well as from outside organizations like GAO, which NIPP PMO officials credited for the increased attention to cyber security. NIPP PMO officials said DHS began an effort to revise the NIPP in the spring of 2008 and, as part of this process, held discussions with infrastructure protection components and senior leadership. Figure 1 shows the process DHS used to update the NIPP for publication in 2009. NIPP PMO officials said that because they view NIPP updates as an ongoing process, they will continue to reassess the NIPP and make changes based on knowledge gained from the various partners and stakeholders, as needed. For example, between the release of the 2006 and 2009 NIPPs, DHS issued the 2007/2008 NIPP Update. 
The 2007/2008 NIPP Update contained references to changes that ultimately appeared in the 2009 NIPP, including the introduction of the system used to gather and distribute information on critical infrastructure assets, the process used to develop metrics to measure performance and progress in critical infrastructure protection, and the emphasis on regional coordination in the partnership model. The 2007/2008 NIPP Update also included discussion of a training needs assessment DHS conducted which was followed by the creation of the CIKR competency areas that define CIKR training requirements in the 2009 NIPP. DHS Guidance Calls for SSAs to Develop Plans and Reports That Consider Specific Issues in the 2009 NIPP Following the publication of the 2009 NIPP, DHS issued guidance to the SSAs designed to make them aware of the changes to the NIPP and to discuss the issues DHS believed SSAs should consider for increased attention when developing their SSPs and SARs. The guidance provided section-by-section instructions that discussed how SSAs were to update their plans and annual reports to be consistent with the NIPP. For example, the 2010 SSP guidance stated that the NIPP had increased emphasis on DHS’s all-hazards approach to CIKR protection planning and suggested that SSPs should place increased emphasis on their approach to addressing all-hazards events when updating their plans. The guidance also noted that SSAs should give additional attention to topics such as cyber security and international interdependencies. Regarding cyber security, the guidance calls for SSAs to include goals or long-term objectives for cyber security in their sector and explain their approach for identifying their sector’s cyber assets, systems, networks, and functions; incorporating cyber elements into sector risk assessments; and prioritizing cyber elements—such as communication and computer networks—of the sector, among other things. 
Fourteen of 18 SSA representatives generally described the process they plan to use to incorporate these changes, which for the most part mirrored DHS’s process for revising the NIPP. According to the SSA representatives, after reviewing the guidance provided by DHS, the SSAs plan to employ internal teams or offices to draft the SSP following DHS’s format; the SSAs intend to provide the draft to key stakeholders, such as the sector’s government coordinating council and sector coordinating council, who are to provide feedback and comments on the draft via e-mail, individual and conference calls, and in-person meetings; and the SSAs plan to make revisions and distribute the draft to stakeholders for final review before submission to DHS. See figure 2 for a description of this process. Four of 18 SSA representatives who responded to our inquiries specifically described how changes to the 2009 NIPP either had already been addressed in their 2007 SSPs or would be addressed in the 2010 SSPs. Regarding changes to risk assessment methodologies, for example, the SSA representative for the Water sector stated that three risk assessment tools are available to the Water sector. Furthermore, according to this representative, DHS and its partners are working collaboratively to ensure these existing assessment methodologies are upgraded and revised by using consistent vulnerability, consequence, and threat information, resulting in analysis of risk that is comparable within the sector. The SSA representative said revisions to these tools are to also address the features and elements of risk assessments as identified in the NIPP. The Energy SSA representative also described sector efforts in these areas, including in regional coordination, training and education, international coordination, and cyber security. 
Those sector officials who did not offer specifics on how they expect to address changes suggested by DHS either provided a general statement of their efforts or said this was because the SSPs were being drafted during our review. Four of the 18 SSA representatives said they have contacted or plan to contact DHS about sector concerns regarding the NIPP format or questions about the instructions provided. For example, the SSA representative for the Healthcare and Public Health sector told us that his agency planned to contact DHS to discuss a change that is designed to make the SSP risk assessment methodology consistent with the NIPP, but could be impractical for the SSA to implement. The Healthcare sector representative said a single risk assessment methodology would not be feasible for the Healthcare and Public Health sector because it is composed of different kinds of partners, such as emergency medical personnel, doctors, and hospitals, and is made up of systems—transportation, communication, personnel—as opposed to other sectors, which he said may be made up predominantly of facilities. The SSA representative said this makes the use of a single risk assessment methodology difficult for the Healthcare sector. All 14 SSA representatives who responded with a description of the SSP update process said they are taking extra actions to ensure other stakeholder views are considered. For example, the Commercial Facilities SSA said it plans to post a copy of its draft on the Homeland Security Information Network to ensure that sector interests are broadly represented in the review of the document. Another example came from the Dams SSA official who said that his office provided its draft to a dozen organizations and trade associations outside its sector coordinating council and government coordinating council, including the American Society of Civil Engineers, the National Dam Safety Review Board, and The Infrastructure Security Partnership, for review and comment. 
SSAs also offered other comments on their efforts to address changes to the NIPP—including changes to regional planning and risk management—in their SSPs. Five of the 18 officials representing different SSAs said that incorporation of these topics would not be difficult. For example, officials representing the Chemical, Dams, and Emergency Services sector SSAs said they did not foresee difficulty incorporating the key focus areas from the 2009 NIPP into their 2010 SSP rewrite. Representatives of the Commercial Facilities and Critical Manufacturing sectors said that they found the DHS guidance useful to their SSPs. The Water sector representative discussed programs or activities that were ongoing or planned that addressed each of the topics. For example, the Water sector SSA representative said the Environmental Protection Agency, which is the Water sector SSA, is working with DHS and other security partners to ensure risk assessment methodologies are upgraded and refined to be consistent with the NIPP to produce an analysis of risk that is comparable within the sector. Officials representing the Banking and Finance and Defense Industrial Base SSAs said it was premature to discuss how the changes related to risk management, regional coordination, performance measurement, and cyber security and international interdependencies would affect their agencies’ efforts as the revision process was ongoing. The SSA official representing both the Postal and Shipping sector and the Transportation sector said that each change in the 2009 NIPP would be addressed according to its unique characteristics for the sectors. 
DHS Increased Its Emphasis on Resiliency in the 2009 NIPP and Directed SSAs to Address Resiliency in Their Sector Plans Although DHS revised the NIPP to increase the use of the term resilience and to highlight it as an important concept paired with protection, the 2009 NIPP uses much of the same language as the 2006 NIPP to describe resiliency concepts and strategies. According to NIPP PMO officials, the 2009 NIPP has been updated to recognize the importance of resiliency and provide SSAs the requested flexibility to incorporate resiliency within the context of their sectors. DHS Increased Its Emphasis on Resiliency in the 2009 NIPP, but Used Much of the Same Language as in the 2006 NIPP DHS increased its emphasis on resiliency in the 2009 NIPP by using the term more frequently and generally treating it as a concept formally paired with protection. Specifically, the 2006 NIPP used resiliency or resiliency-related terms 93 times while the 2009 NIPP used resiliency-related terms 183 times, about twice as often. More importantly, whereas the 2006 NIPP primarily treated resiliency as a subset of protection, the 2009 NIPP generally referred to resiliency alongside protection. Both the 2006 and 2009 NIPPs include building resilience in the definition of protection, but the 2009 NIPP increased the profile of resilience by treating it as separate but related to CIKR protection. For example, whereas the Managing Risk chapter of the 2006 NIPP has a section entitled “Characteristics of Effective Protection Programs,” the same chapter in the 2009 NIPP has a section entitled “Characteristics of Effective Protection Programs and Resiliency Strategies.” In addition, in contrast to the 2006 NIPP, the 2009 NIPP referred to resiliency alongside protection in the introductory section of the document. 
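The term-frequency comparison above can be reproduced with a simple count over each document’s text. GAO does not specify exactly which word variants it counted, so the regular expression below—matching resilience, resiliency, and resilient—is an assumption.

```python
import re

# Resiliency-related terms: resilience, resiliency, resilient (case-insensitive).
RESILIENCE_RE = re.compile(r"\bresilien(?:ce|cy|t)\b", re.IGNORECASE)

def count_resiliency_terms(text: str) -> int:
    """Count occurrences of resiliency-related terms in a document's text."""
    return len(RESILIENCE_RE.findall(text))

sample = ("Protection and resiliency within and across CIKR sectors; "
          "a resilient sector sustains its resilience under stress.")
print(count_resiliency_terms(sample))
```

Run over the full extracted text of the 2006 and 2009 NIPPs, a count like this would be expected to yield figures on the order of the 93 and 183 occurrences reported above, with the exact totals depending on which term variants are included.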
Whereas the introduction to the 2006 NIPP states that it “…provides the mechanisms for…enhancing information-sharing mechanisms and protective measures within and across CI/KR sectors…,” the introduction to the 2009 NIPP states that it “…provides the mechanisms for…enhancing information-sharing mechanisms and protection and resiliency within and across CIKR sectors.” Also, in comparison to the 2006 NIPP, the 2009 version of the NIPP discusses resiliency more often in the “Authorities, Roles and Responsibilities” chapter of the document. These differences include a discussion on the expanded roles and responsibilities of key partners, such as SSAs and state and local governments, in CIKR planning with regard to resiliency. In this section of the 2006 NIPP, resiliency was discussed almost exclusively with regard to private sector owners and operators. NIPP PMO officials told us they wanted to recognize resilience as an approach to risk management, but some security partners did not see how they could or should influence the resilience efforts of the private sector. These PMO officials said with the release of the 2009 NIPP, they made a more concerted effort to help security partners understand how they can promote both protection and resilience. NIPP PMO officials told us that changes related to resiliency in the 2009 NIPP were not intended to represent a major shift in policy; rather they were intended to increase attention to and raise awareness about resiliency as it applies within individual sectors. These officials told us that the concept of resiliency was always included in the NIPP. The 2006 NIPP addressed resilience and even talked about it being one way to enhance protection. However, NIPP PMO officials said that many partners interpret or use protection as synonymous with physical protection. To ensure that all NIPP partners properly understand the intent of the NIPP, the NIPP PMO has more explicitly addressed the concept of resiliency in the 2009 NIPP. 
These officials said that this more explicit emphasis on resilience in the 2009 NIPP is expected to encourage more system-based sector and cross-sector activities that address a broader spectrum of risks. This would include, for example, increased attention to cyber security—which can transcend different sectors—and discussion of the importance of systems and networks within and among sectors as a means of fostering resilience. NIPP PMO officials also told us that the 2006 edition of the NIPP was developed based on the requirements of HSPD-7, which did not include an explicit emphasis on resiliency. They said that the 2009 NIPP was developed taking into account concerns raised by stakeholders that the 2006 NIPP emphasized asset protection rather than resiliency. They explained that, shortly after the 2006 NIPP was released, as the NIPP risk management framework and the sector partnerships matured, some stakeholders believed that the concept of continuity and resilience, in and of itself, was not articulated and addressed as clearly as needed for their purposes. In addition, according to these officials, changes in the 2009 NIPP were drawn from many sources, including members of Congress and academic and policy groups, who also expressed increasing interest in the concept of resiliency as a critical part of national preparedness. DHS Is Encouraging SSAs to Emphasize Resiliency in Their 2010 SSPs Although DHS provides SSAs flexibility when developing their SSPs, given increased attention to resiliency in the 2009 NIPP, NIPP PMO officials have encouraged SSAs to emphasize resiliency in the guidance provided for updating SSPs. One key difference between the guidance for developing the 2007 SSPs and the 2010 SSPs is the inclusion of a resiliency term in many places where there is a reference to protection or protection programs. 
For instance, chapter 5 of the 2006 guidance is entitled “Develop and Implement Protective Programs.” By contrast, chapter 5 of the 2009 guidance is entitled “Develop and Implement Protective Programs and Resiliency Strategies.” Related to this change, DHS has also included instructions for where—and at times, how—resiliency is to be incorporated into 2010 SSPs. For example, in the 2009 guidance set forth in chapter 5, SSAs are advised that in sectors for which infrastructure resiliency is as important as, or more important than, physical security or hardening, their SSA chapter on “Protection Program Implementation” should focus on describing the resiliency measures and strategies being used by the sector. The guidance also provided examples of resiliency measures such as building hazard resistance into initial facility design; designing and developing self-healing and self-diagnosing cyber systems; and incorporating smart materials and embedded sensors into new physical and cyber networks. According to DHS officials in the NIPP PMO, greater attention to interdependencies and cyber security in the NIPP reflects resiliency-related considerations that reinforce the need for SSAs to address systems- and network-based CIKR. We did not examine the 2010 SSPs to determine the extent to which they adhered to DHS’s recent SSP guidance because SSPs were not complete at the time of our review. However, we examined the 2007 SSPs prepared based on 2006 guidance to ascertain the extent to which they contained language about resiliency. Our review showed that 13 of the 17 SSPs used the term resiliency or terms related to resiliency, such as continuity of operations, in their vision statements, goals, or objectives and 14 of 17 included resiliency in their risk management discussions. 
Whereas the discussion of resiliency in the risk management section was relatively limited in some SSPs, the discussion in others—particularly those of the Banking and Finance, Energy, Communications, Postal and Shipping, and Transportation sectors—was relatively extensive. For example, the 2007 National Monuments SSP mentions resilience in the Introduction, in reference to national goals and in a discussion of the importance of CIKR protection in making the nation more resilient. On the other hand, the Banking and Finance and Communications SSPs discuss how resilient these sectors are by design. For example: Banking and Finance Sector: The sector consists of many thousands of depository institutions, securities and futures firms, insurance companies, and other financial service companies, and supports a number of exchanges and over-the-counter markets, all of which contribute to the sector’s resiliency because they provide a high degree of redundancy across the sector. Thus, according to the SSP, the competitive structure of the financial industry and the breadth of the financial instruments provide a level of resiliency against attack and other types of physical or cyber disruptions. The Banking and Finance SSP goes further by listing publications related to resiliency and business continuity planning and notes that the Department of the Treasury encourages security partners to develop, enhance, and test business continuity plans. The SSP states that these plans are designed to preemptively identify the core functions and capabilities necessary to continue operations or resume operations after a disruption. The Banking and Finance SSP also notes that some members of the sector conduct an annual test of business continuity planning.
Communications Sector: Resiliency is achieved by the technology, redundancy, and diversity employed in network design and by customers who employ diverse and resilient primary and backup communications capabilities, thereby increasing the availability of service to customers and reducing the impact of outages. For example, according to the Communications SSP, the network backbone remained intact on September 11, 2001, and during the hurricanes of 2005 despite the enormity of these incidents. We interviewed SSA representatives about the extent to which they had included a discussion of resiliency in their past SSPs, and their plans to expand on their discussion of resilience in their 2010 SSPs. Seventeen of the 18 SSA representatives who responded to our questions told us they believe that they have already included the concept of resiliency in their existing sector plans, although that term itself may not have been used often. These SSA representatives also said that they intend to further incorporate resiliency into their 2010 SSPs where appropriate based on the characteristics of their sectors and their understanding of DHS guidance. However, based on their comments, it is likely that SSAs will not make significant changes to their SSPs with regard to resiliency. For example: Banking and Finance Sector: The SSA representative said they are reviewing the DHS guidance and working with DHS to coordinate perspectives regarding resiliency and to ensure that it remains central to their efforts regarding infrastructure issues. The Banking and Finance SSA representative added that inasmuch as the Department of the Treasury has long focused on the issue of resilience within the financial services sector, any changes to the Banking and Finance SSP concerning resiliency would be modest. Chemical Sector: The SSA representative said the sector has long recognized that “resilient operations and effective loss prevention are a part of managing risk. 
These concepts, when woven together, support the umbrella of resiliency.” The SSA representative said that resiliency, in terms of prevention, protection, response, and recovery along the preparedness spectrum, was covered in the 2007 SSP and that the SSA anticipates highlighting and framing the discussion of these items in terms of resiliency in the 2010 SSP update. Nuclear Sector: The SSA representative responded that, while resiliency is an important goal for some aspects of the Nuclear Sector, most Nuclear Sector programs focus on protection—physical hardening, in addition to other protective strategies—as the underlying goal because of the relatively serious consequences of a successful attack on some nuclear sites. According to the SSA representative, the draft 2010 Nuclear SSP highlights those areas where resilience is most appropriate, while retaining the overall focus on protection. Finally, the SSA representative (a DHS TSA official) for the Transportation and Postal and Shipping sectors said he did not think that DHS merely wanted the SSAs to substitute the term resiliency where the existing plan said “redundancy” or “recovery” and that he would need to clarify the issue with DHS. For a discussion of resiliency in the SSPs, see appendix II. DHS officials in the NIPP PMO told us that the balance between protection and resilience is unique to each sector and that the degree to which any one SSP increases the emphasis on resiliency will depend on the nature of the sector and the risks to its CIKR. They also said they will rely on the sectors themselves to determine the importance of resiliency in their plans. NIPP PMO officials further stated that the emphasis on both protection and resilience in the NIPP helps the private sector appreciate that the NIPP gives it the flexibility to take actions and implement strategies that are tailored to its risks and situation.
DHS officials said that they plan to provide additional guidance or instruction regarding resiliency to any sectors that need additional clarification and expect that it will take time for resiliency to be fully understood and incorporated across the sectors. Agency Comments We requested comments on a draft of this report from the Secretary of Homeland Security. In commenting on this draft, DHS reiterated that it incorporated changes into the NIPP that reflect stakeholder input and the sectors’ experience in protecting critical infrastructure. In addition, DHS said it increased the emphasis on resilience in the 2009 NIPP and directed SSAs to address resilience in the revision of their SSPs. DHS said the changes related to resilience in the 2009 NIPP were not intended to represent a major shift in policy, as the concept of resilience was included in the 2006 NIPP. DHS said that the more explicit emphasis on resilience in the 2009 NIPP is expected to encourage more system-based sector and cross-sector activities that address a broader spectrum of risks. DHS also provided technical comments that we have incorporated as appropriate. We also provided a draft of this report to SSA representatives at the Departments of Agriculture, Defense, Energy, Health and Human Services, Interior, and Treasury and the Environmental Protection Agency and asked them to comment on those areas of the report relevant to their agencies. DOD, Health and Human Services, and the Environmental Protection Agency provided technical comments that we have incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Appendix I: The Concept of Resiliency This appendix discusses how resiliency has been addressed in the context of critical infrastructure and key resource (CIKR) protection since 2006. The concept of resiliency has gained particular importance and application in a number of areas of federal CIKR planning. Both members of Congress and executive branch agencies have addressed resiliency in relation to the recovery of the nation’s critical infrastructure from damage. Accordingly, most of the current focus is on assets, systems, and networks rather than agencies or organizations. Part of the recent discussion over resiliency has focused on the definition of the concept. In February 2006, the Report of the Critical Infrastructure Task Force of the Homeland Security Advisory Council defined resiliency as “the capability of a system to maintain its functions and structure in the face of internal and external change and to degrade gracefully when it must.” Later in 2006, the Department of Homeland Security’s National Infrastructure Protection Plan—again focusing on critical infrastructure, not agencies—defined resilience as “the capability of an asset, system, or network to maintain its function during or to recover from a terrorist attack or other incident.” In May 2008, the House Committee on Homeland Security held a series of hearings focusing on resilience at which government and private sector representatives, while agreeing on the importance of the concept, presented a variety of definitions and interpretations of resilience.
Also, in April 2009, we reported on organizational resiliency, identifying 21 attributes particularly associated with resilience and assigning them to five related categories: emergency planning, organizational flexibility, leadership, workforce commitment, and networked organizations. Likewise, government and academic organizations have discussed how resiliency can be achieved in different ways. Among these are an organization’s robustness (based on protection, for example, better security or the hardening of facilities); the redundancy of primary systems (backups and overlap offering alternatives if one system is damaged or destroyed); and the degree to which flexibility can be built into the organization’s culture (to include continuous communications to assure awareness during a disruption, distributed decision-making power so multiple employees can take decisive action when needed, and being conditioned for disruptions to improve response when necessary). The concepts associated with resiliency, and related concepts—e.g., recovery and reconstitution and continuity of operations—have evolved over the years. Homeland Security Presidential Directive-7 did not contain specific references to resiliency, but it provided instructions to federal agencies to create protection plans for the facilities they own and operate to include “…contingency planning, including the recovery and reconstitution of essential capabilities.” Also, in May 2007 the President issued Homeland Security Presidential Directive 20 - National Continuity Policy. This directive establishes a comprehensive national policy on the continuity of federal government structures and operations and a single National Continuity Coordinator responsible for coordinating the development and implementation of federal continuity policies.
It also establishes "National Essential Functions," directs executive departments and agencies to integrate continuity requirements into operations, and provides guidance for state, local, territorial, and tribal governments, and private sector organizations in order to ensure a comprehensive and integrated national continuity program that is to enhance the credibility of our national security posture and enable a more rapid and effective response to and recovery from a national emergency. As part of Homeland Security Presidential Directive 20, the Secretary of Homeland Security is directed to, among other things, coordinate the implementation, execution, and assessment of continuity operations and activities; develop, lead, and conduct a federal continuity training and exercise program, which shall be incorporated into the National Exercise Program; and develop and promulgate continuity planning guidance to state, local, territorial, and tribal governments, and private sector critical infrastructure owners and operators. In August 2009 the Homeland Security Studies and Analysis Institute released a report that examined the operational framework that could be used by DHS and stakeholders at all levels, both public and private, as a basis for incorporating resilience into our infrastructure and society in order to make the nation safer. This framework approached resilience in terms of three mutually reinforcing objectives: resistance, absorption, and restoration. Resistance is accomplished when the threat or hazard damage potential is limited through interdiction, redirection, avoidance, or neutralization efforts. The entire system experiences less damage than would otherwise be the case. Absorption is accomplished when consequences of a damage-causing event are mitigated. The system experiences damage, but maintains its structure and key functions. It bends, but does not break. 
Restoration is accomplished when the system is rapidly reconstituted and reset to its present status. Key functions are reestablished, possibly at alternative sites or with substitute processes, and possibly at an enhanced level of functionality. Finally, the study includes funding profiles for the resistance, absorption, and restoration objectives dependent upon whether the facility or entity wants to put an emphasis on avoiding damage up front (protection) or the ability to recover from damage quickly. Most recently, the National Infrastructure Advisory Council issued a report on critical infrastructure resilience in September 2009. The study noted that protection and resilience are not opposing concepts and represent complementary and necessary elements of a comprehensive risk management strategy. It examined current government policies and programs for resilience in CIKR sectors. It also focused on identifying measures to achieve sector- and national-level resilience, cross-sector and supply-chain-related issues as they relate to resilience, and measures implemented by individual enterprises. The NIAC made resilience-related recommendations to the President through the DHS Secretary to improve government coordination, clarify roles and responsibilities, and strengthen public-private partnerships and to encourage resilience using market incentives. Appendix II: Discussions of Resiliency in 2007 Sector-specific Plans This appendix discusses how the Sector-specific Agencies (SSAs) addressed resiliency in their 2007 Sector-specific Plans (SSPs) and how the SSAs will address resiliency in their 2010 SSPs. All 17 SSAs in place at the time the 2007 SSPs were developed incorporated resiliency-related terms—resilient, resilience, resiliency, and continuity planning—into their 2007 SSPs. Specifically, 13 of the 17 SSPs used these terms in their vision statements, goals or objectives, and 14 of the 17 used these terms in their risk management plans. 
Given the increased attention to resiliency in the 2009 National Infrastructure Protection Plan (NIPP), NIPP Program Management Office (PMO) officials encouraged SSAs to devote more attention to resiliency in their 2010 SSPs. Since the Department of Homeland Security (DHS) does not expect these plans to be released until 2010, we contacted representatives of the 18 SSAs to gather information on their plans to adhere to DHS’s revised SSP guidance. Representatives of 7 of the 18 sectors—Agriculture and Food, Communications, Government Facilities, Healthcare and Public Health, Information Technology (IT), Postal and Shipping, and Transportation—responded that they intend to devote greater attention to resiliency, and representatives of 10 of the 18 sectors—Banking and Finance, Chemical, Commercial Facilities, Dams, Defense Industrial Base, Emergency Services, Energy, National Monuments, Nuclear, and Water—responded that they intend to devote the same amount of attention to resiliency as in their 2007 SSPs. Finally, a representative of 1 of the 18 sectors—Critical Manufacturing—responded that the sector’s first SSP, to be released in 2010, will describe the sector’s strategy to increase resiliency and prevent, deter, and mitigate any disruptions caused by man-made threats or natural disasters. The following table gives an overview of how resilience was referenced in the 2007 SSPs and how sector representatives stated they will address resiliency in their 2010 SSPs. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contacts and Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, John Mortin, Assistant Director, and Tony DeFrank, Analyst-in-Charge, managed this assignment with assistance from Christy Bilardo and Landis Lindsey. Michele Fejar and Steven Putansu assisted with design and methodology.
Tracey King and Thomas Lombardi provided legal support and Lara Kaskie provided assistance in report preparation. GAO Products Related to Critical Infrastructure Protection Critical Infrastructure Protection The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors' Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Critical Infrastructure Protection: Challenges for Selected Agencies and Industry Sectors. GAO-03-233. Washington, D.C.: February 28, 2003. Critical Infrastructure Protection: Commercial Satellite Security Should Be More Fully Addressed. GAO-02-781. Washington, D.C.: August 30, 2002. Cyber Security Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 2009. Cybersecurity: Continued Federal Efforts Are Needed to Protect Critical Systems and Information. GAO-09-835T. Washington, D.C.: June 25, 2009. Information Security: Cyber Threats and Vulnerabilities Place Federal Systems at Risk. GAO-09-661T. Washington, D.C.: May 5, 2009. National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation's Posture. GAO-09-432T. Washington, D.C.: March 10, 2009. 
Critical Infrastructure Protection: DHS Needs to Better Address Its Cybersecurity Responsibilities. GAO-08-1157T. Washington, D.C.: September 16, 2008. Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008. Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008. Critical Infrastructure Protection: Further Efforts Needed to Integrate Planning for and Response to Disruptions on Converged Voice and Data Networks. GAO-08-607. Washington, D.C.: June 26, 2008. Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008. Critical Infrastructure Protection: Sector-Specific Plans' Coverage of Key Cyber Security Elements Varies. GAO-08-64T. Washington, D.C.: October 31, 2007. Critical Infrastructure Protection: Sector-Specific Plans’ Coverage of Key Cyber Security Elements Varies. GAO-08-113. Washington, D.C.: October 31, 2007. Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007. Critical Infrastructure Protection: DHS Leadership Needed to Enhance Cybersecurity. GAO-06-1087T. Washington, D.C.: September 13, 2006. Critical Infrastructure Protection: Challenges in Addressing Cybersecurity. GAO-05-827T. Washington, D.C.: July 19, 2005. Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities. GAO-05-434. Washington, D.C.: May 26, 2005. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004. Technology Assessment: Cybersecurity for Critical Infrastructure Protection. GAO-04-321. Washington, D.C.: May 28, 2004. 
Critical Infrastructure Protection: Establishing Effective Information Sharing with Infrastructure Sectors. GAO-04-699T. Washington, D.C.: April 21, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-628T. Washington, D.C.: March 30, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-354. Washington, D.C.: March 15, 2004. Posthearing Questions from the September 17, 2003, Hearing on “Implications of Power Blackouts for the Nation's Cybersecurity and Critical Infrastructure Protection: The Electric Grid, Critical Interdependencies, Vulnerabilities, and Readiness”. GAO-04-300R. Washington, D.C.: December 8, 2003. Critical Infrastructure Protection: Challenges in Securing Control Systems. GAO-04-140T. Washington, D.C.: October 1, 2003. Critical Infrastructure Protection: Efforts of the Financial Services Sector to Address Cyber Threats. GAO-03-173. Washington, D.C.: January 30, 2003. High-Risk Series: Protecting Information Systems Supporting the Federal Government and the Nation's Critical Infrastructures. GAO-03-121. Washington, D.C.: January 1, 2003. Critical Infrastructure Protection: Federal Efforts Require a More Coordinated and Comprehensive Approach for Protecting Information Systems. GAO-02-474. Washington, D.C.: July 15, 2002. Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately Controlled Systems from Computer-Based Attacks. GAO-01-1168T. Washington, D.C.: September 26, 2001. Critical Infrastructure Protection: Significant Challenges in Protecting Federal Systems and Developing Analysis and Warning Capabilities. GAO-01-1132T. Washington, D.C.: September 12, 2001. Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-1005T. Washington, D.C.: July 25, 2001. 
Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-769T. Washington, D.C.: May 22, 2001. Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities. GAO-01-323. Washington, D.C.: April 25, 2001. Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination. GAO/T-AIMD-00-268. Washington, D.C.: July 26, 2000. Critical Infrastructure Protection: Comments on the Proposed Cyber Security Information Act of 2000. GAO/T-AIMD-00-229. Washington, D.C.: June 22, 2000. Critical Infrastructure Protection: “ILOVEYOU” Computer Virus Highlights Need for Improved Alert and Coordination Capabilities. GAO/T-AIMD-00-181. Washington, D.C.: May 18, 2000. Critical Infrastructure Protection: National Plan for Information Systems Protection. GAO/AIMD-00-90R. Washington, D.C.: February 11, 2000. Critical Infrastructure Protection: Comments on the National Plan for Information Systems Protection. GAO/T-AIMD-00-72. Washington, D.C.: February 1, 2000. Critical Infrastructure Protection: Fundamental Improvements Needed to Assure Security of Federal Operations. GAO/T-AIMD-00-7. Washington, D.C.: October 6, 1999. Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences. GAO/AIMD-00-1. Washington, D.C.: October 1, 1999. Defense Critical Infrastructure Protection Defense Critical Infrastructure: Actions Needed to Improve Identification and Management of Electrical Power Risks and Vulnerabilities to DoD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009. Defense Critical Infrastructure: Actions Needed to Improve the Consistency, Reliability, and Usefulness of DOD’s Tier 1 Task Critical Asset List. GAO-09-740R. Washington, D.C.: July 17, 2009.
Defense Critical Infrastructure: Developing Training Standards and an Awareness of Existing Expertise Would Help DOD Assure the Availability of Critical Infrastructure. GAO-09-42. Washington, D.C.: October 30, 2008. Defense Critical Infrastructure: Adherence to Guidance Would Improve DOD’s Approach to Identifying and Assuring the Availability of Critical Transportation Assets. GAO-08-851. Washington, D.C.: August 15, 2008. Defense Critical Infrastructure: DOD’s Risk Analysis of Its Critical Infrastructure Omits Highly Sensitive Assets. GAO-08-373R. Washington, D.C.: April 2, 2008. Defense Infrastructure: Management Actions Needed to Ensure Effectiveness of DOD’s Risk Management Approach for the Defense Industrial Base. GAO-07-1077. Washington, D.C.: August 31, 2007. Defense Infrastructure: Actions Needed to Guide DOD’s Efforts to Identify, Prioritize, and Assess Its Critical Infrastructure. GAO-07-461. Washington, D.C.: May 24, 2007. Electrical Power Electricity Restructuring: FERC Could Take Additional Steps to Analyze Regional Transmission Organizations' Benefits and Performance. GAO-08-987. Washington, D.C.: September 22, 2008. Department of Energy, Federal Energy Regulatory Commission: Mandatory Reliability Standards for Critical Infrastructure Protection. GAO-08-493R. Washington, D.C.: February 21, 2008. Electricity Restructuring: Key Challenges Remain. GAO-06-237. Washington, D.C.: November 15, 2005. Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005. Electricity Restructuring: Action Needed to Address Emerging Gaps in Federal Information Collection. GAO-03-586. Washington, D.C.: June 30, 2003. Restructured Electricity Markets: Three States' Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002. Energy Markets: Results of FERC Outage Study and Other Market Power Studies. GAO-01-1019T. Washington, D.C.: August 2, 2001. 
Other Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003. Critical Infrastructure Protection: Significant Challenges Need to Be Addressed. GAO-02-961T. Washington, D.C.: July 24, 2002. Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002.
According to the Department of Homeland Security (DHS), there are thousands of facilities in the United States that if destroyed by a disaster could cause casualties, economic losses, or disruptions to national security. The Homeland Security Act of 2002 gave DHS responsibility for leading and coordinating the nation's effort to protect critical infrastructure and key resources (CIKR). Homeland Security Presidential Directive 7 (HSPD-7) defined responsibilities for DHS and certain federal agencies, known as sector-specific agencies (SSAs), that represent 18 industry sectors, such as energy. In accordance with the Homeland Security Act and HSPD-7, DHS issued the National Infrastructure Protection Plan (NIPP) in June 2006 to provide the approach for integrating the nation's CIKR. GAO was asked to study DHS's January 2009 revisions to the NIPP in light of a debate over whether DHS has emphasized protection (to deter threats, mitigate vulnerabilities, or minimize the consequences of disasters) rather than resilience (to resist, absorb, or successfully adapt, respond to, or recover from disasters). This report discusses (1) how the 2009 NIPP changed compared to the 2006 NIPP and (2) how DHS and SSAs addressed resiliency as part of their planning efforts. GAO compared the 2006 and 2009 NIPPs, analyzed documents, including NIPP Implementation Guides and sector-specific plans, and interviewed DHS and SSA officials from all 18 sectors about their process to identify potential revisions to the NIPP and address resiliency. Compared to the 2006 NIPP, DHS's 2009 update to the NIPP incorporated various changes, including a greater emphasis on regional CIKR protection planning and updates to DHS's overall risk management framework, such as instructions for sectors to develop metrics to gauge how well programs reduced the risk to their sector.
For example, in the 2006 NIPP, DHS encouraged stakeholders to address CIKR across sectors within and across geographic regions; by contrast, the 2009 NIPP called for regional coordination through the formation of a consortium of representatives from multiple regional organizations. DHS also enhanced its discussion of risk management methodologies in the 2009 NIPP. The 2006 NIPP listed the minimum requirements for conducting risk analyses, while the 2009 NIPP includes the use of a common risk assessment approach, including the core criteria for these analyses to allow the comparison of risk across sectors. DHS officials said that the changes highlighted in the 2009 NIPP were the result of knowledge gained and issues raised during discussions with partners and outside organizations like GAO. DHS has also issued guidance for SSAs to consider revisions to the NIPP when updating their sector-specific plans (SSPs). Fourteen of 18 SSA representatives that responded to our query said they used a process similar to DHS's to incorporate NIPP changes into their SSPs. They reported that they intend to discuss the expectations for the SSP with DHS, draft the SSP based on their knowledge of their sectors, and obtain input and feedback from stakeholders. DHS increased its emphasis on resiliency in the 2009 NIPP by discussing it with the same level of importance as protection. While the 2009 NIPP uses much of the same language as the 2006 NIPP to describe resiliency, the 2006 NIPP primarily treated resiliency as a subset of protection while the 2009 NIPP generally referred to resiliency alongside protection. For example, while the Managing Risk chapter of the 2006 NIPP has a section entitled "Characteristics of Effective Protection Programs," the same chapter in the 2009 NIPP has a section entitled, "Characteristics of Effective Protection Programs and Resiliency Strategies." 
DHS officials stated that these changes are not a major shift in policy; rather they are intended to raise awareness about resiliency as it applies within individual sectors. Furthermore, they stated that there is a greater emphasis on resilience in the 2009 NIPP to encourage more sector and cross-sector activities to address a broader spectrum of risks, such as cyber security. DHS officials also used guidance to encourage SSAs to devote more attention to resiliency. For example, in the 2009 guidance, SSAs are advised that in sectors where infrastructure resiliency is as or more important than physical security, they should focus on describing the resiliency measures and strategies being used by the sector. The 2010 updates to the SSPs are due to be released by DHS in mid-2010 and all sector representatives who responded to our questions said they will address the issue as is appropriate for their sectors. In commenting on a draft of this report, DHS reiterated its process for updating the NIPP and its views on resiliency.
Background GFOs are officers in the four ranks of brigadier general and above (for the Navy, rear admiral and above). GFOs are senior officers with high-level interagency, intergovernmental, and multinational responsibilities. These officers plan and implement military operations by integrated military forces across the domains of land, sea, air, and space. Table 1 displays the pay grade, title of rank, and insignia worn by GFOs. GFOs are assigned based on statutory limits and requirements. Congress establishes statutory limits on the number and distribution across each rank of GFOs for each of the services and joint staff. For fiscal year 2014, Title 10 of the U.S. Code mandated service-specific ceilings totaling 652 active duty GFOs for all services. In addition to the service-specific GFO positions, for fiscal year 2014, Title 10 also specifies 310 GFO positions to be designated by the Secretary of Defense for joint duty positions. These positions are not included in the service ceilings. DOD determines GFO requirements—the number of GFOs DOD components need—by determining the number of positions that should be filled by a GFO. DOD provides active duty personnel with a compensation package made up of cash, such as pay and allowances; noncash benefits, such as health care; and deferred compensation, such as retirement pensions and health benefits. We have previously reported on the costs of military compensation and found that there is variability in how compensation is defined. GFOs receive the same compensation as other active duty personnel, and may also be provided with security details, travel on military aircraft, aides, and funds for official entertainment and representation functions. Enlisted personnel may be detailed as enlisted aides on the personal staffs of GFOs. Officers may also be detailed as officer aides to the personal staffs of GFOs. Various organizations across DOD are responsible for tracking GFOs and associated costs. 
Within the Office of the Under Secretary of Defense for Personnel and Readiness, the Officer and Enlisted Personnel Management office is responsible for GFO matters, including officer promotion and continuation policies and oversight of the number of GFOs in relation to statutory limits. Also within the Office of the Under Secretary of Defense for Personnel and Readiness, the Military Compensation office is responsible for formulating, implementing, and administering DOD policy on military personnel compensation, including active duty and reserve military pay and allowances. The Director, Cost Assessment and Program Evaluation, is the principal advisor to the Secretary of Defense and other senior officials in DOD for independent cost assessment, program evaluation, and analysis. In collaboration with the Under Secretary of Defense for Personnel and Readiness and the Under Secretary of Defense (Comptroller), this office developed a tool to collect cost elements required to estimate the full cost of military and civilian personnel as outlined in DOD Instruction 7041.04. In September 2013, we reported that while this effort has improved DOD’s ability to estimate the full cost of personnel, there are limitations in certain areas, such as the lack of specific guidance for estimating certain costs. We recommended, among other things, that DOD develop further guidance on certain cost elements. DOD partially concurred with this recommendation, but noted that the department will issue clarifying guidance where necessary or appropriate. We continue to believe that fully addressing this recommendation would enhance the development of DOD’s methodology for estimating and comparing the cost of its workforces. DOD Instruction 7041.04 specifies that the Director, Cost Assessment and Program Evaluation, is responsible for preparing clarifying guidance as needed for implementing the instruction. 
The Secretary of Defense may authorize physical protection and personal security within the United States for departmental leadership who, based on their positions, require continuous security and protection. The Secretary of Defense also may authorize protection for additional personnel within the United States when necessary, as provided by law. The Under Secretary of Defense for Policy is the approving authority for designating individuals who are outside the United States as high-risk personnel and authorizing protective security details for those individuals. Each of the services and the Joint Staff has a general or flag officer matters office responsible for management and tracking of GFOs within their organization, which identifies, assigns, and tracks aides to GFOs. The services also track certain GFO costs, including official representation expenditures and costs associated with providing personal security details for eligible GFOs. The Under Secretary of Defense (Comptroller) is responsible for the budget and financial management policy of DOD, including budget data, justification materials, and performance measures. The Comptroller’s office also formulates composite rates that reflect the estimated cost to DOD of compensation for all ranks of military personnel, including GFOs, and prepares annual reports on the costs associated with housing units used as quarters for GFOs (GFO housing). The Defense Travel Management Office and the Defense Logistics Agency manage the Defense Travel System, which is designed to capture travel costs for GFOs and other military personnel. The Defense Health Agency tracks health expenditures for GFOs and other military personnel, and the Defense Manpower Data Center maintains GFO and other military personnel population data, including end strengths. 
GFOs, along with members of the Senior Executive Service, have the authority to use official representation funds to host guests of the United States and DOD, such as civilian or military dignitaries and officials of foreign governments. GFO Population Growth Was Generally Consistent with the Growth in Statutory Limits but Was Higher Than Enlisted Personnel Growth, and DOD Has Not Updated GFO Requirements since 2003 The GFO population has experienced higher rates of growth than the enlisted population since fiscal year 2001, but DOD has not comprehensively updated GFO requirements since 2003 to reflect changes in the active duty force. This growth in the GFO population was generally consistent with the growth in statutory limits. In addition, growth varied across all of the active duty military personnel populations (i.e., GFOs, non-GFO officers, and enlisted personnel), with the most growth experienced by non-GFO officers—officers at or below the rank of colonel/captain. DOD officials stated that there continues to be a need for more GFOs than are authorized by Congress, but added that the department has not comprehensively updated GFO requirements since 2003 or advocated for increased GFO statutory limits because of the recent fiscal constraints faced by the department. However, without periodically conducting a comprehensive update of DOD’s GFO requirements, it will be difficult for DOD to help ensure that resources are properly matched to the needs of today’s environment. GFO Population Growth Was Generally Consistent with the Growth in Statutory Limits The GFO population grew from 871 in fiscal year 2001 to 943 in fiscal year 2013, an 8 percent overall increase. This growth was consistent with the growth in statutory limits for GFO positions, from 889 in fiscal year 2001 to 962 in fiscal year 2013, also an 8 percent increase (see fig. 1). 
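The growth rates above can be cross-checked from the reported end strengths and statutory limits. The short calculation below is illustrative only; the `pct_change` helper is ours, not part of DOD's or GAO's methodology, and the inputs are the figures quoted in this section:

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

# Figures quoted in this section
gfo_2001, gfo_2013 = 871, 943        # GFO end strength
limit_2001, limit_2013 = 889, 962    # statutory limits on GFO positions

print(pct_change(gfo_2001, gfo_2013))      # 8
print(pct_change(limit_2001, limit_2013))  # 8
```

Both series round to the same 8 percent increase, which is why the report characterizes GFO population growth as generally consistent with the growth in statutory limits.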
According to DOD officials, the growth in the GFO population is attributable in part to increases in the number of commands; growth in the number of headquarters staff members needed to support overseas contingency operations; demand for GFOs to support overseas contingency operations; and congressionally directed GFO positions, such as the director of DOD’s Sexual Assault Prevention and Response Office. Our body of work has found that new commands and headquarters organizations have created additional requirements for military personnel, including GFO positions, and we have recommended that DOD take action to consolidate or eliminate military commands that are geographically close or have similar missions, to seek opportunities to centralize headquarters functions, and to periodically evaluate whether the size and structure of commands meet assigned missions. DOD generally concurred with our recommendations and specified steps it will take in response, such as revising agency guidance and establishing timelines for DOD organizations to review data on command personnel. As shown in figure 1, from fiscal years 2006 through 2009 GFO end strengths were above statutory limits. However, DOD officials explained that these end strength data do not reflect exemptions applied by DOD to certain GFO positions during those years, which allowed the services to exceed statutory limits on the numbers of GFOs. Specifically, GFOs who were on terminal leave immediately prior to retiring were included in end strength data, but were exempt from counting toward statutory limits. Also, 10 U.S.C. § 527 provides the President with authority to suspend the statutory limits on GFO numbers in time of war or national emergency. DOD provided data for fiscal years 2011 through 2013 that offered more detail on the department’s use of exemptions to manage GFO numbers against statutory limits. These data showed that the services claimed an average of 30 exemptions per fiscal year for GFOs. 
DOD officials stated that it has been necessary to use such exemptions to manage the GFO population since the population has been consistently at the statutory limits, and the department needed flexibility to transition GFOs into their new assignments with sufficient time to allow for knowledge transfer from the GFOs they were replacing. The officials added that they expect to continue using exemptions as the GFO population is reduced, to preserve the department’s flexibility in managing GFO assignments. The Growth of Active Duty Military Personnel Varied across Populations From fiscal years 2001 through 2013, as the nation’s military engaged in combat operations in Iraq and Afghanistan, all populations of active duty military personnel experienced periods of growth, with the total population peaking at 1,417,200 in fiscal year 2010 (see fig. 2). However, this growth was not consistently distributed across military populations. For example, growth of the GFO population after fiscal year 2005 outpaced that of the enlisted active duty military population and has remained higher through fiscal year 2013. As the military began to draw down after fiscal year 2010, enlisted personnel dropped below fiscal year 2001 levels while officers remained higher. Moreover, the ratios of enlisted to non-GFO officers and enlisted to GFOs are both at their lowest levels since prior to 2001 (5:1 and 1,200:1, respectively). DOD reported in a 1988 officer requirements study that decreases in the enlisted to officer ratio could reflect changing requirements for personnel. 
For example, the study stated that a new weapon system may need fewer crew members to operate it without changing the number of officers needed to lead the units that use the system, or there may be new requirements for officers in joint-service assignments and in research, development, or contracting activities. DOD officials stated that during military drawdowns, decreases in the officer population tend to lag behind those of the enlisted population, and attributed this to the greater flexibility that military planners have to decrease the enlisted population by rotating them out of the active duty force, while officers tend to remain in theater to manage the drawdown effort. The officials added that because the officer population is much smaller than the enlisted population, relatively small changes to this population can have a greater relative effect, and that they expect the officer population to follow a decrease similar to the enlisted population’s in future years as senior officers retire. DOD Has Not Updated GFO Requirements to Reflect Changes in the Active Duty Force since 2003 The last comprehensive update of DOD’s GFO requirements was in 2003, when Congress mandated that DOD study GFO statutory limits and provide an assessment of whether statutory limits were sufficient to meet all GFO requirements. Since that time, DOD has added new commands and organizations, including U.S. Africa Command (2007), U.S. Cyber Command (2010), the Sexual Assault Prevention and Response Office (2006), and the Defense Health Agency (2013), all of which require additional GFOs for senior leadership positions. For example, the fiscal year 2012 National Defense Authorization Act mandated that the Director position at the Sexual Assault Prevention and Response Office be elevated and filled by either a GFO or a senior executive civilian DOD employee. The office’s current director is now the third GFO to have served in that position since the statute was passed. 
Also, in November 2013 we found that while the creation of the Defense Health Agency was intended to reduce personnel costs, the organization added new GFO positions at the two- and three-star ranks while retaining existing GFO positions at the service level. We recommended, among other things, that DOD provide Congress with a more thorough explanation of the potential sources of cost savings and an estimate of the number of military, civilian, and contractor personnel who will work in the organization when it reaches full operating capability. DOD concurred with our recommendations. Also, in the past decade, the military concluded the Iraq war and is currently in the process of reducing its presence in Afghanistan. DOD’s 2003 study found that GFO requirements were higher than the GFO statutory limit of 1,311 positions by 319 positions (24 percent). However, DOD has not conducted a comprehensive update of GFO requirements since then. In April 2004 we reviewed DOD’s 2003 GFO requirements study and recommended, among other things, that DOD periodically update GFO requirements. DOD concurred with our recommendation at the time, stating that a requirements database maintained by each of the military services was adequate to update requirements. However, as we noted in our 2004 report, DOD’s process for updating the requirements database was not comprehensive. Also, as previously discussed, the ratio of enlisted personnel to GFOs is at its lowest level since prior to 2001. These types of changes to the active duty force suggest that an updated comprehensive validation of GFO requirements against statutory limits would help the department ensure that resources are properly matched to the needs of today’s environment. 
DOD completed a study in 2011 to determine opportunities for efficiency gains in the GFO corps; the study noted that the objective was not to determine GFO requirements but instead to identify organizational efficiencies that would allow more effective alignment of the force to the priority missions of the department. As such, the study did not identify, assess, and validate positions that the department believes should be filled by GFOs, nor did it assess the impact of any shortfall of GFOs on the department’s mission. The study was conducted in response to direction from the Secretary of Defense to review all active duty GFO positions and their associated overhead and determine how to reallocate positions such that at least 50 GFO positions would be eliminated within 2 years. The study identified 73 positions for elimination within 2 years, and an additional 28 eliminations based on conditions in overseas contingency operations. DOD officials said that these reductions are not complete because of continuing overseas contingency operations and the need to wait for GFOs serving in positions identified for elimination to retire. In commenting on the study, some services disagreed with the study’s recommendation to reduce GFO positions, noting that GFO requirements had increased and that GFO reductions were not distributed fairly across all of the services. DOD officials told us that there continues to be a need for more GFOs than are authorized by Congress, but added that the department has not comprehensively updated GFO requirements since 2003 or advocated for increased GFO statutory limits because of the recent fiscal constraints faced by the department. 
Without conducting a comprehensive update of DOD’s GFO requirements—to include identifying, assessing, and validating positions that the department believes should be filled by GFOs; defining the circumstances under which subsequent updates should occur; and assessing whether GFO statutory limits are sufficient to meet GFO requirements—it will be difficult for DOD to ensure that the GFO corps is properly sized and structured. It will also be difficult for DOD to ensure that the department can identify opportunities for managing these personnel and their associated resources more efficiently. The Full Cost of Active Duty GFOs Is Unknown; Trends in Available Costs Varied The full cost to DOD for active duty GFOs from fiscal years 2001 through 2013 is unknown because complete cost data for GFOs and their aides were not available. Data for compensation and housing were fully available, and trends for those costs varied from fiscal years 2001 through 2013. Other costs, such as commercial travel and per diem and military and government air travel, were either partially complete or unavailable, thus affecting our ability to identify trends for those costs. Our work found that data availability was affected by reporting practices, retention policies, inconsistent definitions for certain cost elements, and reliability factors. DOD guidance states that DOD officials must be aware of the full costs of manpower, use these costs to support workforce allocation decisions, and have a thorough understanding of the implications of those costs to DOD and, on a broader scale, to the federal government. By defining in guidance the officer aide position and GFO and associated aide costs, DOD will be in a better position to help ensure that a consistent approach is employed when estimating GFO and associated aide costs, better account for the full costs of GFOs, and improve its ability to make sound workforce allocation decisions. 
Availability of GFO and Aide Cost Data Varied The full cost to DOD for GFOs from fiscal years 2001 through 2013 is unknown because complete cost data for GFOs and their aides were not available. Elements of active duty GFO costs varied in availability from fiscal years 2001 through 2013. We assessed cost elements as complete, partially complete, or unavailable. Figure 3 depicts the extent to which GFO costs were included in our review depending on factors such as reporting practices, data retention policies, and data reliability factors, including completeness and accuracy. Certain GFO cost data are reported in budget materials or other formal reports produced by DOD and were readily available for all of the years included in our review. For example, population and cost data related to GFO compensation were available in DOD budget materials and cost memorandums for fiscal years 2001 through 2013. Similarly, cost data related to GFO housing were reported annually per Title 10 of the U.S. Code, for all of the fiscal years included in our review. In addition, using DOD budget materials, we obtained data needed to estimate the tax expenditure resulting from a portion of GFO compensation being tax exempt. Partial Data Certain cost data within the scope of our review were not available for all years. For example, complete cost data related to GFO commercial travel and per diem expenditures were available from the Defense Travel System for fiscal years 2009 through 2013. According to DOD officials, the services did not fully transition to this system until fiscal year 2009, and cost data predating that transition were spread across disparate systems and not captured in a consistent manner. Similarly, GFO health care cost data were not available prior to fiscal year 2003, because, according to Defense Health Agency officials, DOD transitioned to a new software system for tracking these costs in fiscal year 2003 and legacy data were not migrated to the new system. 
Cost data needed to calculate enlisted and officer aide compensation costs were also partially complete. For example, the Navy and the Marine Corps were unable to provide historical data on officer aides, such as name, rank, and overall numbers. According to DOD officials, the position of officer aide is not defined in departmental guidance because these positions are not established in statute. As a result, the military services were not able to provide consistent data for these personnel. Additionally, the Marine Corps tracked enlisted aide numbers by calendar instead of fiscal year, and the Navy was unable to provide enlisted aide numbers and ranks for more than 2 fiscal years. Table 2 depicts the availability of population data needed to calculate aide compensation. Some GFO cost data were not readily available or were determined to be unreliable—such as cost data related to GFO travel on military and government flights and cost data associated with providing personal security details to GFOs. While the magnitude of unavailable cost data associated with GFO travel on military and government flights is unclear, the Secretary of Defense has designated certain high-ranking GFOs, including the Chairman and Vice Chairman of the Joint Chiefs of Staff, service chiefs, and combatant commanders as “required use” travelers for official air travel because of threats, secure communications requirements, or scheduling requirements that make commercial travel unacceptable. Other GFOs are not designated as required use travelers, but may use U.S. government aircraft for official travel when the travel complies with specified criteria and when the demands of their travel prevent the use of commercial aircraft. 
Similarly, while we obtained personal security cost data covering 27 organizations—including the combatant commands, services, and other DOD organizations—DOD officials told us that the data were likely underreported, and the costs could not be separated by fiscal year or adjusted for inflation. Further, DOD officials said that when asked for these data, service officials did not include consistent information (such as costs associated with compensation and travel, equipment, weapons, and vehicles of security personnel) because of the lack of a department-wide definition for security detail costs. As a result, these cost data were excluded from our analysis. Certain cost data related to enlisted and officer aides were also unavailable. For example, the Army, the Navy, and the Marine Corps did not track enlisted and officer aides by name and fiscal year and were therefore unable to provide complete and accurate aide travel and per diem cost data. Similarly, the Army and the Navy were unable to provide enlisted and officer aide population data of sufficient detail—to include names and duration in aide position—to identify aide housing costs. Although the Air Force provided detailed aide population data, it did not track aide housing costs. An official from the Marine Corps told us that Marine aides were generally receiving a basic housing allowance, with one enlisted aide residing in the Commandant’s quarters. In the absence of enlisted and officer aide population data of sufficient consistency and completeness, we were also unable to estimate the tax expenditure associated with housing and subsistence allowances provided to aides. The availability of enlisted and officer aide cost data was affected by the extent to which the services defined and tracked aide personnel. DOD officials told us that there is no department-wide definition for the position of officer aide, and the Air Force is the only service that has established a definition. 
Prior to March 2011, the secretaries of the military departments were required to provide the Principal Deputy Under Secretary of Defense for Personnel and Readiness biannual reports of enlisted aide authorizations by military service and by GFO position. However, that requirement was rescinded by the Secretary of Defense in March 2011 as part of DOD’s efficiencies initiatives, and officials from the Office of the Under Secretary of Defense for Personnel and Readiness were not able to provide us with reports covering all of the years prior to the removal of this requirement. Recognizing the need to improve oversight of GFO costs, including costs associated with enlisted aides, DOD officials stated that the department plans to reinstate the biannual reporting requirement for enlisted aide authorizations in an upcoming revision to the instruction, but has not established a time frame for completing the revision. Reinstating this requirement could help DOD to track the number of enlisted aides, along with certain related costs, such as compensation. However, the department does not plan to include officer aide population data in this instruction. Without a similar reporting requirement for officer aide population data, DOD will not be able to improve the availability of officer aide costs, such as compensation. As previously discussed, we recommended in September 2013 that DOD develop further guidance under DOD Instruction 7041.04, Estimating and Comparing the Full Costs of Civilian and Active Duty Military Manpower and Contract Support (July 3, 2013), on certain cost elements, such as training; develop business rules for estimating Reserve and National Guard costs; evaluate inclusion or non-inclusion of cost elements related to retirement; assess cost models being used across the department; and reassess sources for contractor data. DOD generally concurred with our recommendations but has not completed actions on them. 
Moreover, standards for internal control in the federal government state that financial data are needed for external and internal uses, to make operating decisions and to allocate resources, while federal accounting standards similarly emphasize the need for managers to have relevant and reliable information on the full costs of activities and changes in those costs and the need for appropriate procedures to enable the collection, analysis, and communication of cost information. DOD has been on our High-Risk List for financial management since 1995 because of financial management weaknesses that affect its ability to control costs; ensure accountability; anticipate future costs and claims on the budget; detect fraud, waste, and abuse; and prepare auditable financial statements. We have reported that while DOD has made efforts to improve financial management, it still has much work to do if it is to meet its long-term goals of improving financial management and achieving full financial statement auditability. By defining the officer aide position and GFO and associated aide costs, DOD will be better positioned to help ensure that a consistent approach is employed when estimating GFO and associated aide costs, better account for the full costs of GFOs, and improve its ability to make sound workforce allocation decisions. Trends in Complete and Partially Complete GFO and Aide Costs Varied GFO Compensation Growth Was Consistent with That of Other Officers but Was Outpaced by Enlisted Personnel Compensation Growth From fiscal years 2001 through 2013, GFO and other active duty personnel compensation costs increased, with enlisted personnel experiencing the highest percentage per capita cost growth. The following information summarizes changes in inflation-adjusted costs to DOD to provide compensation for active duty GFOs, non-GFO officers, and enlisted personnel, as shown in figures 4 and 5. GFOs. 
Total compensation costs grew from $199.4 million in fiscal year 2001 to $274.4 million in fiscal year 2013 (38 percent). Per capita costs grew from $228,129 to $268,187 (18 percent). Non-GFO officers. Total compensation costs grew from $26.8 billion in fiscal year 2001 to $36.8 billion in fiscal year 2013 (38 percent). Per capita costs grew from $123,255 to $146,472 (19 percent). Enlisted personnel. Total compensation costs grew from $64 billion in fiscal year 2001 to $86.7 billion in fiscal year 2013 (35 percent). Per capita costs grew from $55,325 to $73,056 (32 percent). Figure 4 shows the percentage change in total compensation costs from fiscal years 2001 through 2013, and figure 5 depicts the percentage change in per capita compensation costs. As shown in figure 5, enlisted personnel experienced the highest per capita growth of the three populations we compared. Per capita costs represent averages of what it costs DOD to compensate military personnel across different ranks and services. We reported in March 2012 that the costs of military compensation have grown significantly, in part because of increases in basic pay and deferred compensation, such as health care benefits, for which DOD officials anticipate significant continued growth because of expansions in health care coverage. Such increases may explain the growth in per capita costs for GFOs, non-GFO officers, and enlisted personnel. Cost growth specific to the GFOs and non-GFO officers may also be attributed in part to the growth in each of these populations, which increased at rates of 8 percent and 10 percent, respectively. In contrast, the population of enlisted personnel in fiscal year 2013 was 2 percent below fiscal year 2001 levels, yet compensation costs for these personnel remained 35 percent higher, outpacing growth for both GFO and non-GFO officers. 
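As a rough cross-check, the total and per capita percentages above can be recomputed from the dollar figures quoted in this section. This sketch is illustrative only; it simply rounds endpoint-to-endpoint changes and assumes the report's own inflation-adjusted figures:

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

# Inflation-adjusted figures quoted in this section, FY2001 -> FY2013
total_gfo = pct_change(199.4e6, 274.4e6)       # total GFO compensation, dollars
total_enlisted = pct_change(64e9, 86.7e9)      # total enlisted compensation
per_capita_gfo = pct_change(228_129, 268_187)
per_capita_non_gfo = pct_change(123_255, 146_472)
per_capita_enlisted = pct_change(55_325, 73_056)

print(total_gfo, total_enlisted)                                # 38 35
print(per_capita_gfo, per_capita_non_gfo, per_capita_enlisted)  # 18 19 32
```

Because the report's percentages are rounded from unrounded budget figures, a recomputation from the rounded dollar amounts can occasionally differ by a percentage point.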
For the purposes of this report, compensation costs were calculated using DOD’s composite standard pay rates, which include the following military personnel appropriation costs: average basic pay plus retired pay accrual, Medicare-eligible retiree health care accrual, basic allowance for housing, basic allowance for subsistence, incentive and special pay, permanent change of station expenses, and miscellaneous pay. These rates do not include the tax expenditure resulting from the federal government not collecting taxes on basic allowances for housing and subsistence. From fiscal years 2001 through 2013, the GFO tax expenditure fluctuated but fell overall by approximately 1 percent, from an average of $10,689 to an average of $10,604, based on our calculations. Our total and per capita costs using the composite rates for GFOs, non-GFO officers, and enlisted personnel are based on active duty average strengths from fiscal years 2001 through 2013. Appendix I provides additional information regarding our approach to calculating total and per capita compensation costs for both GFOs and other active duty personnel. Trends in Other GFO Costs Varied Trends in other complete or partially complete GFO costs—such as housing, travel and per diem, and official representation—varied from fiscal years 2001 through 2013. The following sections describe these costs across the fiscal years for which data were available and reliable for the purposes of our review. GFO Military Housing Costs from Fiscal Years 2001 through 2013 Inflation-adjusted GFO housing costs, depicted in figure 6, decreased from $33.4 million in fiscal year 2001 to approximately $10.9 million in fiscal year 2013—an overall decline of 67 percent. These costs include operations, maintenance, utility, lease, and repair costs associated with DOD-owned and leased properties, as well as certain costs associated with privatized housing. 
According to DOD officials, DOD’s housing privatization initiative contributed to the cost decrease beginning in fiscal year 2005. DOD policy establishes private sector housing as the primary source of housing for military personnel within the United States. However, the lack of suitable available housing in the community, along with housing requirements for key personnel—such as GFOs—may require military housing on an installation. The number of GFOs residing in DOD-owned or DOD-leased properties decreased from 790 in fiscal year 2001 to 240 in fiscal year 2013. GFO Commercial Travel and Per Diem Costs from Fiscal Years 2009 through 2013 GFO commercial travel and per diem costs, adjusted for inflation, grew from $25.9 million in fiscal year 2009 to $34.6 million in fiscal year 2012, before decreasing to $27.0 million in fiscal year 2013—for an overall increase of approximately 4 percent. DOD uses commercial providers to transport military personnel, including GFOs. Costs for GFO commercial travel, including airfare, meals and incidentals, lodging, rental cars, and mileage, were available from the Defense Travel System for fiscal years 2009 through 2013. According to an official from the Defense Logistics Agency, the recent decline in GFO commercial travel and per diem costs is attributable in part to the effects of sequestration, and in part to DOD’s efforts to reduce spending on travel and conferences. As previously discussed, cost data prior to 2009 were not included in our analysis because pre-2009 data in the Defense Travel System were incomplete and not captured in a consistent manner. GFO Health Care Costs from Fiscal Years 2003 through 2013 GFO health care costs, adjusted for inflation, rose from approximately $11.7 million in fiscal year 2003 to approximately $20.6 million in fiscal year 2013, an overall increase of 77 percent. Per capita health care costs increased from $12,744 in fiscal year 2003 to $20,179 in fiscal year 2013, an increase of 58 percent.
GFOs, like all active duty military personnel, have access to health care provided through TRICARE. GFO health care under TRICARE includes direct care (i.e., medical care provided by the U.S. military health system), purchased care (i.e., medical care provided by private sector providers in the networks outside of the military health system), and pharmacy costs for both GFOs and their dependents. GFOs also have access to health care at the Executive Medicine Clinic at Walter Reed National Military Medical Center and the Executive Services Health and Wellness Clinic at Fort Belvoir Community Hospital. According to these organizations, the clinics ensure availability, security, and confidentiality for military and government executives, including GFOs, select executive branch civilian executives, members of the U.S. Congress, and foreign dignitaries. GFO health care costs tracked by the Military Health System Data Repository, such as those costs associated with executive medicine clinics, are included in this report; however, the data repository is not structured to allow executive medicine clinic costs specifically associated with GFOs to be separately extracted. As mentioned earlier in this report, we determined that costs prior to fiscal year 2003 were not available because Defense Health Agency officials told us that the TRICARE Management Activity transitioned to a new software system at that time and legacy data were not migrated into the new system. The increase in per capita health care costs is consistent with growth in overall health care costs in the Military Health System. We reported in November 2013 that DOD’s Military Health System costs have grown from $19 billion in fiscal year 2001 to the fiscal year 2014 budget request of $49.4 billion. 
An advisory committee to the Secretary of Defense has cited increased utilization of services, increasingly expensive technology and pharmaceuticals, and the aging of the retiree population as reasons for increasing health care costs. We have reported on DOD’s military health governance structure and the need for sustained senior leadership to achieve desired cost savings; we found that DOD senior leadership has demonstrated a commitment to oversee implementation of its military health system’s reform and has taken a number of actions to enhance reform efforts. GFO Official Representation Costs from Fiscal Years 2008 through 2013 for the Air Force, Marine Corps, and Navy Total official representation costs across the Air Force, the Marine Corps, and the Navy dropped 29 percent from fiscal years 2008 through 2013. GFOs, along with members of the Senior Executive Service, have the authority to use official representation funds to host guests of the United States and DOD, such as civilian or military dignitaries and officials of foreign governments. We received complete official representation costs from the Air Force, the Marine Corps, and the Navy for fiscal years 2008 through 2013. The Army provided cost data for fiscal years 2001 through 2013, but these data were determined to be unreliable because of anomalies identified across multiple years. The official representation cost data were reported in aggregate, by service, and were therefore not attributable specifically to GFOs. As shown in table 3, inflation-adjusted official representation costs across the Navy, the Marine Corps, and the Air Force remained relatively constant from fiscal years 2008 through 2012, before dropping in fiscal year 2013. A Navy official attributed the decrease in Navy representation costs to the fiscal year 2013 budget sequester.
GFO Executive Training Costs from Fiscal Years 2001 through 2013 From fiscal years 2001 through 2013, the cost of GFO-specific training courses declined from $2.4 million to $1.8 million, a decrease of about 27 percent. Military officers, including GFOs, are encouraged to enroll in courses to further their military education. Two of these courses, CAPSTONE and PINNACLE, are specific to GFOs and are administered by the National Defense University. These courses are designed to prepare senior officers of the U.S. armed forces for high-level joint, interagency, intergovernmental, and multinational responsibilities. CAPSTONE is a statutorily mandated course for newly selected GFOs administered over 5 weeks involving travel to domestic and international locations. The course objective is to make these individuals more effective in planning and employing U.S. forces in joint and combined operations. PINNACLE is a 1-week classroom course designed to prepare GFOs for senior political and military positions and command of joint and coalition forces at the highest level. On average, according to DOD data, CAPSTONE courses cost $15,684 per student from fiscal years 2001 through 2013, and PINNACLE courses cost $4,067 per student from fiscal years 2008 through 2013. Officials from the National Defense University told us that costs declined because the length of the CAPSTONE course was changed from 6 to 5 weeks beginning in 2012. Aide Compensation in the Army and the Air Force from Fiscal Years 2003 through 2013 Compensation costs for enlisted aides in the Army and Air Force—the number of which is authorized by Congress—grew 10 percent from fiscal years 2003 through 2013, from approximately $14.1 million to $15.5 million. Enlisted aide compensation costs within each of these services fluctuated from year to year but also increased overall during this period. This is due in part to fluctuations in the overall number and rank distribution of enlisted aides within each service.
Specifically, during this period the number of enlisted aides in the Army fluctuated from 70 to 97, while the number of Air Force enlisted aides ranged from 73 to 88. From fiscal years 2003 through 2013, on average, the ratio of enlisted aides to GFOs in the Army was 1:4, and the ratio of enlisted aides to GFOs in the Air Force was 1:3.5. Adjusting for changes in the aide population, we estimate that the average per capita compensation cost for enlisted aides in both the Army and the Air Force increased from $84,519 to $100,028, an increase of 18 percent. Figure 7 shows the percentage change in total and per capita enlisted aide costs from fiscal years 2003 through 2013 for the Army and the Air Force. Costs for the Marine Corps and the Navy could not be determined because the Marine Corps could not provide data by fiscal year and the Navy was able to provide only 2 years of enlisted aide personnel data. Compensation costs for officer aides in the Army and the Air Force grew 8 percent from fiscal years 2003 through 2013, from approximately $28.9 million to $31.1 million. Officer aide compensation costs within each of these services also generally increased during these years but fluctuated depending on the number and distribution of aides across ranks. The number of officer aides in the Air Force ranged from 22 to 47, while the number of officer aides in the Army ranged from 191 to 228. Adjusting for changes in population, we estimate that the average per capita compensation cost for officer aides in both the Army and the Air Force increased from $125,759 to $140,288, an increase of 12 percent. Figure 8 shows the percentage change in total and per capita officer aide costs from fiscal years 2003 through 2013 for the Army and the Air Force. As previously mentioned, costs for the Marine Corps and the Navy could not be determined because these services did not track the number of officer aide personnel. 
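One plausible reading of "adjusting for changes in the aide population" is dividing each year's total compensation cost by that year's aide head count before comparing endpoints. A hedged sketch of that calculation; the head counts below are hypothetical (the totals match the reported combined Army and Air Force enlisted aide figures, but the report does not publish the yearly counts used):

```python
def per_capita(total_cost, head_count):
    # Per capita compensation for one fiscal year: total cost / head count
    return total_cost / head_count

# Reported totals with hypothetical combined head counts for illustration
fy2003_pc = per_capita(14_100_000, 160)   # -> 88,125 dollars per aide
fy2013_pc = per_capita(15_500_000, 155)   # -> 100,000 dollars per aide
growth_pct = round((fy2013_pc - fy2003_pc) / fy2003_pc * 100)   # -> 13
```

Because head counts fluctuate year to year, the per capita trend can differ noticeably from the total cost trend, which is why the report presents both.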
Aide Travel and Per Diem for the Air Force from Fiscal Years 2007 through 2013 The travel and per diem costs for enlisted and officer aides in the Air Force rose from approximately $862,000 in fiscal year 2007 to approximately $1.3 million in fiscal year 2010, before decreasing to approximately $768,000 in fiscal year 2013—an overall decrease of approximately 11 percent. As mentioned previously, the Army, Navy, and Marine Corps did not have enlisted and officer aide population data of sufficient detail to query the Defense Travel System for travel costs associated with those aides. Conclusions Given the federal government’s continuing fiscal challenges, it is more important than ever that Congress, the administration, and managers at DOD have reliable, useful, and timely financial information to help ensure fiscal responsibility and demonstrate accountability, particularly for the elite leaders responsible for planning and implementing military operations across the department. As DOD officials continue to manage implementation of the fiscal year 2013 budget sequester, GFOs and related support costs are one of the areas in which efficiencies and reductions are being considered. However, DOD officials have voiced concern about reductions in this area and the need to retain flexibilities in staffing GFOs because of additional commands, joint responsibilities, and the potential to respond to future contingencies. These competing needs underscore the importance of DOD periodically conducting a comprehensive update of GFO requirements to efficiently manage the elite leaders of the military while also providing validation for existing and emerging needs. Moreover, if DOD had complete information on the costs of people and resources required to support the GFO corps (such as aide compensation and travel costs), the department would have an improved ability to manage these resources. 
More specific and complete definitions of GFO- and aide-associated costs would better position DOD to have more detailed and readily available information on these costs to help decision makers in DOD and Congress balance resource priorities in a fiscally challenging environment. As the department realigns itself to address new challenges, full awareness of the GFO requirements and costs would help the department to provide congressional decision makers with the information needed for effective oversight and help ensure the efficient use of resources. Recommendations for Executive Action We are making five recommendations to help DOD improve management of GFO requirements and collect more detailed information on associated costs. To determine the number of GFOs required for DOD’s mission, we recommend that the Secretary of Defense take the following action: Direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the secretaries of the military departments, to conduct a comprehensive update for GFO requirements by identifying, assessing, and validating positions that the department believes should be filled by GFOs, and define the circumstances under which subsequent periodic updates should occur. The update should include an assessment of whether GFO statutory limits are sufficient to meet GFO requirements and the impact of any shortfall on the department’s mission. To help improve the definition and availability of costs associated with GFOs and aides, we recommend that the Secretary of Defense take the following four actions: Direct the Under Secretary of Defense for Personnel and Readiness to finalize the enlisted aide population data biannual reporting requirement in the revised DOD Instruction 1315.09, define the position of officer aide, and require the military departments to report on officer aide population data.
Direct the Director, Cost Assessment and Program Evaluation, in coordination with the Under Secretary of Defense for Personnel and Readiness and the secretaries of the military departments, to define the costs that could be associated with GFOs—such as security details—for the purpose of providing a consistent approach to estimating and managing the full costs associated with GFOs. Agency Comments and Our Evaluation We requested comments on a draft of this product from the Department of Defense. On August 29, 2014, the Deputy Director, Military Compensation Policy, in the Office of the Under Secretary of Defense for Personnel and Readiness, provided DOD’s comments in an email. DOD also provided technical comments, which have been included as appropriate. In summary, DOD partially concurred with four of the five recommendations, and did not concur with one. DOD partially concurred with the first recommendation to conduct a comprehensive update for GFO requirements and define the circumstances under which periodic updates should occur. In its comments, DOD stated that the recommendation is prudent given the changing operational environment and requirements. However, DOD also stated that defining the circumstances of a periodic review would impede the department’s ability to provide flexibility on future requirements and engagements. The intent of the recommendation was to help ensure that regular updates of GFO requirements are conducted—as the report notes, DOD last completed a comprehensive update of GFO requirements in 2003. We believe that DOD can define the circumstances for updating GFO requirements while retaining the ability to provide flexibility on future requirements and engagements, for example, by specifying conditions that may require a change to the frequency or scope of requirements updates.
We continue to believe that fully addressing this recommendation by defining the circumstances under which periodic updates should occur would allow the department to efficiently manage the GFO population while also providing validation for existing and emerging needs. DOD partially concurred with the second, third, and fourth recommendations to establish guidance to finalize the enlisted aide population data biannual reporting requirement in the revised DOD Instruction 1315.09, define the position of officer aide, and require the military departments to report on officer aide population data. In its comments, DOD stated that the department concurred with providing biannual reports on enlisted aides and is taking steps to incorporate a new requirement for reports in the updated instruction governing the use of enlisted aides. As noted in the report, DOD has not established a time frame for completing the revision. DOD also stated that officer aide assignments are more along the lines of professional development and staff officer experience, allowing the services flexibility to ensure a broad scope of professional development in an operational or training environment. Further, DOD stated that the department will continue to allow the services to manage officer aides at the local level. However, as noted in the report, due to the lack of a department-wide definition for officer aides, the military services were not able to provide consistent data for these personnel, including cost data. Moreover, the report notes that the lack of a definition for officer aides is preventing DOD from having visibility over the costs associated with those personnel, and from including such costs when calculating the full costs of the GFO population. 
We continue to believe that defining the position of officer aide and requiring the military departments to report on the number of personnel assigned to these positions would improve the availability of cost information associated with the GFO population. Finally, DOD did not concur with the fifth recommendation to define the costs associated with GFOs—such as security details—for the purpose of providing a consistent approach to estimating the full costs associated with GFOs. We modified the recommendation first provided to DOD in our draft report in response to technical comments from the Cost Assessment and Program Evaluation office. Specifically, the original recommendation was for the Director, Cost Assessment and Program Evaluation, in coordination with the Under Secretary of Defense for Personnel and Readiness and the secretaries of the military departments, to define the costs associated with GFOs—such as security details and other relevant costs. We modified this recommendation to include the clause “…costs that could be associated with GFOs…” (emphasis added) and to delete the phrase “and other relevant costs.” We made this change because officials from the Cost Assessment and Program Evaluation office told us that the original recommendation did not provide them with needed flexibility to decide which GFO associated costs to define. However, DOD did not concur with the modified recommendation. In its comments, DOD stated that the department did not agree with the recommendation for two reasons. First, DOD stated it already defines the full manpower costs associated with GFOs in DOD Instruction 7041.04 (Estimating and Comparing the Full Costs of Civilian and Active Duty Military Manpower and Contract Support). Second, DOD stated that security details and aides assigned to GFOs are managed by the services and are included in their personnel costs, and that those billets should not be included with the costs associated with GFOs. 
As stated in the report, certain costs associated with GFOs—such as security details and officer aides—were unavailable because the cost elements were not defined. For example, we reported that, according to DOD officials, the military services do not report security detail costs consistently because without a department-wide definition such costs could include compensation, travel, equipment, weapons, and vehicles. Moreover, the Federal Accounting Standards Advisory Board has noted that a cost methodology should include any resources directly or indirectly used to perform work, and GFOs often rely upon security details and officer aides to carry out their responsibilities. The report also stated that DOD Instruction 7041.04, and the accompanying tool developed by CAPE, are intended to collect cost elements required to estimate the full cost of military and civilian personnel. In commenting on the draft report, DOD did not address the issue of using these full costs to estimate and compare personnel costs and to support workforce allocation decisions, for example, when deciding whether a function should be performed by civilian or military personnel. Without a department-wide definition of the costs associated with GFOs—such as security details, and other relevant costs—the department is unable to include the full costs of GFOs as specified by DOD Instruction 7041.04 when making workforce allocation decisions. As a result, we continue to believe that formulating a consistent definition of the costs associated with GFOs—such as security details, and other relevant costs—for purposes of making specific comparisons of individual functions would enhance the department’s ability to consistently estimate the full costs associated with GFOs.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the Air Force, the Army, and the Navy, and the Director, Cost Assessment and Program Evaluation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Scope and Methodology To identify any changes in the population and statutory limits for active duty general and flag officers (GFO) relative to other active duty personnel for fiscal years 2001 through 2013, we reviewed the relevant U.S. Code provisions (10 U.S.C. §§ 525 and 526) as applicable and amended for each of these years, and obtained, analyzed, and assessed the reliability of end strength population data from the Defense Manpower Data Center. We included population data from fiscal years 2001 through 2013 in our review to be consistent with the time frame included in our review of trends in GFO costs. The scope of our analysis included only those personnel on active duty; we did not include reserve component personnel in our review. We also obtained more detailed GFO population data from the Department of Defense (DOD) for fiscal years 2011 through 2013 and analyzed these data for contextual information, such as the exemptions the services applied to GFO positions that were above statutory limits. 
To determine the extent to which DOD updated GFO requirements since 2003, we met with officials at the Office of the Secretary of Defense and the military services responsible for GFO management, reviewed relevant GAO and DOD studies, conducted trend analyses and discussed reasons for population changes, and reviewed relevant DOD policies and guidance. We assessed the information we collected against DOD guidance that requires military and civilian personnel resources to be programmed in accordance with validated requirements that are periodically reevaluated, and a best practices model for strategic human capital management. To assess what is known about the costs associated with the active duty GFO population and their aides and any trends in such costs from fiscal years 2001 through 2013, including trends in GFO compensation relative to that of other active duty personnel, we identified the relevant DOD, component, and service offices responsible for managing active duty GFOs and enlisted and officer aides, along with related military personnel and operations and maintenance costs, such as compensation, housing, and travel; met with officials at these offices to determine the extent to which data were available; obtained and assessed the reliability of available data; and analyzed available data, when possible. We identified criteria in DOD guidance, federal internal control standards, and federal accounting standards that specify the need for full and reliable manpower and other cost data for the purpose of making operating decisions and allocating resources, and assessed the availability of GFO cost data against those criteria. We defined the GFO-related costs included in our scope as those costs that were (1) specified in the congressional reports that mandated our work, (2) consistent with prior GAO work on military compensation, or (3) determined as GFO-related costs based on discussions with knowledgeable officials. 
To determine the extent to which data on GFO-related costs were available, we categorized each cost element as “complete,” “partially complete,” or “unavailable” based on the availability, completeness, and reliability of data provided for fiscal years 2001 through 2013. Cost data that were determined to be “complete” were readily available for all of the years included in our review. Cost data that were determined to be “partially complete” were not available or not reliable for some, but not all, of those years. “Unavailable” cost data were either not readily available or were determined to be unreliable across all of the years included in our review. Specifically, we took the following steps to obtain, analyze, and assess the reliability of GFO and aide cost data: Compensation costs (GFOs, aides, non-GFO officers, and enlisted personnel) We used DOD’s composite standard pay rates to calculate the cost of compensation provided to active duty GFOs and enlisted and officer aides from fiscal years 2001 through 2013. The composite rates are used by DOD when determining the military personnel appropriations cost for budget and management studies. The rates include average basic pay plus retired pay accrual; Medicare-eligible retiree health care accrual; basic allowances for housing and subsistence; incentive, miscellaneous, and special pays; and permanent change of station expenses. The composite rates pool these cost elements together; as such, trends for discrete compensation cost elements could not be determined from these data. Our previous work has recognized that there are multiple ways of estimating the compensation costs of military personnel depending on the mix of cash, such as basic pay; noncash benefits, like health care; and deferred compensation, such as retirement pension.
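The composite standard pay rate pools the elements listed above into a single annual per capita figure, so a population's total compensation cost is the rate times average strength. A minimal sketch of that calculation; the element values below are invented for illustration and are not DOD's published rates:

```python
# Hypothetical composite-rate elements for one pay grade, in annual FY dollars
elements = {
    "basic_pay_plus_retired_pay_accrual": 170_000,
    "medicare_eligible_retiree_health_accrual": 15_000,
    "housing_and_subsistence_allowances": 40_000,
    "incentive_misc_and_special_pays": 8_000,
    "permanent_change_of_station": 5_000,
}
composite_rate = sum(elements.values())   # annual per capita cost: 238,000
total_cost = composite_rate * 120         # times an average strength of 120
```

Because only the pooled rate is available, the trend in any single element (say, the housing allowance alone) cannot be recovered from these data, as noted above.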
Although studies have used varying approaches to examine military compensation, certain elements of compensation are commonly incorporated into these assessments—for example, cash compensation beyond basic pay to include housing and subsistence allowances, the federal income tax advantage, and special and incentive pays. We used DOD’s composite rates to calculate compensation in part because they allowed us to obtain data specific to the GFO population over the time frame covered by our review (GAO-10-561R). We were not able to calculate compensation costs for officer and enlisted aides across all the services because of the limited availability of aide population data. We normalized all compensation costs that we were able to calculate to fiscal year 2013 dollars by using the employment cost index published by the U.S. Bureau of Labor Statistics. DOD’s composite rate uses a standard factor to calculate deferred costs (i.e., pension and retiree health care costs) for all active duty personnel, which does not account for the significant differences in retirement rates across ranks that we have previously reported. For example, we reported in 2005 that an estimated 15 percent of new enlisted personnel and 47 percent of new officers become eligible to receive pensions and retiree health care. Also, GFOs generally have already completed 20 years of service at the time of their promotion to the GFO ranks, and as such the retirement rate of GFOs is closer to 100 percent. Therefore, DOD’s approach to calculating the composite rate may significantly undercount the cost of providing retirement benefits to GFOs. We also used data supplied by the Office of the Under Secretary of Defense (Personnel and Readiness) to estimate the tax expenditure resulting from a portion of GFO compensation being tax exempt. The value of tax-exempt allowances (basic allowance for subsistence, basic allowance for housing) depends on a servicemember’s pay grade, years of service, and number of dependents.
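A hedged sketch of such a tax-expenditure estimate: for each pay grade, years-of-service, and dependents cell, the forgone revenue is roughly the tax-exempt allowances times the applicable marginal rate, averaged with weights equal to each cell's share of the population. All allowance amounts, rates, and shares below are hypothetical, not the NBER-derived estimates the report relied on:

```python
# (annual tax-exempt allowances, marginal tax rate, share of GFO population)
# -- hypothetical cells for illustration only
cells = [
    (32_000, 0.30, 0.60),
    (36_000, 0.33, 0.30),
    (40_000, 0.35, 0.10),
]
assert abs(sum(share for _, _, share in cells) - 1.0) < 1e-9  # weights sum to 1

# Forgone revenue per member in a cell ~= allowances * marginal rate;
# the population-weighted sum is the per capita tax expenditure.
per_capita_tax_expenditure = sum(
    allowances * rate * share for allowances, rate, share in cells
)
```

In practice the estimate compares tax liability with and without the allowances included in taxable income, which differs from a flat marginal-rate approximation when an allowance straddles a bracket boundary.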
To estimate the tax expenditure for each combination of pay grade (within the range of GFO pay grades), years of service, and number of dependents, we estimated the amount paid in taxes if the allowances are taxed and if they are not taxed, using estimates of applicable marginal tax rates supplied by the National Bureau of Economic Research. A weighted average of the differences in tax liability was then created based on the proportion of GFOs in each pay grade, years of service, and number of dependents category. The tax expenditure is the tax revenue forgone by the federal government under the policy of not taxing the basic allowance for housing and the basic allowance for subsistence. GFO commercial travel and per diem. GFO cost data from 2009 through 2013 were reported in aggregate, by fiscal year, so we were not able to assess trends across the military services. GFO travel costs prior to fiscal year 2009 were not available because not all military services transitioned to the Defense Travel System at the same time and not all legacy data were migrated to the new system. Aide travel and per diem cost data reported by the Air Force covered both enlisted and officer aides from fiscal years 2007 through 2013. The Marine Corps reported incomplete aide travel costs for personnel who were still on active duty, which we excluded from our analysis, while the Army and the Navy did not report aide travel costs. The travel costs we analyzed were normalized to fiscal year 2013 dollars using the gross domestic product price index published by the U.S. Department of Commerce, Bureau of Economic Analysis. GFO military/government air travel costs. We requested but did not obtain costs for GFO travel on military aircraft for fiscal years 2001 through 2013 from the military services, U.S. Transportation Command, the Defense Logistics Agency, and the U.S. Army Corps of Engineers. It is DOD policy that these data be retained for a period of 2 years, but cost data were not readily available.
Ancillary costs associated with these flights, such as lodging, were included in our analysis of commercial travel data. GFO housing. We obtained data on the costs of housing units used as quarters for GFOs from fiscal years 2001 through 2013 from the annual GFOs’ quarters expenditure reports produced by the Office of the Under Secretary of Defense (Comptroller) and a supplementary data set covering fiscal year 2001 Defense Logistics Agency costs, also provided by the Comptroller’s office. These costs include operations, maintenance, utility, lease, and repair costs associated with DOD-owned and leased properties, as well as certain costs associated with privatized housing. The housing costs we analyzed were normalized to fiscal year 2013 dollars using the gross domestic product price index published by the U.S. Department of Commerce, Bureau of Economic Analysis. GFO health care. We obtained data on the costs of health care provided to active duty GFOs and their dependents from fiscal years 2003 through 2013 from the Defense Health Agency. These costs include direct care (i.e., medical care provided by the U.S. military health care system), purchased care (i.e., medical care provided by private sector providers in the networks outside of the military health care system), and pharmacy costs for both GFOs and their dependents. These data include GFO health care costs incurred at the Executive Medicine Clinic at Walter Reed National Military Medical Center and the Executive Services Health and Wellness Clinic at Fort Belvoir Community Hospital. However, while costs associated with executive medicine are included in the health care cost data we obtained, executive medicine costs associated with GFOs could not be separated from costs associated with other individuals eligible for executive medicine care and are therefore not separately reported. Cost data were normalized to fiscal year 2013 dollars using the gross domestic product price index published by the U.S.
Department of Commerce, Bureau of Economic Analysis. We determined that costs prior to fiscal year 2003 were not reliable because legacy data were not migrated into the new system when data formats in the Defense Enrollment Eligibility Reporting System were changed in fiscal year 2003. GFO security details. We obtained data on the costs of security details provided to GFOs from fiscal years 2001 through 2013 from 27 different DOD organizations, including combatant commands, services, and other DOD organizations. These data included costs such as pay, travel, and equipment associated with personnel assigned to protective security details. Some of the DOD organizations provided fiscal year expenditures to us, while others reported a total sum of costs over a period covering several fiscal or calendar years. DOD officials told us that the costs were likely underreported. As a result, these data were excluded from our analysis. GFO official representation. We obtained complete official representation costs from the Air Force, Navy, and Marine Corps for fiscal years 2008 through 2013. The Army provided cost data from fiscal years 2001 through 2013, but data for certain years were inaccurate and the data set was therefore deemed unreliable for our purposes. The Navy also provided cost data from fiscal years 2001 through 2007, but these data were excluded from our department-wide analysis in order to present a consistent range of data. Official entertainment and representation cost data were reported in aggregate by service. As a result, costs attributable to GFOs could not be separated from those attributable to senior civilian officials. The cost data were normalized to fiscal year 2013 dollars using the gross domestic product price index published by the U.S. Department of Commerce, Bureau of Economic Analysis. GFO executive training. 
We obtained costs for the CAPSTONE and PINNACLE training courses, which are specific to GFOs and are administered by the National Defense University. The National Defense University provided complete cost data for CAPSTONE from fiscal years 2001 through 2013. Cost data for PINNACLE were provided for fiscal years 2005 through 2013, but data for fiscal years 2005 through 2007 were excluded from our analysis because during this time the National Defense University combined costs for PINNACLE with another course (KEYSTONE). As such, we determined that PINNACLE data from fiscal years 2005 through 2007 were unreliable for our purposes. The cost data were normalized to fiscal year 2013 dollars using the gross domestic product price index published by the U.S. Department of Commerce, Bureau of Economic Analysis. To assess the reliability of population and cost data obtained for GFOs and other active duty personnel—including non-GFO officers, enlisted personnel, and officer and enlisted aides—we interviewed, corresponded with, or administered questionnaire(s) to ascertain the process by which data were compiled and to identify data aggregation, storage, reporting, and quality control processes. We also independently assessed the completeness of the data and coordinated with the appropriate officials to resolve any data anomalies, and reviewed available documentation describing the processes or system(s) used to house relevant data and to identify issues material to data reliability. We did not assess all of the processes used to create or assemble certain data, such as data sets assembled by DOD officials from multiple sources. We determined that certain data were not sufficiently reliable for the purpose of our review based on the presence of significant errors or incompleteness in some or all of the key data elements, or because using the data might lead to an incorrect or unintentional message. As discussed above, those data were excluded from our analyses. 
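Throughout this appendix, nominal costs are normalized to fiscal year 2013 dollars using the GDP price index. A minimal sketch of that normalization follows; the index values here are hypothetical, not actual Bureau of Economic Analysis figures.

```python
# Sketch of normalizing nominal costs to fiscal year 2013 dollars with a
# price index, as described above. Index levels are hypothetical, not
# actual Bureau of Economic Analysis data.

gdp_price_index = {2001: 78.0, 2013: 104.0}  # hypothetical index levels

def to_fy2013_dollars(nominal_cost: float, fiscal_year: int) -> float:
    """Scale a nominal cost to FY 2013 dollars using the price index ratio."""
    return nominal_cost * gdp_price_index[2013] / gdp_price_index[fiscal_year]

print(to_fy2013_dollars(1_000_000, 2001))  # ≈ 1,333,333 in FY 2013 dollars
```

Expressing every fiscal year's costs in the same year's dollars is what makes the multi-year trend comparisons in this report meaningful.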
We believe the cost and population data that we did include in this report are sufficiently reliable for the purposes of assessing GFO and other active duty population trends and for determining the costs associated with GFOs and other active duty personnel from fiscal years 2001 through 2013. In addressing both of our audit objectives, we contacted officials from the following DOD organizations:
Office of the Secretary of Defense, Cost Assessment and Program Evaluation
Office of the Under Secretary of Defense for Personnel and Readiness
Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs
The Joint Chiefs of Staff, Directorate for Logistics (J-4)
National Defense University
Defense Manpower Data Center
Defense Health Agency
Office of the Under Secretary of Defense (Comptroller)
Defense Logistics Agency
Defense Travel Management Office
U.S. Army
U.S. Navy
U.S. Marine Corps
U.S. Air Force
We conducted this performance audit from November 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov. Staff Acknowledgments In addition to the contact named above, Margaret A. Best (Assistant Director), Timothy J. Carr, Grace Coleman, Ryan D’Amore, Foster Kerrison, Michael Silver, Amie Steele, and Sabrina Streagle made key contributions to this report. Related GAO Products
Defense Headquarters: DOD Needs to Reevaluate Its Approach for Managing Resources Devoted to the Functional Combatant Commands. GAO-14-439. Washington, D.C.: June 26, 2014.
Defense Headquarters: Guidance Needed to Transition U.S. Central Command’s Costs to the Base Budget. GAO-14-440. Washington, D.C.: June 9, 2014.
Human Capital: Opportunities Exist to Further Improve DOD’s Methodology for Estimating the Costs of Its Workforces. GAO-13-792. Washington, D.C.: September 25, 2013.
Financial and Performance Management: More Reliable and Complete Information Needed to Address Federal Management and Fiscal Challenges. GAO-13-752T. Washington, D.C.: July 10, 2013.
Defense Headquarters: DOD Needs to Periodically Review and Improve Visibility of Combatant Commands’ Resources. GAO-13-293. Washington, D.C.: May 15, 2013.
Defense Headquarters: Further Efforts to Examine Resource Needs and Improve Data Could Provide Additional Opportunities for Cost Savings. GAO-12-345. Washington, D.C.: March 21, 2012.
Military Cash Incentives: DOD Should Coordinate and Monitor Its Efforts to Achieve Cost-Effective Bonuses and Special Pays. GAO-11-631. Washington, D.C.: June 21, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Questions for the Record Related to Military Compensation. GAO-10-803R. Washington, D.C.: June 3, 2010.
Military Personnel: Military and Civilian Pay Comparisons Present Challenges and Are One of Many Tools in Assessing Compensation. GAO-10-561R. Washington, D.C.: April 1, 2010.
Military Personnel: Military Departments Need to Ensure That Full Costs of Converting Military Health Care Positions to Civilian Positions Are Reported to Congress. GAO-06-642. Washington, D.C.: May 1, 2006.
Military Personnel: DOD Needs to Improve the Transparency and Reassess the Reasonableness, Appropriateness, Affordability, and Sustainability of Its Military Compensation System. GAO-05-798. Washington, D.C.: July 19, 2005.
Military Personnel: DOD Could Make Greater Use of Existing Legislative Authority to Manage General and Flag Officer Careers.
GAO-04-1003. Washington, D.C.: September 23, 2004.
Military Personnel: General and Flag Officer Requirements Are Unclear Based on DOD’s 2003 Report to Congress. GAO-04-488. Washington, D.C.: April 21, 2004.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
GFOs are the elite leaders of the U.S. military. In August 2013 Congress raised questions about costs associated with GFOs as the size of the military forces decreases. GAO was mandated to assess the trends in costs of the active duty GFO population from FY 2001 through FY 2013. This report (1) identifies changes in the population and statutory limits for active duty GFOs relative to other active duty personnel, and the extent to which DOD updated GFO requirements, and (2) assesses what is known about the costs associated with active duty GFOs and their aides and trends in such costs, including trends in GFO compensation costs relative to those of other active duty personnel from FY 2001 through FY 2013. GAO assessed the availability of cost data and analyzed available active duty military personnel population and cost data, including costs for compensation, housing and travel for FY 2001 through FY 2013, using FY 2013 dollars. GAO also met with DOD officials. The general and flag officer (GFO) population (i.e., officers ranked at or above brigadier general or rear admiral) experienced higher rates of growth than the enlisted population since fiscal year (FY) 2001. The Department of Defense (DOD) has not comprehensively updated GFO requirements—the number of GFOs needed to fill positions—since 2003 to reflect changes in the active duty force nor has DOD defined circumstances under which such an update should occur. GFO population growth was generally consistent with the growth in GFO statutory limits. From FY 2001 through FY 2013 growth was not evenly distributed across all military ranks. For example, the GFO and non-GFO officer populations grew from 871 to 943 (8 percent) and from 216,140 to 237,586 (10 percent), respectively, while the enlisted population decreased from 1,155,344 to 1,131,281 (2 percent). DOD officials attributed these differences to the greater flexibility that military planners have to decrease the enlisted population. 
DOD guidance requires military personnel requirements to be periodically evaluated. DOD conducted a comprehensive update of GFO requirements in 2003 and concluded that the department needed more GFOs than were authorized by Congress. However, DOD officials said that they have not comprehensively updated the requirements since 2003 or advocated for an increase of GFOs because of fiscal constraints. Nevertheless, without periodically conducting a comprehensive update of DOD's GFO requirements, and defining when such an update should occur, it will be difficult for DOD to help ensure that the GFO population is properly sized and structured to meet its assigned missions. The full cost to DOD for GFOs from FY 2001 through FY 2013 is unknown because complete cost data for GFOs and their aides were not available and trends in available cost data varied. Certain cost data were fully available and complete for FY 2001 through FY 2013, while other cost data were either partially complete or unavailable because of reporting practices, retention policies, inconsistent definitions, and reliability factors. Also, the position of officer aide is not defined in departmental guidance and, as a result, the military services were not able to consistently track the number of personnel in these positions. Cost data related to GFO compensation and housing were readily available, and trends for these costs varied, with compensation increasing by 38 percent and housing decreasing by 67 percent from FY 2001 through FY 2013. Measured on a per capita basis, compensation costs grew by 18 percent for GFOs, 19 percent for non-GFO officers, and 32 percent for enlisted personnel over the same time frame. GAO assessed GFO commercial travel and per diem and GFO health care costs as partially complete because data were not available for FY 2001 through FY 2013.
For the years in which complete data were available, travel and per diem costs increased by 4 percent from FY 2009 through FY 2013 and health care costs grew by 77 percent from FY 2003 through FY 2013. Other cost data, including data for GFO travel on military and government flights, GFO personal security details, and certain enlisted and officer aide costs, were not readily available or GAO determined them to be unreliable because of concerns regarding completeness or accuracy. By defining the officer aide position and identifying GFO and associated aide costs, DOD will be able to better account for the full costs of GFOs and improve its ability to make sound workforce allocation decisions.
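The per capita comparisons above reduce to simple arithmetic: divide total cost by head count in each fiscal year, then take the percent change. A sketch follows; the population figures echo the GFO counts cited above, but the cost totals are invented placeholders, not GAO data.

```python
# Sketch of the per capita cost growth comparison described above.
# Cost totals are placeholders; only the method is illustrated.

def per_capita_growth(cost_start, pop_start, cost_end, pop_end):
    """Percent change in per capita cost between two fiscal years."""
    start = cost_start / pop_start
    end = cost_end / pop_end
    return (end - start) / start * 100

# Hypothetical FY 2001 vs. FY 2013 cost totals (in millions) for a group
# whose head count grew from 871 to 943, as the GFO population did:
print(round(per_capita_growth(100.0, 871, 130.0, 943), 1))  # 20.1
```

Note that when the population grows faster than total cost, per capita cost can fall even as aggregate spending rises, which is why the report distinguishes the two measures.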
Background The federal government spends more than $80 billion on IT annually, with more than $2 billion of that amount spent on acquiring cloud-based services. This amount is expected to rise in coming fiscal years, according to OMB. A goal of these investments is to improve federal IT systems by replacing aging and duplicative infrastructure and systems that are costly and difficult to maintain. Cloud computing helps do this by giving agencies the ability to purchase a broad range of IT services in a utility-based model that allows an agency to pay for only the IT services it uses. According to NIST, an application should possess five essential characteristics to be considered cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Essentially, cloud computing applications are network-based and scalable on demand. According to OMB, cloud computing is economical, flexible, and fast:
Economical: cloud computing can be a pay-as-you-go approach, in which a low initial investment is required to begin and additional investment is needed only as system use increases.
Flexible: IT departments that anticipate fluctuations in user demand no longer need to scramble for hardware and software to meet increasing need. With cloud computing, capacity can be added or subtracted quickly.
Fast: cloud computing eliminates long procurement and certification processes, while providing a wide selection of services.
In addition, according to NIST, cloud computing offers three service models:
Infrastructure as a service—the agency has the capability to provision processing, storage, networks, and other fundamental computing resources and run its own software, including operating systems and applications.
The agency does not manage or control the underlying infrastructure but controls and configures operating systems, storage, deployed applications, and possibly, selected networking components (e.g., host firewalls). Platform as a service—the agency deploys its own or acquired applications created using programming languages and tools supported by the provider. The agency does not manage or control the underlying infrastructure, but controls and configures the deployed applications. Software as a service—the agency uses the service provider’s applications, which are accessible from various client devices through an interface such as a Web browser (e.g., Web-based e-mail system). The agency does not manage or control the underlying infrastructure or the individual application capabilities. As can be seen in figure 1, each service model offers unique functionality, with consumer control of the environment decreasing from infrastructure to platform to software. NIST has also defined four deployment models for providing cloud services: private, community, public, and hybrid. In a private cloud, the service is set up specifically for one organization, although there may be multiple customers within that organization and the cloud may exist on or off the customer’s premises. In a community cloud, the service is shared by organizations with similar requirements. The cloud may be managed by the organizations or a third party and may exist on or off an organization’s premises. A public cloud is available to the general public and is owned and operated by the service provider. A hybrid cloud is a composite of two or more other deployment models (private, community, or public) that are bound together by standardized or proprietary technology. According to federal guidance, these deployment models determine the number of consumers and the nature of other consumers’ data that may be present in a cloud environment. 
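The decreasing consumer control across the three service models can be summarized as a simple mapping. This is an illustrative simplification of the NIST definitions above, not an official NIST artifact.

```python
# Illustrative summary of the NIST service models described above: which
# layers the consuming agency configures under each model. A simplification
# for discussion purposes, not an official NIST taxonomy.

consumer_controls = {
    "IaaS": {"applications", "operating systems", "storage"},
    "PaaS": {"applications"},
    "SaaS": set(),  # the provider manages the full stack
}

# Consumer control strictly decreases from infrastructure to platform to
# software, matching the progression shown in figure 1:
assert consumer_controls["SaaS"] < consumer_controls["PaaS"] < consumer_controls["IaaS"]
```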
A public cloud should not allow a consumer to know or control other consumers of a cloud service provider’s environment. However, a private cloud can allow for ultimate control in selecting who has access to a cloud environment. Community clouds and hybrid clouds allow for a mixed degree of control and knowledge of other consumers. OMB Has Undertaken Initiatives and Issued Guidance to Increase Agency Adoption of Cloud Computing Services According to OMB, the federal government needs to shift from building custom computer systems to adopting cloud technologies and shared services, which will improve the government’s operational efficiencies and result in substantial cost savings. To help agencies achieve these benefits, OMB required agencies in 2010 to immediately shift to a “Cloud First” policy and increase their use of available cloud and shared services whenever a secure, reliable, and cost-effective cloud service exists. In February 2011, OMB issued the Federal Cloud Computing Strategy, as called for in its 25-Point Plan. The strategy provided definitions of cloud computing services; benefits of cloud services, such as accelerating data center consolidations; a decision framework for migrating services to a cloud environment; case studies to support agencies’ migration to cloud computing services; and roles and responsibilities for federal agencies. For example, the strategy stated that NIST’s role is to lead and collaborate with federal, state, and local government agency chief information officers, private sector experts, and international bodies to identify standards and guidance and prioritize the adoption of cloud computing services. In addition, the strategy stated that agency cloud service contracts should include SLAs designed to meet agency requirements. 
In a December 2011 memo, OMB established the Federal Risk and Authorization Management Program (FedRAMP), a government-wide program intended to provide a standardized approach to security assessment, authorization, and continuous monitoring for cloud computing products and services. All federal agencies must meet FedRAMP requirements when using cloud services and the cloud service providers must implement the FedRAMP security requirements in their cloud environment. To become authorized, cloud service providers provide a security assessment package to be reviewed by the FedRAMP Joint Authorization Board, which may grant a provisional authorization. Federal agencies can leverage cloud service provider authorization packages for review when granting an agency authority to operate, where this reuse is intended to save time and money. Further, at the direction of OMB, the Chief Information Officers Council and the Chief Acquisition Officers Council issued, in February 2012, guidance to help agencies acquire cloud services. In particular, the guidance highlights that SLAs are a key factor for ensuring the success of cloud based services and that federal agencies should include an SLA when creating a cloud computing contract or as a reference. The guidance provides important areas of an SLA to be addressed; for example, it states that an SLA should define performance with clear terms and definitions, demonstrate how performance is being measured, and identify what enforcement mechanisms are in place to ensure the conditions are being met. In addition, NIST, in its role designated by OMB in the Federal Cloud Computing Strategy, collaborated with private sector organizations to release cloud computing guidance, which affirms the importance of using an SLA when acquiring cloud computing services. Moreover, a number of other public and private sector organizations have issued research on the incorporation of an SLA in a cloud computing contract. 
According to these studies, an SLA is important because it ensures that services are being performed at the levels specified in the cloud computing contract, can significantly contribute to avoiding conflict, and can facilitate the resolution of an issue before it escalates into a dispute. The studies also highlight that a typical SLA describes levels of service using various attributes such as availability, serviceability, or performance, and specifies thresholds and financial penalties associated with a failure to comply with these thresholds. Agencies Are Taking Steps to Implement Prior GAO-Identified Improvements for Cloud-based Computing Services We have previously reported on federal agencies’ efforts to implement cloud computing services and on the progress that oversight organizations have made to help federal agencies in those efforts. These include the following: In May 2010, we reported on the efforts of multiple agencies to ensure the security of government-wide cloud computing services. We noted that, while OMB, the General Services Administration (GSA), and NIST had initiated efforts to ensure secure cloud computing services, OMB had not yet finished a cloud computing strategy; GSA had begun a procurement for expanding cloud computing services for its website that served as a central location for federal agencies to purchase cloud services, but had not yet developed specific plans for establishing a shared information security assessment and authorization process; and NIST had not yet issued cloud-specific security guidance. We recommended that OMB establish milestones to complete a strategy for federal cloud computing and ensure it addressed information security challenges. These include having a process to assess vendor compliance with government information security requirements and division of information security responsibilities between the customer and vendor.
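The typical SLA structure these studies describe (service attributes with thresholds, and financial penalties for missing them) can be sketched as a small data structure. The attribute names, threshold levels, and penalty amounts below are hypothetical, not drawn from any actual contract.

```python
# Hypothetical sketch of an SLA as the studies above describe it: service
# attributes with thresholds and a penalty assessed when a measured value
# misses its threshold. All names and numbers are invented for illustration.

sla_thresholds = {
    "availability_pct": (99.9, 5_000),   # (minimum level, penalty in dollars)
    "response_time_ms": (200.0, 1_000),  # (maximum allowed, penalty in dollars)
}

def assess_penalties(measurements: dict) -> int:
    """Total penalty owed for attributes that missed their thresholds."""
    penalty = 0
    for attr, (threshold, fine) in sla_thresholds.items():
        value = measurements[attr]
        # Availability must stay at or above its floor; response time must
        # stay at or below its ceiling.
        missed = value < threshold if attr == "availability_pct" else value > threshold
        if missed:
            penalty += fine
    return penalty

print(assess_penalties({"availability_pct": 99.5, "response_time_ms": 150.0}))  # 5000
```

Making the thresholds and penalties explicit in this way is what gives an agency leverage to resolve a performance issue before it escalates into a dispute.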
OMB agreed with our recommendations and subsequently published a strategy in February 2011 that addressed the importance of information security when using cloud computing, but it did not fully address several key challenges confronting agencies, such as the appropriate use of attestation standards for control assessments of cloud computing service providers, and division of information security-related responsibilities between customer and provider. We also recommended that GSA consider security in its procurement for cloud services, including consideration of a shared assessment and authorization process. GSA generally agreed with our recommendations and has since developed the FedRAMP program. Finally, we recommended that NIST issue guidance specific to cloud computing security. NIST agreed with our recommendations and has since issued multiple publications that address such guidance. In April 2012, we reported that more needed to be done to implement OMB’s 25-Point Plan and measure its results. Among other things, we reported that, of the 10 key action items that we reviewed, 3 had been completed and 7 had been partially completed by December 2011. In particular, OMB and agencies’ cloud-related efforts only partially addressed requirements. Specifically, agencies’ plans were missing key practices, such as a discussion of needed resources, a migration schedule, and plans for retiring legacy systems. As a result, we recommended, among other things, that the Secretaries of Homeland Security and Veterans Affairs, and the Attorney General direct their respective CIOs to complete practices missing from the agencies’ plans for migrating services to a cloud computing environment. Officials from each of the agencies generally agreed with our recommendations and have taken steps to implement them. In July 2012, we reported on the efforts of seven agencies to implement three services by June 2012, including the challenges associated with doing so. 
Specifically, we reported that selected federal agencies had made progress in implementing OMB’s “Cloud First” policy. Seven agencies had implemented 21 cloud computing solutions and had spent a total of $307 million for cloud computing in fiscal year 2012, about 1 percent of their total IT budgets. While each of the seven agencies had submitted plans to OMB for implementing their cloud services, a majority of the plans were missing required elements. Agencies also identified opportunities for future cloud service implementations, such as moving storage and help desk services to a cloud environment. Agencies also shared seven common challenges that they experienced in moving services to cloud computing. We made recommendations to the agencies to develop planning information, such as estimated costs and legacy IT systems’ retirement plans, for existing and planned services. The agencies generally agreed with our recommendations and have taken actions to implement them. In September 2014, we reported on the aforementioned seven agencies’ efforts to implement additional cloud computing services, any reported cost savings as a result of implementing those cloud services, and challenges associated with the implementation. All of the seven federal agencies we reviewed had added more cloud computing services; the number of cloud services implemented by them had increased from 21 to 101 between fiscal years 2012 and 2014. In addition, agencies had collectively doubled the percentage of their IT budgets from 1 to 2 percent during the fiscal year 2012–14 period. Further, the agencies reported a collective cost savings of about $96 million through fiscal year 2013. We made recommendations to the agencies to assess their IT investments that had yet to be evaluated for suitability for cloud computing services. For the most part, the agencies generally agreed with our recommendations and have taken actions to implement them. 
Key Practices for Cloud Computing Service Level Agreements Can Help Agencies Manage Services More Effectively Based on our analysis of practices recommended by ten organizations with expertise in the area of SLAs, as well as by OMB, we compiled the following list of ten key practices for federal agencies to incorporate into cloud computing contracts to help ensure services are performed effectively, efficiently, and securely. The key practices are organized by the following management areas—roles and responsibilities, performance measures, security, and consequences. Roles and responsibilities: (1) Define the roles and responsibilities of the major stakeholders involved in the performance of the SLA and cloud contract. These definitions would include, for example, the persons responsible for oversight of the contract, audit, performance management, maintenance, and security. (2) Define key terms, such as activation date and performance, and identify any ambiguities in the definitions of cloud computing terms, in order to provide the agency with the level of service it can expect from its cloud provider. Without clearly defined roles, responsibilities, and terms, the agency may not be able to appropriately measure the cloud provider’s performance. Performance measures: (1) Define the performance measures of the cloud service, including who is responsible for measuring performance. These measures would include, among other things, the availability of the cloud service; the number of users that can access the cloud at any given time; and the response time for processing a customer transaction. Defining performance parameters provides both the agency and service provider with a well-defined set of instructions to be followed. (2) Specify how and when the agency would have access to its data, including how data and networks will be managed and maintained throughout the life cycle of the service.
Provide any data limitations, such as who may or may not have access to the data and whether there are any geographic limitations. (3) Specify management requirements, for example, how the cloud service provider would monitor the performance of the cloud, report incidents, and how and when they would plan to resolve them. In addition, identify how and when the agency would conduct an audit to monitor the performance of the service provider, including access to the provider’s performance logs and reports. (4) Provide for disaster recovery and continuity of operations planning and testing. This includes, among other things, performing a risk management assessment; how the cloud service would be managed by the provider in the case of a disaster; how data would be recovered; and what remedies would apply during a service failure. (5) Describe applicable exception criteria for when the cloud provider’s service performance measures do not apply, such as during scheduled cloud maintenance or when updates occur. Without any type of performance measures in place, agencies would not be able to determine whether the cloud services under contract are meeting expectations. Security: (1) Specify the security performance requirements that the service provider is to meet. This would include describing security performance metrics for protecting data, such as data reliability, data preservation, and data privacy. Clearly define the access rights of the cloud service provider and the agency as well as their respective responsibilities for securing the data, applications, and processes to meet all federal requirements. (2) Describe what would constitute a breach of security and how and when the service provider is to notify the agency when the requirements are not being met.
Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructures they support, may be lost; information—including sensitive personal information—may be compromised; and the agency’s operations could be disrupted. Consequences: Specify a range of enforceable consequences, including the terms under which a range of penalties and remedies would apply for non-compliance with the SLA performance measures. Identify how such enforcement mechanisms would be imposed or exercised by the agency. Without penalties and remedies, the agency may lack leverage to enforce compliance with contract terms when situations arise. OMB Guidance Addresses Seven of the Ten Key Practices Guidance issued in February 2012 at the direction of OMB highlighted SLAs as a key factor for ensuring the success of cloud-based services and advised that federal agencies should include an SLA, or a reference to one, when creating a cloud computing contract. The guidance provides areas of an SLA to be addressed; for example, it states that an SLA should define performance with clear terms and definitions, demonstrate how performance is being measured, and identify what enforcement mechanisms are in place to ensure the conditions are being met. However, the guidance addressed only seven of the ten key practices listed in table 1 that could help agencies better track performance and thus ensure the effectiveness of their cloud services. Specifically, the guidance did not specify how and when the agency would have access to its data, provide for disaster recovery and continuity of operations planning, and describe any exception criteria. OMB staff members said that, although the guidance drafted by the Chief Information Officers Council and the Chief Acquisition Officers Council was a good start, including all ten key practices should be considered.
Without complete guidance from OMB, there is limited assurance that agencies will apply all the key SLA practices into their cloud computing contracts, and therefore may be unable to hold contractors accountable when performance falls short of their goals. Selected Agencies Incorporated Most of the Key Practices, but Differed in Addressing Them Many of the 21 cloud service contracts we reviewed at the five selected agencies incorporated a majority of the key practices, but the number of practices differed among contracts. Specifically, seven of the cloud service contracts reviewed met all 10 of the key practices. This included three from DHS, three from Treasury, and one from VA. The following figure shows the total cloud service contracts reviewed and the number that met the 10 key practices at the five selected agencies. Of the remaining 14 cloud service contracts, 13 incorporated five or more of the key practices, and 1 did not meet any of the key practices. Figure 3 shows each of the cloud service contracts we reviewed and the extent to which the agency had included key practices in its SLA contracts. Appendix II includes our analysis of all the cloud services we reviewed, by agency. A primary reason that the agencies did not include all of the practices was that they lacked guidance that addresses these SLA practices. Of the five agencies, only DOD had developed cloud service contracting guidance that addressed some of the practices. More specifically, DOD’s guidance only addressed three of the key practices: disaster recovery and continuity of operations planning, metrics on security performance requirements, and notifying the agency when there is a security breach. In addition, the guidance partially addressed the practice on access to agency data, specifically, with regard to transitioning data back to the agency in case of exit/termination of service. 
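The scoring behind figure 3, counting how many of the ten key practices each contract's SLA addresses, can be sketched as follows. The practice labels are abbreviations of the practices discussed above, and the sample contract is hypothetical; only the tallying approach is illustrated.

```python
# Sketch of tallying an SLA contract against the ten key practices discussed
# above. Practice labels are abbreviated; the sample contract is hypothetical.

KEY_PRACTICES = [
    "roles and responsibilities", "key terms defined",
    "performance measures", "data access", "management requirements",
    "disaster recovery and continuity", "exception criteria",
    "security performance requirements", "breach notification",
    "enforceable consequences",
]

def practices_met(contract: dict) -> int:
    """Count how many of the ten key practices a contract's SLA fully meets."""
    return sum(1 for p in KEY_PRACTICES if contract.get(p) == "met")

# A hypothetical contract that fully meets nine practices and only
# partially meets one:
sample_contract = {p: "met" for p in KEY_PRACTICES}
sample_contract["exception criteria"] = "partially met"
print(practices_met(sample_contract))  # 9
```

Applying such a tally across all 21 reviewed contracts yields the kind of distribution shown in figure 3, in which 7 contracts met all 10 practices.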
Agency officials responsible for the cloud services that did not meet or only partially met key practices provided the following additional reasons for not including all ten practices: Officials from DOD’s Office of the Chief Information Officer told us that the reason key practices were not always fully addressed is that, when the contracts and associated SLAs were developed, they did not have the aforementioned DOD guidance on cloud service acquisition and use—namely, the agency’s memorandum on acquiring cloud services that was released in December 2014, and the current Defense Federal Acquisition Regulation Supplement, which was finalized in August 2015. However, as previously stated, this updated guidance addressed three of the ten key practices, and part of one other. Officials from DHS’s Office of the Chief Information Officer stated that the Infrastructure as a Service cloud service addressed the key practices we assessed as partially met or not met, but they did not provide supporting documentation to show that the practices were in place. If key practices have not been incorporated, the system may have decreased performance and the cloud service may not meet its intended goals. HHS officials from the National Institutes of Health attributed unmet or partially met practices for four cloud services—Remedy Force, Medidata, the BioMedical Imaging and BioEngineering website, and the Drug Abuse public website—to the fact that they evaluate the cloud vendor’s ability to meet defined agency needs, rather than negotiate with vendors on SLA requirements. While this may explain why these services did not address all SLA key practices, the agency may be placing its systems at risk of not conducting adequate service level measurements, which may result in decreased service levels.
HHS officials from the Administration for Children and Families stated that the reason key practices were partially addressed or not addressed for the Grant Solutions cloud service was that these practices were being managed by HHS personnel using other tools and plans, rather than via the SLA established for this service. For example, according to the officials, they are using a management information system to monitor performance of the cloud provider. In addition, with respect to disaster management, the officials said that they have their own disaster recovery plan. Nonetheless, leading studies show that these practices should still be incorporated as part of the cloud service contract to ensure agencies have the proper control over their cloud services. Treasury officials said that, among other things, the reason the SLAs for Treasury Web Services and the IRS Portal Environment only partially met certain key practices was that the practices were being performed by support contractors hired by the cloud service provider and were not directly subject to the SLAs established between Treasury and the cloud service provider. While having contractors perform practices is an acceptable approach, Treasury officials were unable to provide supporting documentation to show that support contractors were assisting with the practices in question. Officials from VA’s Office of Information and Technology said the reason the key practice associated with penalties and remedies was not included in the Terremark SLA was that penalties were addressed within other parts of the contract; however, officials were not able to provide documentation identifying such penalties. With regard to an SLA for eKidney, officials told us they had not addressed any of the key practices because no SLA had been developed between the agency and the cloud service provider.
Without including an SLA in cloud service contracts, the agency runs the risk of not having the mechanisms in place to effectively evaluate or control contractor performance. Until these agencies develop SLA guidance and incorporate all key practices into their cloud computing contracts, they may be limited in their ability to measure the performance of the services, and, therefore, may not receive the services they require. Conclusions Although OMB has provided agencies guidance to better manage contracts for cloud computing services, this guidance does not include all the key practices that we identified as necessary for effective SLAs. Similarly, Defense, Homeland Security, Health and Human Services, Treasury, and Veterans Affairs have incorporated many of the key practices in the cloud service contracts they have entered into. Overall, this is a good start towards ensuring that agencies have mechanisms in place to manage the contracts governing their cloud services. However, given the importance of SLAs to the management of these million-dollar service contracts, agencies can better protect their interests by incorporating the pertinent key practices into their contracts in order to ensure the delivery and effective implementation of services they contract for. In addition, agencies can improve management and control over their cloud service providers by implementing all recommended and applicable SLA key practices. Recommendations for Executive Action To ensure that agencies are provided with more complete guidance for contracts for cloud computing services, we recommend that the Director of OMB include all ten key practices in future guidance to agencies. To help ensure continued progress in the implementation of effective cloud computing SLAs, we recommend that the Secretary of Defense direct the appropriate officials to ensure key practices are fully incorporated for cloud services as the contracts and associated SLAs expire. 
These efforts should include updating the DOD memorandum on acquiring cloud services and the current Defense Federal Acquisition Regulation Supplement to more completely include the key practices. To help ensure continued progress in the implementation of effective cloud computing SLAs, we recommend that the Secretaries of Health and Human Services, Homeland Security, Treasury, and Veterans Affairs direct appropriate officials to develop SLA guidance and ensure key practices are fully incorporated as the contracts and associated SLAs expire. Agency Comments and Our Evaluation In commenting on a draft of this report, four of the agencies—DOD, DHS, HHS, and VA—agreed with our recommendations; OMB and one agency (Treasury) had no comments. The specific comments from each agency are as follows: In an e-mail received on March 25, 2016, OMB staff from the Office of E-Government and Information Technology stated that the agency had no comments at this time. In written comments, the Department of Defense concurred with our recommendation and described actions it plans to take to address the recommendation. Specifically, DOD stated that it will update its cloud computing guidance and contracting guidance as appropriate. The Department of Defense’s comments are reprinted in appendix III. In written comments, the Department of Homeland Security concurred with our recommendation and described actions it plans to take to address the recommendation. Specifically, the department will establish common cloud computing service level agreement guidance. DHS also provided technical comments, which we have incorporated in the report as appropriate. The Department of Homeland Security’s comments are provided in appendix IV. In written comments, the Department of Health and Human Services concurred with our recommendation, but noted that it was not directed by a federal mandate.
We acknowledge that our recommendation is not directed by a mandate; however, implementing leading practices for cloud computing can result in significant benefits. The department also provided technical comments, which we have incorporated in the report as appropriate. The Department of Health and Human Services’ comments are provided in appendix V. In an e-mail received on March 18, 2016, an audit liaison from the Department of the Treasury’s Office of the CIO stated that the department had no comment. In written comments, the Department of Veterans Affairs concurred with our recommendation and described planned actions to address it. For example, the department will develop service level agreement guidance to include the 10 key practices. The Department of Veterans Affairs’ comments are provided in appendix VI. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, Health and Human Services, Homeland Security, the Treasury, and Veterans Affairs; the Director of the Office of Management and Budget; and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) identify key practices used in cloud computing service level agreements (SLA) to ensure service is performed at specified levels and (2) determine the extent to which federal agencies have incorporated such practices into their cloud computing service level agreements.
To identify key practices used in cloud computing service level agreements, we analyzed SLA research, studies, and guidance developed and used by federal agencies and private entities. We then performed a comparative analysis of the practices to identify the practices that were recommended by at least two sources. Specifically, we analyzed information from publications and related documentation issued by the following ten public and private organizations to determine key SLA practices: the Federal Chief Information Officer Council; the Chief Acquisition Officers Council; the National Institute of Standards and Technology; the European Commission Directorate General for Communications Networks, Content and Technology; the Office of Management and Budget; Gartner; the MITRE Corporation; the Cloud Standards Customer Council; the International Organization for Standardization; and the International Electrotechnical Commission. Next, we organized these practices into management areas and validated our analysis through interviews with experts from these organizations. We also had officials from the Office of Management and Budget (OMB) review and validate that these practices are the ones the office expects federal agencies to follow. In cases where experts disagreed, we analyzed their responses, including the reasons they disagreed, and made changes as appropriate. These actions resulted in our list of key practices for cloud service SLAs. To determine the extent to which federal agencies have incorporated key practices into their cloud computing contracts, we selected five agencies to review based, in part, on their fiscal year 2015 IT budgets and planned spending on cloud computing services. The agencies selected were the Departments of Defense (DOD), Homeland Security (DHS), Health and Human Services (HHS), Treasury, and Veterans Affairs (VA). We selected these agencies based on the following two factors. First, they have the largest planned IT budgets for fiscal year 2015.
Their budgets, which collectively totaled $57 billion, represent about 72 percent of the total federal IT budget ($78 billion). Second, these agencies plan to spend relatively large amounts on cloud computing. Specifically, based on our analysis of OMB’s fiscal year 2015 budget data, each of the five departments was among the top 10 for the largest amount budgeted for cloud computing, and collectively they planned to spend $1.2 billion on cloud computing, which represents about 57 percent of the total amount that federal agencies plan to invest in cloud computing ($2.1 billion). To select and review the cloud services used by the agencies, we obtained an inventory of cloud services for each of the five agencies, and then, for each agency, we listed its cloud services in random order and selected the first two cloud services in the list for each of the three major cloud service models (infrastructure, platform, and software). In certain cases, the agency did not have two cloud services for a service model, so the number chosen for that service model was less than two. This resulted in a non-generalizable sample of 23 cloud services. However, near the end of our engagement, agencies identified 2 of the services as being in a pilot stage (one from DHS, and one from HHS), and thus not operational. We excluded these services from our analysis, as our methodology was to assess only operational cloud services. Due to the stage of the engagement, we were unable to select additional services for review. Further, because no computer-generated data were used, we determined that there were no data reliability issues. For each of the selected services, we compared its cloud service contract (if one existed) and any associated SLA documentation to our list of key practices to determine if there were variances and, if so, their cause and impact.
To do so, two team analysts independently reviewed the cloud service contracts against the key practices using the following criteria: Met: all aspects of the key practice were fully addressed. Partially met: some aspects of the key practice were addressed. Did not meet: no aspects of the key practice were addressed. In cases where analysts differed on the assessments, we discussed what the rating should be until we reached a consensus. We also interviewed agency officials to corroborate our analysis and identify the causes and impacts of any variances. We conducted this performance audit from January 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Analysis of Agencies’ Cloud Service SLAs against Key Practices The following tables show the cloud services we assessed at each of the five agencies—DOD, DHS, HHS, Treasury, and VA—and our analysis of each contract for cloud services against the key practices. In cases where the SLA partially met a practice, the analysis also includes discussion of the rationale for why that assessment was provided. With regard to those services that partially met key practices: The Integrated Risk Information System partially addressed one key practice on how and when the agency was to have access to its data and networks. It included how the data would be transitioned, but did not specify how access to data and networks was to be managed or maintained. The Case Tracking cloud service partially included the practice on specifying metrics for security performance requirements. It specified how security needs were to be met but did not give specific metrics for doing so.
Email as a Service partially addressed two key practices. For the practice on specifying service management requirements, it specified how the cloud service provider was to monitor performance, but did not address how the provider was to report performance or how the agency was to confirm the performance. For the other practice on specifying metrics for security performance requirements, it included how security needs were to be met but did not specify the security metrics. The Web Portal partially incorporated two key practices. For the practice on how and when the agency was to have access to its data and networks, it specified how the data was to be transitioned, but not how access to data and networks was to be managed or maintained. For the other practice on specifying metrics for security performance requirements, it included monitoring of the contractor regarding security, but did not specify security metrics. Infrastructure as a Service partially incorporated two key practices. For the practice on how and when the agency was to have access to its data and networks, it specified how and when the agency was to have access to its data and networks, but did not specify how data and networks were to be transitioned back to the agency in case of an exit. For the other practice on service management requirements, it described how the cloud service provider is to monitor performance, but did not specify how and when the agency was to confirm audits of the service provider’s performance. With regard to those services that partially met key practices, the National Institutes of Health’s Remedy Force partially addressed one key practice on defining measurable performance objectives. It included various performance objectives, such as levels of service and availability of the cloud service, capacity and capability, and measures for response time, but it did not include which party was to be responsible for measuring performance.
The National Institutes of Health’s Medidata Rave partially incorporated two key practices. It defined measurable performance objectives; specifically, it specified levels of service, capacity and capability of the service, and response time, but did not specify the period of time over which performance was to be measured. For the other practice on specifying a range of enforceable consequences, it specified remedies, but did not identify any penalties related to non-compliance with performance measures. The National Institute on Drug Abuse Public Website partially addressed two key practices. For the practice on specifying how and when the agency is to have access to its data and networks, it specified how and when the agency was to have access to its data and networks, but did not identify how data and networks were to be managed throughout the duration of the SLA. For the other practice on specifying a range of enforceable consequences, it included a number of remedies, but did not specify a range of enforceable penalties. HHS’s Grant Solutions partially incorporated one key practice on specifying service management requirements. It provided for when and how the agency was to confirm cloud provider performance, but did not specify how the cloud service provider was to monitor performance and report results. With regard to those services that partially met key practices, Treasury’s Internal Revenue Service’s Portal Environment partially included one key practice on specifying how and when the agency was to have access to its data and networks. It specified how and when the agency was to have access to its data and networks, but it did not specify how data and networks were to be transitioned back to the agency in case of an exit. Treasury’s Web Solutions partially addressed two key practices.
For the practice on specifying how and when the agency was to have access to its data and networks, it specified how and when the agency was to have access to its data and networks, but it did not specify how data and networks would be transitioned back to the agency in case of an exit. For the other practice on specifying a range of enforceable consequences, it did not provide detailed information on a range of enforceable penalties and remedies for non-compliance with SLA performance measures. Appendix III: Comments from the Department of Defense Appendix IV: Comments from the Department of Homeland Security Appendix V: Comments from the Department of Health & Human Services Appendix VI: Comments from the Department of Veterans Affairs Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, individuals making contributions to this report included Gary Mountjoy (assistant director), Gerard Aflague, Scott Borre, Nancy Glover, Lori Martinez, Tarunkant Mithani, Karl Seifert, and Andrew Stavisky.
Cloud computing is a means for delivering computing services via IT networks. When executed effectively, cloud-based services can allow agencies to pay for only the IT services used, thus paying less for more services. An important element of acquiring cloud services is a service level agreement that specifies, among other things, what services a cloud provider is to perform and at what level. GAO was asked to examine federal agencies' use of SLAs. GAO's objectives were to (1) identify key practices in cloud computing SLAs and (2) determine the extent to which federal agencies have incorporated such practices into their SLAs. GAO analyzed research, studies, and guidance developed by federal and private entities to develop a list of key practices to be included in SLAs. GAO validated its list with the entities, including OMB, and analyzed 21 cloud service contracts and related documentation of five agencies (with the largest fiscal year 2015 IT budgets) against the key practices to identify any variances, their causes, and impacts. Federal and private sector guidance highlights the importance of federal agencies using a service level agreement (SLA) in a contract when acquiring information technology (IT) services through a cloud computing services provider. An SLA defines the level of service and performance expected from a provider, how that performance will be measured, and what enforcement mechanisms will be used to ensure the specified performance levels are achieved. GAO identified ten key practices to be included in an SLA, such as identifying the roles and responsibilities of major stakeholders, defining performance objectives, and specifying security metrics. The key practices, if properly implemented, can help agencies ensure services are performed effectively, efficiently, and securely. 
Guidance issued to agencies in February 2012 at the direction of the Office of Management and Budget (OMB) included seven of the ten key practices described in this report that could help agencies ensure the effectiveness of their cloud services contracts. GAO determined that the 21 cloud service contracts it reviewed at the five agencies generally included a majority of the ten key practices. Specifically, of the 21 cloud service contracts reviewed from the Departments of Defense, Health and Human Services, Homeland Security, Treasury, and Veterans Affairs, 7 had fulfilled all 10 of the key practices, as illustrated in the figure. Of the remaining 14 contracts, 13 had incorporated 5 or more of the 10 key practices, and 1 had not included any practices. Agency officials gave several reasons for why they did not include all elements of the key practices in their cloud service contracts, including that guidance directing the use of such practices had not been created when the cloud services were acquired. Unless agencies fully incorporate the key practices into their SLAs, they may not be able to adequately measure the performance of the services and, therefore, may not be able to effectively hold the contractors accountable when performance falls short.
Background In 1994, the Congress established NRCS and gave it jurisdiction over programs of the former Soil Conservation Service, as well as other USDA financial or technical assistance programs for natural resource conservation and rural development. With more than 12,000 employees nationwide—about three-fourths located in its state offices and 2,500 field offices—NRCS focuses primarily on private and other nonfederal lands. NRCS staff regularly work in partnership with state, local, and private entities, using the same case files and technical assistance tools. NRCS’s primary partners are state conservation agencies and the approximately 3,000 conservation districts nationwide. These conservation districts are units of local government organized to support local conservation efforts with their own programs and staff. When asked to do so by farmers and others, NRCS staff assess farmers’ needs—a process that generally involves traveling to the site. NRCS staff work with the landowner to develop a conservation plan that describes the strategy to be used, the schedule of activities, and estimated costs. In some instances, plans are revised several times until the landowner selects a final alternative. If the landowner applies to implement the conservation plan under a Farm Bill program and the land meets the eligibility criteria and is accepted, NRCS develops a contract. After a contract is developed and signed, NRCS staff complete paperwork for payments to the landowner. NRCS staff assist with installation of practices, for example, by surveying land, providing practice standards and specifications, and ensuring contractors have carried out the terms of the contract. In addition, NRCS continues to document activities throughout the life of the contract, which may be years or decades. Staff also periodically certify that the participant is complying with the contract terms, depending on the program requirements. 
In 1998, the Chief of NRCS called for a new agencywide effort to improve NRCS’s accountability by providing better information and analyses on how the agency uses its resources and what it achieves with its funds. As part of these efforts, NRCS has taken steps to estimate its future technical assistance costs and budgets, as described in figure 1. NRCS’s model for estimating the cost of programs consists of an Excel spreadsheet that makes program-by-program calculations using such data as time per task estimates and salary, benefits, and support costs, as well as assumptions about the average length of a contract and the proportion of work that will be performed during each year of the contract. Figure 2 illustrates the information used in NRCS’s cost of programs model. Time Per Task Estimates NRCS state offices have overall responsibility for developing the time per task estimates used in the model. NRCS divided the country into 218 areas and assigned teams of about 7 to 12 NRCS and partner staff to develop time per task estimates for the work to be done in each area. According to NRCS’s process for developing time per task estimates, these teams first define the typical work in their area—that is, the sizes and types of land they work with and the conservation practices they plan for this land. For example, a Texas team determined that one type of typical conservation work in its area was planting grass cover on a 150-acre dairy farm. Using this description, the team discussed and agreed on estimates of time required for each of the 29 tasks associated with performing this work, such as the time it would take them to design a grass-planting plan. Generally, after developing the time per task estimates, the team leader submits them to the state time per task leader to review and input into an NRCS database. The methods of review the state offices use include comparing the areas’ estimates with each other.
If the state leader encounters substantial unexplained differences among estimates, he or she generally contacts the team leader for an explanation and to have the estimate changed, if appropriate. The state leader then submits final time per task estimates to NRCS headquarters, which also reviews them. If questions arise, headquarters may ask the state office to perform additional review, and make changes, if warranted. Since the quality of data entered into a model affects the estimates it produces, NRCS has worked to increase the reliability of its time per task estimates by developing new estimates on three separate occasions and asking for staff feedback on the quality of the estimating process. In addition, NRCS plans to update its time per task estimates again because the tasks and the time required to perform them have changed, along with program requirements and policies, over the past 5 years. Salary Costs NRCS’s cost model also includes data on average salary costs, which are based, in part, on data from NRCS’s time and attendance system. NRCS staff enter hours that they work by program, and by the activities associated with those programs, biweekly in the time and attendance system. For example, they might record working 20 hours on CRP and 20 hours on WRP. For the 20 CRP hours, they may record 10 hours to determine land eligibility for multiple landowners and 10 hours working with landowners on conservation plans. These data are reviewed first by supervisors and then at the state office level. NRCS’s Technical Assistance Cost Estimates Differ from Actual Costs Reported by NRCS NRCS tested cost estimates from its model for fiscal years 2002 and 2003 by comparing them with the agency’s actual costs. Our analysis of these comparisons shows that program-by-program estimates from NRCS’s model vary considerably from the agency’s actual costs as shown in table 1. 
These results do not meet the agency’s goal of achieving a difference of no more than 10 percent between estimates and reported costs. For fiscal year 2002, the CRP estimate was closest to the actual costs—it was 5 percent lower than the actual costs reported. Of the remaining program estimates for 2002, three were higher than the actual cost data by 48 percent to 302 percent, and four were lower by 19 percent to 36 percent. Altogether, NRCS estimated that its technical assistance costs for eight of the Farm Bill conservation programs would be about $254 million, 19 percent higher than its actual costs of about $213 million. NRCS did not estimate costs for two programs that had not yet been implemented. For fiscal year 2003, the estimate for the Agricultural Management Assistance Program, a program that provides financial assistance to producers in 15 states to, among other things, construct or improve irrigation structures and plant trees, was closest to the reported costs—it was 9 percent greater than the actual costs. Of the remaining program estimates for 2003, six were higher than the actual cost data by 17 percent to 50 percent, and three were lower by 16 percent to 60 percent. Altogether, NRCS estimated its technical assistance costs for 10 Farm Bill conservation programs would be about $295 million, about 15 percent higher than its actual costs of about $257 million. This estimate was 4 percent closer to total actual costs than in fiscal year 2002. For the three largest programs—the Environmental Quality Incentives Program, CRP, and WRP—the estimates varied from the actual cost data somewhat less in fiscal year 2003 than 2002. In fiscal year 2002, the estimates had a spread of 70 percent—from a 22 percent underestimate to a 48 percent overestimate. In fiscal year 2003, the estimates had a spread of 38 percent—from a 16 percent underestimate to a 22 percent overestimate.
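The percentage differences in these comparisons follow from simple arithmetic: the gap between the estimate and the actual cost, expressed as a share of the actual cost. As an illustrative sketch using the report's rounded totals (in millions of dollars):

```python
# Percent difference between estimated and actual technical assistance
# costs: (estimate - actual) / actual * 100. A positive result means
# the estimate exceeded actual costs. Totals are the report's rounded
# figures, in millions of dollars.

def percent_difference(estimate, actual):
    return (estimate - actual) / actual * 100

# Fiscal year 2002: estimated $254 million vs. actual $213 million.
print(round(percent_difference(254, 213)))  # 19 (percent overestimate)

# Fiscal year 2003: estimated $295 million vs. actual $257 million.
print(round(percent_difference(295, 257)))  # 15 (percent overestimate)
```

Measured this way, both totals exceed NRCS's goal of a difference of no more than 10 percent between estimates and reported costs.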
Differences in Estimated and Actual Costs Have Several Causes We identified several reasons for the differences between NRCS’s estimated and actual costs. First, for fiscal years 2002 and 2003, the model estimated costs based on the assumption that programs would be fully funded in the beginning of the fiscal year. This did not happen, however, because the 2002 Farm Bill was enacted later than expected and because USDA operated under a continuing resolution for a good portion of fiscal year 2003. According to NRCS officials, as a result, less technical assistance work was performed than the estimates reflected. Second, NRCS’s model includes costs for work performed by partners’ staff and paid for by the partner organizations, while the actual cost data generally contains only the costs for the work of NRCS’s staff. Third, NRCS’s model uses some data that are based on inaccurate assumptions. This is likely to have contributed to differences between estimates and actual costs reported by NRCS. Actual Timing of NRCS Work Differed from Timing Assumed When Estimating Costs In several instances, NRCS performed technical assistance work at different times than NRCS originally assumed it would when estimating technical assistance costs. First, NRCS estimates have assumed that full funding would be available for new contracts at the start of the fiscal year, but in practice, this has not occurred. For example, in 2002, the Farm Bill was enacted in May, later than NRCS expected, and OMB apportioned funds to USDA for implementing the Farm Bill programs in July—about three quarters of the way through the fiscal year. Since only 2-1/2 months of the fiscal year remained, different work—and in some cases, less work—was performed under the Farm Bill than NRCS had anticipated, according to NRCS officials. The general 2002 sign-up period for CRP contracts did not start until August 2002, limiting the amount of work performed on new CRP contracts.
Moreover, in fiscal year 2003, USDA operated under a continuing resolution until receiving fiscal year 2003 appropriations in February and an OMB apportionment in March—about halfway through the fiscal year. While NRCS can adjust its assumptions, it is not possible to eliminate uncertainties and variances related to the timing of funding approvals that cause differences between the estimated and actual program costs. NRCS officials said that they have been studying the model’s estimates and modifying some assumptions about workloads to improve the estimates.
NRCS Model Included Costs for Work Performed by Partners
Because NRCS’s partners’ efforts are a relatively important part of overall technical assistance efforts, NRCS has included its partners’ costs in its model so that it is in a position to estimate total technical assistance costs to carry out the programs. For example, according to 1999 NRCS staff estimates, the most recent available, NRCS’s partners were responsible for about 17 percent of total CRP costs and 15 percent of total WRP costs that year. Using those percentages, we calculated that NRCS’s partners might have added about $15 million to these two programs’ fiscal year 2002 technical assistance costs. To further illustrate the effects of including partners’ costs, we conducted two comparisons. First, we compared NRCS’s fiscal year 2002 technical assistance cost estimates, which include partners’ costs, for two programs—CRP and WRP—with the costs reported by NRCS. We then made the same comparison but with partners’ costs excluded. (See table 2.) The results show that the differences between the estimated and actual costs increase when partner costs are excluded. For the CRP program, for example, the difference between the estimated and reported costs increases from a 5 percent underestimate to a 22 percent underestimate when partner costs are excluded.
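The effect of excluding partner costs can be approximated from the report's own figures. In the sketch below (our reconstruction, not NRCS's calculation), an estimate that includes partner costs is scaled down by the partners' share of the estimated total (17 percent for CRP and 15 percent for WRP, per the 1999 staff estimates), and the difference from NRCS-only actual costs is recomputed:

```python
# Reconstruction (not NRCS's model): how removing partners' share from an
# estimate widens its gap with NRCS-only actual costs. The partner shares
# and the with-partner differences are the report's figures; the formula
# for combining them is ours.
def diff_excluding_partners(diff_with_partners, partner_share):
    """diff_with_partners: estimate vs. actual with partner costs included
    (e.g., -0.05 means a 5 percent underestimate).
    partner_share: partners' fraction of the estimated total cost."""
    return (1 + diff_with_partners) * (1 - partner_share) - 1

for program, diff_incl, share in [("CRP", -0.05, 0.17), ("WRP", -0.22, 0.15)]:
    d = diff_excluding_partners(diff_incl, share)
    print(f"{program}: {diff_incl:+.0%} with partner costs -> {d:+.0%} without")
```

The results, roughly a 21 percent underestimate for CRP and a 34 percent underestimate for WRP, are close to the 22 percent and 34 percent figures cited from table 2 after rounding, suggesting the reported with- and without-partner differences are internally consistent with the 1999 partner shares.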
For WRP, the difference between the estimated and reported costs also increases—from a 22 percent underestimate to a 34 percent underestimate—when partner costs are excluded. However, NRCS officials believe that the differences between their estimates, which include partners’ costs, and the reported actual costs are less significant because their partners’ costs have decreased, though they could not say by how much. This may not be the case, however. An NRCS briefing document reported in 2000 that NRCS expected its partners to perform more of its workload than in 1999. Similarly, most of the officials we spoke with from conservation districts—which are key partner organizations—reported that conservation districts have increased the amount of technical assistance work they have performed in the past few years. Finally, partners’ costs should not be included when NRCS reports its costs or when it compares its estimated and actual costs. In commenting on a draft of this report, NRCS officials noted that it is important to include partner costs when estimating the full costs of the programs. They also stated that estimated costs are used for budgetary purposes and that actual costs are the only costs charged to the government and used in final reports.
Time Per Task Data Used in the Model Are Based on Inaccurate Assumptions
NRCS’s estimates of technical assistance costs for 10 Farm Bill conservation programs are developed, in part, using time per task data that are based on inaccurate assumptions. First, NRCS’s basic assumption used for developing its estimates—descriptions of typical technical assistance work—oversimplifies the situations field staff encounter to the extent that the resulting estimates do not accurately represent the time it takes staff to do the work. Second, NRCS’s time per task estimates are based on some assumptions that do not reflect actual workplace conditions.
Finally, some NRCS staff who provide technical assistance are uncertain about how to allocate the time they spend traveling to and from field locations among NRCS’s programs.
Descriptions of Typical Work
NRCS uses descriptions of typical work as the basis for its time per task estimates. NRCS guidance tells teams to develop descriptions of their typical work but does not tell them how to do this for areas in which conditions are diverse. In the absence of more specific guidance, staff have had difficulty with the concept of typical work. According to one team leader, for example, the guidance does not explain how to determine what is typical when land sizes, types of farms, or conservation practices vary significantly, and several team leaders told us that such variations made identifying typical work difficult or impossible. For example, a North Dakota team leader said that the conservation work in his area varies considerably, ranging from installing pipelines to creating ponds to building fences. Moreover, a Wisconsin team leader told us that at least three typical descriptions would be needed to represent the variety of farms in his area: one each for dairy, vegetable, and cranberry farms. Other NRCS staff commented that it may not be possible to describe what is typical for western rangeland, where operations vary from hundreds of acres to tens of thousands of acres. Finally, another team leader told us that NRCS’s guidance is vague and that, as a result, his team interpreted the term “typical” to mean the average, while another understood it to be the median, or middle, value. NRCS staff raised these concerns in 2000, and NRCS officials are aware of the difficulties staff have encountered in describing their typical work. According to these officials, the scope of this estimating problem is nationwide, and resolving this concern is important for making more accurate estimates.
NRCS is considering whether it is possible to resolve the difficulties staff face in describing typical work by enlarging the number of areas for which estimates are developed or by collecting data for more than one “typical” unit per area, thereby reducing the extent of diversity within each. Additional information will be needed to determine whether such an approach will succeed.
Assumptions about the Workplace
NRCS has directed its staff to base its time per task estimates on three assumptions that we believe do not reflect actual workplace conditions. First, estimating staff are to assume that all NRCS staff are fully trained. This is not the case, however. About 10 percent of current NRCS staff were hired between 2001 and 2004. According to one NRCS official, staff need 1 to 1.5 years before they can independently perform most technical assistance for CRP and 3 to 5 years for WRP. Because not all staff are fully trained, assuming that they are is likely to inappropriately lower the time per task estimates. The second assumption is that staff are not interrupted during their workday. Under normal conditions, staff regularly experience interruptions that decrease productivity. Assuming otherwise is also likely to inappropriately lower the time per task estimates. A third assumption, which would likely raise the time per task estimates, is that NRCS staff completely follow NRCS’s policies, procedures, and guidance in performing work. In practice, however, staff sometimes take shortcuts that do not comply with all policies, procedures, and guidance—thereby completing tasks faster than expected. In contrast, NRCS’s reported actual costs are based on actual work conditions. That is, these costs reflect the additional time taken by new and partially trained staff, the added time caused by interruptions that staff regularly face, and the timesaving shortcuts that staff sometimes take.
Although we could not determine the precise effects of these assumptions, some information indicates that they warrant reexamination. For example, NRCS staff reported that by using shortcuts, their CRP and WRP work took 24 percent and 31 percent less time, respectively, than the time they had estimated in their workload analysis. NRCS officials said that they have adjusted the model to take into account some policy and procedural changes that reduce the workloads of NRCS staff and added that they would also reconsider the assumptions that we identified.
Allocation of Travel Time
NRCS staff also reported some confusion about how to allocate their travel time. The guidance directs staff to divide travel time, which constitutes an important portion of field staff work time, among different program activities when necessary. Staff usually drive to meet with landowners and view land, often traveling to distant locations and working on several program activities with several farmers during a single trip. When this occurs, they must determine how much travel time to assign to each program. The guidance states that travel time should be “prorated” among different program activities but does not explain how to do this. This lack of guidance results in reporting inconsistencies: staff who visit several farms on a single trip have no consistent basis for prorating that time among multiple program activities. NRCS officials said that they are aware of this problem but have not yet developed a solution.
Conclusions
While we recognize that only 2 years of comparative cost data are available and that NRCS has been striving to improve its technical assistance cost estimates, NRCS’s cost estimates differ enough from the actual costs it reported to be of concern to those who use these estimates.
NRCS’s overall technical assistance cost estimate for 10 Farm Bill conservation programs for fiscal year 2003 is closer to the reported cost than the estimate was for fiscal year 2002, but too much variation is evident on a program-by-program basis in both years. Until improvements in NRCS’s technical assistance cost estimating are demonstrated through tests of the model’s results, we believe NRCS cost estimates should be used with caution. Without identifying costs incurred by its partners when assessing the reasonableness of the estimates made by its model, NRCS cannot ensure the validity of its cost comparisons. Also, unless NRCS modifies its assumptions to better reflect actual workplace conditions, its technical assistance cost estimates will not be as precise as they could be. Finally, without pilot testing its plans for improving descriptions of typical work or other changes in data development, NRCS cannot be assured that its investment in its next nationwide workload analysis will be well spent. As NRCS improves the quality of its workload analysis, including its time per task estimates, and the assumptions used in the model, we believe more accurate technical assistance cost estimates will be developed. Moreover, when these improvements have been made, NRCS will be in a better position to evaluate the overall quality of its estimating. Further testing in the years ahead may well be needed to gain a better understanding of the causes of variations in the program-by-program cost estimates. 
Recommendations
To improve the accuracy, and therefore the usefulness, of NRCS’s program cost estimating, we recommend that the Secretary of Agriculture direct the Chief of NRCS to take the following three actions: clearly identify nonreimbursable costs incurred by NRCS’s partners when presenting estimates of NRCS’s costs, ensuring that the model’s estimates are comparable with actual data; change the assumptions used for developing time per task data for the model so that they better reflect actual work conditions; and pilot test the feasibility of proposed changes in the development of the time per task data, including changes in the development of typical work descriptions in several diverse areas of the country, before proceeding with another nationwide workload analysis.
Agency Comments and Our Evaluation
We provided a draft of this report to USDA for review and comment. We received oral comments from NRCS, which are summarized below. We also received technical comments, which are incorporated in this report as appropriate. NRCS accepted our findings and said it would develop actions in response to our recommendations. The agency stated that the report provides the basis for updating the agency’s workload analysis, more accurately estimating partner contributions to NRCS’s programs, and making other necessary adjustments to the assumptions in cost estimates. NRCS also stated that, while our report rightly points out that some assumptions used in estimation were inaccurate, portions of the report had an unnecessarily negative tone. NRCS noted, for example, that the estimates would have been closer to actual costs if funding had been available at the beginning of fiscal years 2002 and 2003. We agree that earlier funding would likely have helped close the gap between estimated and actual costs.
NRCS also stated that partner contributions could be excluded from the model’s estimates but that NRCS wanted to acknowledge the full cost of the programs by including partner costs. We agree that partners’ costs can and should be excluded from the model’s estimated costs when cost estimates are used for budgetary purposes or for comparison with actual costs. Lastly, NRCS commented that we found no problems with the logic of the model. We disagree. The model’s inclusion of costs that the agency did not incur, such as partners’ costs, is inappropriate when comparing estimates with NRCS’s actual cost data. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to interested congressional committees, the Secretary of Agriculture, the Chief of NRCS, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix II.
Objectives, Scope, and Methodology
Our objectives were to (1) review the technical assistance cost estimates produced by the model and (2) identify the sources of differences that may occur between the estimates and NRCS’s reported costs. To review the estimates, we assessed the differences between the model’s results and the actual costs reported by NRCS. We compared NRCS’s fiscal years 2002 and 2003 technical assistance cost estimates by program with the actual costs reported by NRCS—the only years for which these two sets of costs were available. To identify sources of differences between these costs, we assessed the assumptions and data used in the cost estimates.
To assess NRCS’s cost model, we analyzed the model and related documentation to determine whether the model appeared to be a reasonable method for estimating program costs. In addition, we checked whether the model’s formulas, contained in an MS Excel file, used the appropriate data, and we reviewed the formulas to ensure that they were logical. We also replicated the model’s formulas using the proper data and ensured that the resulting figures matched those shown in the model. We interviewed NRCS officials responsible for developing the cost model to gain an understanding of the model and its development. These officials worked in NRCS’s Budget Planning and Analysis Division, its Operations Management and Oversight Division, and field offices. We conducted sensitivity analyses to illustrate the possible importance of different variables in the model. These sensitivity analyses were conducted using Monte Carlo simulation, which uses random numbers to measure the effects of uncertainty on model output—in this case, the technical assistance cost estimates. Our analysis was based on general assumptions about the probability distributions characterizing some of the variables in the model. Also, we interviewed and requested documentation from 20 officials of state government agencies and conservation district associations to assess whether the nonreimbursed contributions of conservation districts had increased or decreased in the past several years. We did so for each of the 10 states with the most CRP and WRP contracts: Illinois, Iowa, Kansas, Louisiana, Minnesota, Mississippi, Missouri, New York, North Dakota, and Wisconsin. Of the 20 officials we contacted, 13 responded to our request and provided some information about partners’ contributions. To assess NRCS’s means for ensuring the reliability of the data used in the cost model, we traced the time per task and other data to their respective sources, which included agency reports and databases.
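The Monte Carlo approach described above can be illustrated with a toy cost-estimating relationship. Everything in this sketch is hypothetical: the relationship (cost = time per task x number of tasks x hourly rate) and the probability distributions are illustrative stand-ins, not the variables or distributions GAO actually used:

```python
# Illustrative Monte Carlo sensitivity analysis: random draws for uncertain
# inputs propagate into a distribution of cost estimates. All inputs here
# are hypothetical, not GAO's actual model variables or distributions.
import random

random.seed(0)  # reproducible draws
N = 10_000
costs = []
for _ in range(N):
    hours_per_task = random.triangular(2.0, 6.0, 3.5)  # assumed uncertainty
    tasks = random.gauss(10_000, 1_000)                # assumed workload
    rate = random.uniform(35.0, 45.0)                  # assumed $/hour
    costs.append(hours_per_task * tasks * rate)

costs.sort()
mean = sum(costs) / N
low, high = costs[int(0.05 * N)], costs[int(0.95 * N)]
print(f"mean cost estimate: ${mean / 1e6:.1f} million")
print(f"90% of simulated estimates fall between "
      f"${low / 1e6:.1f} and ${high / 1e6:.1f} million")
```

The width of the resulting interval indicates how sensitive the cost estimate is to the assumed input distributions; rerunning the simulation with a different distribution for one variable shows that variable's contribution to the overall uncertainty.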
Since information regarding data sources was not always readily available in writing, we met with NRCS officials who described and provided the sources of the model’s data. Once we identified the source, we verified that the data had been correctly transferred from the source to the model. In addition, we performed limited reliability tests, primarily tests for omitted entries and outliers, of the available source data. Furthermore, to obtain NRCS’s field staff views on the reliability of the time per task estimates and time and attendance data used in the model, we used a semistructured interview guide to interview all 10 officials leading estimate development efforts in the 10 states that had the most CRP and WRP contracts in 2002. The 10 states had over half of NRCS’s total CRP contracts and over 40 percent of its WRP contracts. Using another semistructured interview guide, we interviewed 10 randomly selected NRCS team leaders (out of 44 leaders) who each led a team developing time per task estimates in one area in each of the 10 states. However, we could not assess the quality of NRCS’s reviews of its workload analysis at state offices because NRCS state officials retained insufficient documentation of the reasons for changes in data made during their reviews. To assess the reliability of other data used in NRCS’s model, we reviewed NRCS’s method for developing overhead cost data and salary cost data, which relies on the time and attendance system. Overall, we noted that USDA obtained an unqualified opinion on its financial management activities in fiscal years 2002 and 2003. This opinion covers, in part, the salary cost data that NRCS relies on when reporting its costs. In addition, we reviewed NRCS’s development of overhead cost information that is based on Office of Management and Budget budget object classifications, which include such costs as rent, utilities, equipment, and supplies. 
In addition, we reviewed an NRCS draft report about the quality of NRCS time and attendance data. That 2001 draft report found that about half of NRCS field offices had deficiencies in documenting their use of time, but it did not provide sufficient detail to reveal the precise extent of the problems. Since then, NRCS has implemented corrective actions, according to agency officials, and has verified on a limited basis that improvement has occurred. Our concerns about the reliability of the data are discussed throughout the report. We performed our work between September 2003 and October 2004 in accordance with generally accepted government auditing standards.
GAO Contacts and Staff Acknowledgments
GAO Contacts
Acknowledgments
In addition to the individuals named above, Charles W. Bausell Jr., Nancy Crothers, Anne E. Dilger, Beverly A. Peterson, Lynne M. Dunlay, and Judy K. Pagano made key contributions.
The U.S. Department of Agriculture's (USDA) Natural Resources Conservation Service (NRCS), working with state and local partners, provides landowners with technical assistance under multiple programs to plan and implement conservation measures that protect soil, water, and wildlife. For years, the Congress has sought detailed cost information on this assistance as it examined USDA budget requests. In part because NRCS's financial system was not designed for estimating future budgets, NRCS began developing additional cost data and a computer model for estimating future technical assistance costs in 1998. GAO was asked to (1) review NRCS's technical assistance cost estimates and (2) identify causes of any differences between the estimates and actual costs ultimately reported by NRCS. In 2003, NRCS started testing its computer model by comparing estimates of technical assistance costs for 10 Farm Bill conservation programs with actual costs reported by NRCS. GAO's analysis of these comparisons shows that the model's estimates, program by program, varied considerably from the agency's actual costs. For fiscal year 2003, for example, NRCS's model estimated technical assistance costs for seven Farm Bill programs that were 9 to 50 percent higher than the costs NRCS ultimately incurred. For three other Farm Bill programs, the estimates were 16 to 60 percent lower than the costs the agency incurred. Most of the estimates fell outside NRCS's goal of estimating to within 10 percent of the agency's actual costs. In addition, for the 10 Farm Bill conservation programs combined, NRCS estimated its technical assistance costs at $295 million for fiscal year 2003, about 15 percent more than the $257 million that NRCS incurred. NRCS officials generally agreed with this analysis. GAO identified several reasons for the differences between the cost estimates and the actual costs. 
First, some of NRCS's technical assistance work was delayed, occurring later than NRCS assumed when it estimated its costs. This contributed to some overestimation by the model, according to NRCS officials. Second, NRCS's estimates include costs incurred by NRCS's partners. Such costs are generally not included in the actual costs reported by NRCS. Third, some data NRCS uses in its model are based on inaccurate assumptions. For example, when developing estimates about the time it takes NRCS staff to perform technical assistance tasks for use in the model, NRCS assumes, among other things, that its staff are fully trained and perform technical assistance work without interruption. These assumptions do not reflect actual workplace conditions and lead to underestimates. NRCS officials said they would reconsider these and other assumptions.
Background
Plum Island is a federally owned 840-acre island off the northeastern tip of Long Island, New York. It is about 1.5 miles from Orient Point, New York (see fig. 1), and about 12 miles from New London, Connecticut. Access to Plum Island is by a ferry service operated by a contractor that transports employees from Orient Point and Old Saybrook, Connecticut. The U.S. Army used Plum Island during World War II as a coastal defense artillery installation until it was declared surplus property in 1948. In 1952, the U.S. Army Chemical Corps constructed a biological research laboratory, but it was never used. Then, in response to a foot-and-mouth disease outbreak in Canada, the Congress transferred all of Plum Island to USDA in 1954 for the purpose of researching and diagnosing animal diseases from other countries, including foot-and-mouth disease, which has not been seen in the United States since 1929. Foot-and-mouth disease is the most feared foreign animal disease because it is highly contagious and can have serious effects on the economy. Slaughtering susceptible animals and quarantining both animals and humans in affected areas helps limit the spread of the disease, but it can, nevertheless, have devastating economic consequences, as demonstrated during the 2001 outbreak in the United Kingdom. By the time the disease was eradicated, about 8 months later, the United Kingdom had slaughtered over 4 million animals and sustained losses of over $5 billion in the food and agricultural industries, as well as comparable losses in tourism. Many other types of animal diseases are also studied at the Plum Island Animal Disease Center, such as classical swine fever; rinderpest; and a variety of pox viruses, including goat, camel, and deer pox. Some of the diseases are caused by pathogens that are zoonotic—that is, they can infect, and possibly cause death in, both animals and humans.
Zoonotic pathogens maintained at the Plum Island Animal Disease Center include West Nile virus, Venezuelan equine encephalitis, Rift Valley fever, and vesicular stomatitis. Because of the importance of the livestock industry to the U.S. agricultural sector and economy, protecting livestock from these diseases is an important responsibility. To prevent pathogens from escaping the Plum Island Animal Disease Center and infecting livestock, wildlife, or humans, all research is conducted within a specially designed and sealed biocontainment area within the research facility that adheres to specific safety measures. For example, the biocontainment area has air seals on its doors and operates under negative air pressure so that air passes through a special filter system before leaving the facility. In addition, employees and visitors must change into protective clothing before entering the biocontainment area and shower when going between rooms containing different animal diseases and before leaving the biocontainment area. USDA’s procedures require all people and material leaving the biocontainment area to be decontaminated. The Plum Island Animal Disease Center’s biocontainment area totals approximately 190,000 square feet, and it is unusual because it houses a laboratory facility with 40 rooms for large animals. The three-level laboratory also contains the machinery, such as the air filtration system, necessary for the biocontainment area to function, as well as the pathogen repository. Individuals entering the biocontainment area have access to all three floors. In contrast, the biocontainment areas of other laboratories usually consist of a series of smaller rooms housing smaller laboratory animals, making it easier to control access to the pathogens. As a result of the September 11, 2001, terrorist attacks, the Plum Island Animal Disease Center is now required to abide by new laws and regulations intended to help reduce the possibility of bioterrorism.
These laws and regulations limit access to pathogens to approved individuals only—those whom USDA has identified as having a legitimate need to handle agents or toxins and whose names and identifying information have been submitted to and approved by the U.S. Attorney General. Specifically, the USA Patriot Act of 2001 prohibits restricted persons—such as criminals or individuals from countries that the Department of State has designated as state sponsors of terrorism—from shipping, receiving, transporting, or possessing certain dangerous pathogens. In addition, the Agricultural Bioterrorism Protection Act of 2002 requires that USDA develop an inventory of potentially dangerous pathogens. Furthermore, individuals who possess or use pathogens must have background checks and must be registered with the U.S. Attorney General. Regulations implementing this law became effective on February 11, 2003, and require laboratories to be in compliance by November 12, 2003. USDA also requires employees to have favorably adjudicated background investigations before working unescorted in the biocontainment area. When USDA contracted with Sandia in October 2001, Sandia evaluated the effectiveness of security at the Plum Island Animal Disease Center and four other USDA laboratories. Using a risk management approach, USDA first identified generic lists of assets, risks, and threats for all five laboratories. Sandia then used USDA’s generic threat definitions to assess the security and vulnerabilities at each laboratory. Sandia officials found that Plum Island’s existing security system was inadequate for protecting against the generic threats that USDA had selected and that it required significant improvement. Sandia officials also found that the biocontainment building was not designed to be a highly secure facility.
USDA and Sandia agreed, however, that modifying the facility to withstand an assault would be cost-prohibitive and that, because pathogens occur naturally and are available at other laboratories throughout the world, the risk that a terrorist would try to steal them from Plum Island was not perceived as significant (and their perception has not changed). Consequently, Sandia recommended a limited physical security system designed to deter and detect a security breach and, with assistance from local law enforcement, respond to incidents exceeding the capability of the guard force on the island. DHS assumed formal administration of Plum Island from USDA on June 1, 2003, as provided by the Homeland Security Act of 2002. During a transition period lasting until October 1, 2003, DHS will review USDA’s policies and procedures and determine how best to administer the functions of Plum Island. Until the transition is complete, DHS will administer the facility under the same policies and regulations established by USDA. Ultimately, the two agencies will work together to address national biodefense issues and carry out the mission of the Plum Island Animal Disease Center. While DHS is now formally responsible for security, scientists and support staff of two USDA agencies, the Agricultural Research Service (ARS) and the Animal and Plant Health Inspection Service (APHIS), will continue to carry out the Plum Island Animal Disease Center’s research and diagnostic mission. ARS scientists at Plum Island are responsible for research on foreign livestock diseases, while APHIS scientists are responsible for diagnosing livestock diseases. APHIS conducts diagnostic training sessions several times a year to give veterinary health professionals the opportunity to study the clinical signs of animal diseases found in other countries, such as foot-and-mouth disease.
According to USDA, scientists from other countries are an integral part of the Plum Island Animal Disease Center’s workforce because they are well qualified and well situated to study the diseases researched there, many of which are endemic to their own countries. These scientists are sponsored by USDA and obtain visas that permit them to work for the department. DHS currently uses USDA’s independent contractor to carry out operations and maintenance functions for Plum Island. The services under the contract include, among other activities, operating the ferries, providing security and emergency fire and medical services, providing buildings and grounds services, meeting utility requirements, and performing custodial functions. On August 13, 2002, 71 of these employees went on strike. The contractor at that time, LB&B Associates, was responsible for handling the strike. On January 6, 2003, LB&B Associates’ contract expired. USDA had initially awarded that contract under a small business program when LB&B Associates still qualified as one. Since that initial award, LB&B Associates had grown so that it no longer was eligible to compete for contracts set aside for small businesses. As a result, USDA awarded the new contract to North Fork Services, a joint venture between LB&B Associates and Olgoonik Logistics LLC, a small minority company of Anchorage, Alaska. Under this arrangement, the more experienced LB&B Associates serves as a mentor to North Fork Services, and most of the employees who worked for LB&B Associates continue to work for North Fork Services. DHS officials told us that they would not renew the contract with North Fork Services. DHS stated that the current terms and scope of the contract are insufficient to operate the facility in accordance with its view of the standards and mission of the Plum Island Animal Disease Center. 
USDA Has Taken Strides To Improve Security at Plum Island, but Fundamental Concerns Remain Before the September 2001 terrorist attacks, the Plum Island Animal Disease Center, like many other federal laboratories, was less conscious of security and focused primarily on the safety of its programs and operations. Since then, USDA intensified its focus on security and has taken strides in developing and installing a security system. However, Plum Island remains vulnerable to security breaches because its security arrangements are incomplete and limited. USDA Has Taken Strides To Improve Security at Plum Island Security at Plum Island has improved since the fall of 2001. USDA hired a physical security specialist to oversee its efforts to improve security, including the implementation of Sandia’s recommendations, and to provide direction for the security measures being taken for Plum Island. As of July 2003, completed security upgrades include the following: taking measures to prevent unauthorized access to Plum Island by allowing only sponsored visitors on the ferry and island; identifying those sponsored individuals, and allocating passes, when they board the ferry; and staffing Orient Point, New York, with a security guard as well as installing an access gate that can be opened only with an identification card assigned to Plum Island federal personnel; hiring armed guards to patrol the island and observe personnel and visitors entering and leaving the facility. When the nation is on high terrorist alert (code orange) armed guards are added to monitor access to the biocontainment area and to better secure the island’s perimeter. 
This also allows armed guards to remain in the building while the other armed guards go to the harbor to inspect vehicles unloaded from the ferry and ensure that individuals departing the ferry onto Plum Island have permission to be there; conducting background checks for government staff and contractors working on the island and performing more rigorous checks for individuals with access to the pathogens; installing some video cameras to (1) increase the probability of timely detection of an intruder and (2) monitor the activities of those inside the biocontainment area when they remove pathogens from the storage area, or repository; installing intrusion detection alarms in the administrative building and the biocontainment area; limiting access to pathogens by installing certain access control devices; and improving pathogen control and accountability by completing and maintaining an inventory of pathogens at the facility, submitting names of those with access to pathogens to the U.S. Attorney General, and creating security and incident response plans, as required by law.

Despite Improvements, Security Arrangements at Plum Island Are Incomplete and Have Serious Limitations

Although security at the Plum Island Animal Disease Center has improved over the past few years, fundamental concerns remain.

Plum Island’s Physical Security Is Incomplete and Limited

Plum Island’s physical security system is not yet fully operational. For example, the facility does not yet have in place all the equipment necessary to detect intruders in various places. DHS officials agree that these physical security measures are important and anticipate they will be in place by December 2003. In addition, our Office of Special Investigations identified physical security limitations. For example, we found that lighting is inadequate to support the cameras outside of the research complex and that vehicles are not properly screened. (See app. 
II for other limitations identified by our Office of Special Investigations and observations on how they could be addressed.) Moreover, the physical security measures that USDA chose to implement on Plum Island are largely limited to the biocontainment area, where pathogens are located. Consequently, other important assets remain vulnerable. For example, the continued operation of the Plum Island Animal Disease Center is dependent on its infrastructure, which has limited protection. Protecting the infrastructure is particularly important because the Plum Island Animal Disease Center is the only facility in the United States capable of responding to an outbreak and researching foot-and-mouth disease. Therefore, if the infrastructure were damaged, no other facility could step in and continue this foot-and-mouth disease work. Furthermore, Plum Island is the only facility in North America that has a foot-and-mouth disease vaccine bank. This bank represents years of cooperative research performed by Canada, Mexico, and the United States, yet the room containing it has a window opening covered with only plywood. USDA officials said they intend to improve the physical security of the vaccine bank but have not yet decided on the approach to take. In addition, DHS officials agree that the Plum Island Animal Disease Center is vital to combating bioterrorism, and they are evaluating the physical security on Plum Island.

Access to Pathogens Is Not Adequately Controlled

Access to pathogens at the Plum Island Animal Disease Center is not adequately controlled. For example, as of July 2003, eight scientists from other countries were working in the biocontainment area without completed background investigations. According to FBI officials, allowing anyone who does not have a completed background investigation access to the biocontainment area—in particular, a scientist from another country—represents a significant security risk. 
USDA officials told us these scientists were allowed into the biocontainment area to enable research to continue. Furthermore, they stated that background investigations had been initiated for these individuals, and it was assumed that these scientists were being escorted, which USDA policy permits for those with pending background investigations. However, Plum Island officials told us that due to resource constraints, it has not been possible to continually escort and monitor scientists while they are in the biocontainment area. When we brought this concern to the attention of DHS officials, they told us they are developing a more restrictive policy for allowing scientists from other countries to have access to pathogens. In addition, USDA policy does not require background checks on students who attend the foot-and-mouth disease classes that are regularly held in the biocontainment area. In 2002, USDA held six classes with an average of 32 students per class and anticipates continuing these classes in the future. According to USDA’s policy, individuals may enter the biocontainment area without background checks if an approved individual escorts them. We believe this policy warrants reconsideration for several reasons. Allowing students who do not have background checks into biocontainment for purposes of attending foot-and-mouth disease classes, with or without an approved escort, may not be consistent with the regulations implementing the Agricultural Bioterrorism Preparedness Act. These same regulations do not provide an exception for unapproved students or other visitors who may be handling or have access to pathogens. USDA officials told us that maintaining constant visual contact with even one escorted individual is very difficult because of the size and floor plan of the biocontainment area. Nevertheless, USDA officials told us that they believe escorting students is sufficient to meet the intent of the regulations. 
However, DHS officials said that all students should have completed background checks before entering the biocontainment area and told us they will develop a policy that will ensure that this occurs once the transition period is complete. Although USDA’s regulations specifically allow unapproved individuals into the biocontainment area with an approved escort, we found unescorted maintenance workers in the biocontainment area. The regulations provide for unapproved individuals to conduct routine cleaning, maintenance, repair, and other nonlaboratory functions in the biocontainment area if they are escorted and continually monitored by an approved individual. However, early in our investigation we found that as many as five such individuals were working in the biocontainment area without escorts. When we brought this to the attention of USDA officials, they provided an escort for these individuals. DHS officials added that the operating contractor would soon provide security escorts. Controlling access to pathogens is important because no security device can currently ensure that an insider, such as a scientist, will not steal pathogens from the Plum Island Animal Disease Center or other laboratories. According to the director of the Plum Island Animal Disease Center—while under USDA’s administration—and officials from Sandia, the National Institutes of Health, and the U.S. Army Medical Research Institute of Infectious Diseases, pathogens are more difficult to secure than other materials that could be used as weapons, such as nuclear material. This is because there is no existing mechanism capable of detecting the theft of a microgram of pathogenic material, and a tiny quantity can be multiplied. Thus, a scientist could covertly generate or divert a pathogen during the normal course of work, remove it from the laboratory undetected, and potentially develop it into a weapon for spreading disease. 
This inherent problem leaves all facilities with pathogens vulnerable to serious security breaches. Also, the existence of the foot-and-mouth disease pathogen at the Plum Island Animal Disease Center is a particular concern because an undetected theft, followed by the spread of the disease, would have serious economic consequences for the nation. In addition, the presence of zoonotic diseases at the Plum Island Animal Disease Center is worrisome because of the potential for adverse health effects on humans, and two such pathogens are of particular concern. First, U.S. government research has shown that Venezuelan equine encephalitis virus can be developed into a human biowarfare agent. Second, USDA believes that because of the genetic similarities of two pox strains, it may be possible to manipulate camel pox into an agent as threatening as smallpox. Although USDA created an inventory list of the pathogens at the Plum Island Animal Disease Center, as required by law, such a list cannot provide an accurate count of pathogens because quantities of pathogens change as they replicate. Thus far, Plum Island officials have secured pathogens by restricting access to the island itself and to the biocontainment area where the pathogens are located and by locking the freezers containing the pathogens. But DHS officials have not yet had the opportunity to fully consider actions other laboratories are taking to mitigate the likelihood that pathogens could be stolen. Officials at the U.S. Army Medical Research Institute of Infectious Diseases at Fort Detrick, Maryland, told us they are taking several steps, in addition to physical security measures and inventory control, to better safeguard pathogens against theft. For example, they plan to use trained personnel as roving monitors to ensure that unauthorized laboratory work is not being performed, and they will randomly inspect all personnel exiting laboratories. 
Moreover, they are interviewing scientists periodically and requiring that background checks be updated every 5 years in order to evaluate the continued suitability and reliability of those employees working with pathogens. Although USDA told us background checks were updated every 5 years, according to Plum Island records as of July 2003, 12 current Plum Island employees, some of whom have access to pathogens, had not had their background checks updated in more than 10 years. According to Sandia, other potentially helpful safeguards include creating, implementing, and enforcing strict policies, including those that prohibit researchers from continuing work in the biocontainment area if they do not follow security procedures. DHS officials stated that they have started to work with other laboratories and that measures such as these, while not necessarily a panacea, could help improve the security of pathogens at Plum Island.

Incident Response Capability Is Limited

Plum Island’s incident response capability is limited in four ways. First, the security guards on each shift carry firearms, although Plum Island does not have statutory authority for an armed guard force. USDA operated the guard force on Plum Island without authority for the guards to carry firearms or make arrests. Furthermore, Plum Island officials have not approved a policy that addresses the use of weapons, and, as a result, the guards do not know specifically how they are expected to deal with intruders on the island and when or if they should use their weapons. When we informed DHS officials of these problems, they agreed to resolve them as soon as possible and raised the possibility that the Federal Protective Service could be assigned to guard Plum Island. The Federal Protective Service, now under DHS, has the authority to carry weapons and make arrests. Since DHS has taken responsibility for the island, the Federal Protective Service has visited Plum Island to assess its security requirements. 
Second, according to the observations of our Office of Special Investigations, Plum Island has too few guards to ensure safety and effectiveness. DHS officials agree with this observation and said that they have requested funds to hire additional guards. Third, arrangements for local law enforcement support are also limited. According to Sandia’s recommended security plan, in the event an incident exceeds the response capability of the Plum Island guards, they would first contact Southold town police, the closest and primary responding law enforcement agency. If still more resources were needed, Southold town police would contact Suffolk County police, the secondary responder. Because of liability issues, however, arrangements with local law enforcement have not been finalized even though there have been continuing discussions with local law enforcement. The result is that Plum Island officials cannot predict the extent to which the Southold town police will provide backup during an incident. On the other hand, officials of Suffolk County, which includes both Plum Island and Southold, told us that although it takes longer for them to respond than Southold police, they could respond with an adequate number of officers, if necessary. In addition, they have requested a map of the island and a tour of the biocontainment area to become more knowledgeable about the facility and its surrounding terrain. Suffolk County officials pointed out, however, that, for geographical reasons, Southold remains the primary responder. In this vein, Plum Island officials have never defined an adequate response time, nor have they conducted exercises with local law enforcement officials to determine how effectively Plum Island and local officials can address an incident on the island. 
DHS officials agree that the arrangements for local law enforcement support are limited, and they are trying to overcome this problem as quickly as possible by first resolving the issue surrounding the authority to make arrests and carry weapons. In addition, these officials concur that it is important to develop a better understanding of the response times and capabilities of local law enforcement assistance and to conduct exercises to test the adequacy of arrangements once they are completed. Fourth, according to Sandia officials, the incident response plan for Plum Island is not sufficiently comprehensive. Plum Island’s incident response plan contains certain elements required under law, such as how to respond to an inventory violation or a bomb threat. However, because USDA selected a risk management approach to security, Plum Island officials need an incident response plan that clearly lays out the actions to be taken if events occur that exceed the capability of the facility’s security system. For example, Plum Island officials do not have a road map for actions to be taken in the event of a terrorist attack—who gets notified, in what order, and the responsibilities of staff for responding. This is a critical shortcoming because, according to DHS, the nation faces a significant risk of a terrorist attack. Sandia officials also said that the incident response plan for Plum Island requires significant additional development to properly prepare for the complete range of threats. Moreover, the incident response plan does not identify the security steps that should be taken in the event of an outbreak of foot-and-mouth disease or take into consideration any increased risks to the facility, which could severely impede the nation’s capability to contain an outbreak. 
Finally, according to the FBI and local law enforcement officials, the island’s incident response plan may need to be coordinated with the incident response plans of such nearby facilities as the Millstone nuclear power plant, the Brookhaven National Laboratory, and the laboratories at the State University of New York at Stony Brook because a terrorist attack on any of these facilities could also involve Plum Island. This type of coordination has not yet taken place. DHS officials agree that the incident response plan needs to be more comprehensive and coordinated with national and local law enforcement agencies.

Plum Island’s Security Plan Does Not Address All Risks and Threats

The risk that an adversary might try to steal pathogens is, in our opinion, higher than USDA believed it to be in 2001, when it defined the same risks for all of its laboratories, including Plum Island. USDA considered the risk that an adversary would try to steal pathogens from any of its laboratories to be relatively low compared to materials found at other laboratories, such as nuclear material or pathogens of a higher consequence to the human population. Since its evaluation in 2001, however, the level of risk at Plum Island has increased because of the strike that occurred in August 2002 and the hostility surrounding it. For example, one striker has been convicted of tampering with the island’s water distribution and treatment system as he walked off the job the day the strike began. USDA officials suspect that this individual did not act alone. In addition to this incident, USDA asked the FBI and USDA’s Office of Inspector General to investigate the possibility that a boat engine had been tampered with. USDA also asked the FBI to investigate why backup generators failed to come on when Plum Island lost power for more than 3 hours in December 2002. 
After the backup generators failed to provide power, New York’s ABC news station broadcast an interview with a disguised worker, at that time employed at Plum Island, who discussed his unhappiness with USDA and the contractor and blamed replacement workers for the power outage. In addition, several of the striking workers returned to work for LB&B Associates and are still employed on the island under the new contractor, North Fork Services. In response to the strike, USDA prevented striking workers from accessing Plum Island, and it added guards at Orient Point to assure the security of employees as they were arriving and departing near the union picket line. However, USDA did not reevaluate the level of risk, the assets requiring protection, or its incident response plans in light of the strike and accompanying sabotage. USDA believed that this was not necessary because its security plan anticipated a disgruntled worker at any of its laboratories. We disagree because there is a difference between addressing security problems caused by one employee and addressing the hostilities resulting from the strike, which could include several employees working together. We believe that the implications of a disgruntled work force should be taken into account when reevaluating the extent of risks, threats, and assets requiring increased security. Furthermore, Sandia had originally recommended that USDA review the defined threats with the intelligence community and local law enforcement officials to ensure that threats particular to Plum Island and its vicinity were taken into consideration, but this was never done. FBI and Suffolk County officials told us that they consider this step to be very important because if there were such threats, federal and local officials may be aware of them and the risks they pose to the Plum Island Animal Disease Center. 
In addition, if local law enforcement entities were involved in planning Plum Island’s security, they would be in a better position to respond to incidents on the island. DHS officials agree that rehiring workers who walked off the job could be problematic but told us they are under pressure from the local chapter of the union and the community to rehire those who lost their jobs as a result of the strike. DHS officials also said they recognize the importance of working with local law enforcement and the intelligence community to better define the threats and associated risks for Plum Island.

USDA Concluded Its Contractor’s Performance Declined during the Strike but Operations Continued and Overall Performance Was Superior

Regarding the contractor’s performance, despite a decline from the previous rating period, USDA rated LB&B Associates’ performance as superior for the rating period during which the strike occurred. When the strike occurred, LB&B Associates, with the assistance of USDA employees, maintained operations at Plum Island. For example, LB&B Associates implemented a strike contingency plan, brought in qualified individuals from its other work sites, and hired subcontractors with the required licenses and certifications to operate certain Plum Island facilities and its boats. Also, as a result of the strike, LB&B Associates exceeded its estimated budget by about $511,000, or approximately 5 percent, for fiscal year 2002 and the first quarter of fiscal year 2003. USDA was aware of and approved the cost increases. Further information about LB&B Associates’ performance, employee qualifications, and costs is contained in appendix III.

Conclusions

Despite improvements, security arrangements at Plum Island are not yet sufficient. Further actions are needed to provide reasonable assurance that pathogens cannot be removed from the facility and exploited for use in bioterrorism. 
Until DHS fully implements the physical security measures and addresses those vulnerabilities identified by our Office of Special Investigations, Plum Island’s security system will not provide physical security commensurate with the importance of the facility. Additionally, the Plum Island Animal Disease Center will remain more vulnerable than it needs to be if the physical infrastructure that supports it is not afforded better protection. Similarly, it is important to better secure the foot-and-mouth disease vaccine bank to ensure its availability for combating an outbreak. Also, the lack of comprehensive policies and procedures for limiting access to pathogens unnecessarily elevates the risk of pathogen theft. Moreover, because physical security measures alone are not adequate to secure pathogens, all laboratories containing these materials face the challenge of developing other approaches to mitigate the risk of theft. By consulting with other laboratories to discover methods they are using to mitigate the risk to pathogens, Plum Island officials can learn more about safeguards being employed elsewhere. Furthermore, Plum Island officials cannot effectively respond to security breaches until DHS resolves issues that impede Plum Island’s response capability, such as the guard force’s lack of authority to make arrests, which makes it difficult for the guards and local law enforcement agencies to address criminal situations on the island. Finally, because we believe the level of risk at Plum Island is higher than USDA originally determined, and because USDA did not validate threats with intelligence agencies or local law enforcement officials, DHS cannot be assured that Plum Island’s security, including its physical security system and response plans, is sufficient to address the full range of events that could occur on the island. 
Recommendations for Executive Action

To complete and enhance Plum Island’s security arrangements, we recommend that the Secretary of Homeland Security, in consultation with the Secretary of Agriculture, do the following: Correct physical security deficiencies by (1) fully implementing the physical security measures, (2) addressing the specific security shortcomings identified by our Office of Special Investigations, (3) better securing certain features of the physical infrastructure that supports the continued operation of the Plum Island Animal Disease Center, and (4) better securing the foot-and-mouth disease vaccine bank. Limit access to pathogens by further developing and enforcing specific procedures, including internal control checks, to ensure (1) that all individuals involved in laboratory activities in the biocontainment area—including students and regardless of citizenship—have been approved, in accordance with the law; (2) that background checks of these individuals are updated regularly; and (3) that cleaning, maintenance, and repair staff entering the biocontainment area are escorted at all times by individuals with completed background checks. Consult with other laboratories to identify ways to mitigate the inherent difficulty of securing pathogens. 
Enhance incident response capability by (1) resolving the issue of the guards’ authority to carry firearms and make arrests; (2) developing and implementing a policy on how guards should deal with intruders and use weapons; (3) increasing the size of the guard force; (4) completing an agreement with local law enforcement agencies to ensure backup assistance when needed; (5) defining an adequate response time for law enforcement to respond to incidents; (6) developing an incident response plan that includes precise detail about what to do in the event an incident occurs that exceeds the capability of the security system, such as a terrorist attack; and (7) conducting exercises with local law enforcement to test the efficiency and effectiveness of Plum Island’s response capability. Reconsider the security risks at Plum Island, taking into account recent acts of disgruntled employees. Consult with appropriate state and local law enforcement and intelligence agencies to revisit the threats specific to the Plum Island Animal Disease Center. Revise, as necessary, security and incident response plans to reflect any redefined risks, threats, and assets.

Agency Comments

We provided DHS and USDA with a draft of this report for their review and comment. Both agencies provided written and clarifying oral comments. The agencies also provided technical comments, which we incorporated into the report as appropriate. Overall, DHS agreed with the report and stated that it has started to implement our recommendations, and USDA stated that the report was very useful but also raised several concerns. In its written comments (see app. IV), DHS agreed that fundamental concerns leave the facility vulnerable to security breaches and stated that the report is factually accurate. DHS also commented that it accepts and supports our recommendations. 
In addition, DHS stated that since it assumed administrative responsibility for Plum Island on June 1, 2003, it has taken the following actions, among others, to address the recommendations in this report: DHS is working with USDA to develop corrective actions to address the physical security deficiencies identified in our report. DHS is working with USDA to develop an access control policy for all personnel who are required to enter the biocontainment area. DHS is working with other federal agencies to develop security policies and procedures to limit access to pathogens. DHS is working with the Federal Protective Service to enhance security at the facility and bring arrest and detention authority to the island. In addition, DHS stated that funds have been requested to increase the guard force. DHS is working with local law enforcement agencies to coordinate incident response plans, mutual aid agreement requirements, and joint exercises to test security response capabilities. DHS is reviewing the island’s entire security plan and will revise the threat assessment as necessary. DHS stated that it expects to complete this assessment in early 2004. In its written comments (see app. V), USDA addressed several aspects of our report. These specific comments and our responses follow. USDA suggested that the report should make judgments about the need for enhanced security against a risk assessment-based approach that considers both the probability and the consequences of specific types of attacks. However, as we report, DHS is now responsible for performing such an assessment, and DHS stated that it has undertaken a review of USDA’s threat statement, which it will complete early in 2004. Our objective was to evaluate the status of security on Plum Island. That evaluation included, among other steps, a review of USDA’s risk-based security plan for Plum Island and its implementation. Our report details substantive flaws in both the planning and the execution of that plan. 
USDA also commented that the report did not recognize that USDA had a contract to improve security at Plum Island prior to September 11, 2001. We added to the report that USDA contracted with the U.S. Army Corps of Engineers in 2000 to improve security at Plum Island, but noted that few of the Corps’ recommendations had been implemented. Also, USDA officials told us that in light of September 11, 2001, and the subsequent dissemination of anthrax through the postal system, they made a concerted effort to improve security at USDA’s laboratories. The officials added that Sandia was hired to provide USDA with a consistent approach to evaluating security at the department’s major laboratories. Sandia officials told us that they did not agree with the approach taken by the Corps, and they concluded that Plum Island’s existing security system was substantially inadequate for protecting against the threats that USDA defined as relevant. USDA indicated that it took various actions to safeguard pathogens in response to the strike. USDA stated that it increased and armed the guards on Plum Island; added guards at Orient Point, Long Island, where the strikers were picketing; and excluded the strikers from Plum Island facilities. We agree that USDA responded with immediate measures and have revised the report to reflect these steps. However, we believe that USDA’s responses to the strike were insufficient. Although USDA increased the number of guards at Orient Point, this was a temporary measure primarily put in place to ensure the safety of the employees as they passed the union picket line. Also, Plum Island officials told us that the number of guards on Plum Island itself did not change as a result of the strike and that these guards had been armed since 2001. 
More importantly, USDA’s comments do not recognize that there is a difference between addressing security problems caused by one employee and addressing the security problems resulting from the strike, which could include several employees collaborating to cause problems. We believe that the implications of having a disgruntled work force should be taken into account when reevaluating the extent of risks, threats, and assets requiring increased security. USDA stated that it appropriately used armed guards on Plum Island and was in communication with local law enforcement. While we agree that armed guards are necessary for security on Plum Island, our concern is that the guard force did not have authority from USDA to carry firearms and make arrests. Furthermore, USDA never developed a policy instructing its guards when and how they could use force, including the firearms they were carrying. Plum Island officials said they were unable to resolve these important matters with USDA headquarters officials, including the Office of General Counsel. Finally, we noted in the report that while Plum Island officials have communicated with local law enforcement, no agreement was reached to assist Plum Island guards in the event a criminal act occurred on the island. DHS stated that it is working to resolve these issues. USDA stated that it is an accepted practice for a person with an appropriate background investigation to escort those who do not yet have a clearance. USDA also acknowledged that it had problems implementing its escort procedures at Plum Island but now believes its escort procedures are reliable. We agree that the practice of escorting is used in other laboratories that contain pathogens. However, Plum Island officials and scientists repeatedly told us that this procedure is not practical at Plum Island because of staffing considerations. 
For example, they explained that the escorts were Plum Island employees who had other duties, which compelled them to leave those they were escorting for periods of time. Furthermore, we believe that internal control checks should be established to ensure implementation of escort procedures, and we have added this to our recommendations. DHS commented that more will be done to address this issue—it is planning to develop, in concert with USDA, a limited use policy to identify access control requirements for all personnel who are required to enter the biocontainment area. USDA said that several of the employees we identified had not had their background checks updated in the last 5 years, but that some of those we identified had. We reported based on the actual records of background checks maintained at the Plum Island Animal Disease Center. We also recognize that there may be differences between the records maintained on the island and other USDA records, and that the background checks of several of these individuals may have been updated since the time of our review. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies of this report to the Secretaries of Homeland Security and Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please call me or Charles M. Adams at (202) 512-3841. Key contributors to this report are listed in appendix VI.

Scope and Methodology

To determine the extent to which USDA has addressed security for Plum Island, we visited the facility several times to examine current physical security measures and to review plans for further security actions. 
In addition, two security experts from our Office of Special Investigations toured the facility to identify possible vulnerabilities and actions that could be taken to reduce them. We also reviewed numerous security documents, such as Sandia’s assessment of Plum Island security; Plum Island’s draft security and response plans; draft memorandums of understanding with local entities; physical security implementation plans; and policies and procedures for guards, employees, visitors, students, and others with access to pathogens. In addition, we worked closely with Sandia officials to understand how they applied a risk management security approach to Plum Island. We also interviewed numerous officials from Plum Island, including the physical security specialist, scientists, the center director, and others responsible for security changes under both the Agricultural Research Service and the Animal and Plant Health Inspection Service; officials of USDA’s Offices of Homeland Security, Procurement and Property Management, and General Counsel; and officials of the Department of Homeland Security, which assumed the administration of Plum Island. To gain a better understanding of possible threats to Plum Island, we spoke with officials from the Federal Bureau of Investigation, Defense Intelligence Agency, Central Intelligence Agency, Suffolk County police and fire departments, and USDA’s Office of Inspector General. To understand the cooperation between local governments and Plum Island that might be needed if an incident were to occur on the island, we interviewed government and law enforcement officials from Suffolk County, the town of Southold, and the village of Greenport. Finally, we toured the laboratories of, and interviewed officials from, the National Institutes of Health and the U.S. Army Medical Research Institute of Infectious Diseases to understand how they are handling security challenges since the terrorist attacks of 2001.
To determine Plum Island’s compliance with new laws and regulations, we reviewed the USA Patriot Act of 2001, the Agricultural Bioterrorism Protection Act of 2002 and its regulations that went into effect as an interim final rule on February 11, 2003, as well as USDA’s policies and procedures for security at biosafety level 3 facilities. We also considered the Office of Management and Budget’s Circular A-123, Management Accountability and Control, and the standards in our Internal Control: Standards for Internal Control in the Federal Government. To determine how well LB&B Associates performed from the time the strike began on August 13, 2002, to January 5, 2003, we (1) reviewed LB&B Associates’ contract with USDA and identified LB&B Associates’ performance requirements; (2) interviewed officials of USDA, LB&B Associates, and the International Union of Operating Engineers to get their perspective on LB&B Associates’ performance; (3) reviewed USDA’s ratings of LB&B Associates’ performance since 1999 and, in particular, the Award Fee Determination Board’s report on LB&B Associates’ performance during the period the strike took place; (4) reviewed the qualifications of LB&B Associates employees, such as the boat operators and water distribution and treatment system operators, all of whom are required to meet certain qualifications for performing their duties; (5) analyzed 3 years of contract cost data provided by LB&B Associates to learn which items increased as a result of the strike; and (6) validated the contract cost data by spot-checking it against the bills LB&B Associates submitted to USDA. While we took these steps to determine how well LB&B Associates performed, we did not independently rate LB&B Associates’ performance. In addition, we interviewed officials involved in investigating strike-related incidents, including officials of the Federal Bureau of Investigation and USDA’s Office of Inspector General.
Our work was conducted in accordance with generally accepted government auditing standards from January through August 2003.

Additional Observations on Plum Island’s Security System by GAO’s Office of Special Investigations

The security force reports directly to the Administrative Contract Officer and not to the Security Director—it is important for the security force to report directly to the Security Director of Plum Island to ensure that security-related issues are handled promptly.

There are no name checks or record checks given to contractors and visitors going into the biocontainment area. Contractors and visitors entering the biocontainment area could be checked for criminal charges (through the National Crime Information Center) before they are granted access.

The area outside of the biocontainment and administrative building is surveilled by stationary closed-circuit television cameras, which are insufficient. Installing pan, tilt, and zoom closed-circuit television cameras in certain areas would enhance surveillance capabilities.

The island is easily accessible to the general public by boat, and there are limited “no trespassing” signs present on the island to advise the public that it is a government facility—more “no trespassing” signs in those areas of the island that are easily accessible to the public by boat would address this condition.

In the event of a fire, Plum Island is not always able to respond appropriately because the fire brigade has limited hours of operation. The security force could be cross-trained for fire rescues and therefore provide 24-hour coverage.

The building used for overnight accommodations lacks panic alarms for emergency response. Panic alarms could be installed in the building and, when visitors are present, security guards could drive by on a regular basis.

Control for keys and master keys of the facility is deficient. The security department could be assigned the responsibility for all keys and master keys.
A key log could be created to better track possession of keys.

LB&B Associates’ Performance, Employee Qualifications, and Costs

USDA concluded, in an evaluation of LB&B Associates’ performance, which included the time period involving the strike, that LB&B Associates’ overall performance was superior, although its performance had declined compared to prior rating periods. When the strike occurred, LB&B Associates, with the assistance of USDA employees, continued to perform and maintained operations at Plum Island. LB&B Associates implemented a strike contingency plan, brought in qualified individuals from its other work sites, and hired subcontractors with the required licenses and certifications to operate certain Plum Island facilities and its boats. Also, as a result of the strike, LB&B Associates exceeded its estimated budget by about $511,000, or approximately 5 percent, for fiscal year 2002 and the first quarter of fiscal year 2003. USDA was aware of and approved the cost increases.

Performance

Although LB&B Associates’ performance declined during the strike relative to previous rating periods, overall, LB&B Associates performed at a superior level during the evaluation period that included several months when workers were on strike, maintaining—and in some cases even improving—operations critical to the functioning of the island, according to Plum Island officials. Plum Island’s Award Fee Determination Board regularly rated LB&B Associates’ performance using a system described in its contract to calculate a composite performance score. According to the board, LB&B Associates’ performance was outstanding—the highest level—for more than 2 years, until the rating period in which the strike began. The board faulted LB&B Associates in several rating categories, resulting in a decline in its performance rating.
For example, according to the board, LB&B Associates’ strike contingency plan, which describes how essential operations would be continued in the event of a strike, was outdated. As a result, implementation of the plan was slowed because it took up to 48 hours before all of its temporary workers arrived on the island. Moreover, some subcontracts cost more than anticipated. According to the board, LB&B Associates overcame initial problems in implementing its contingency plan and, overall, performed at the superior level. For example, temporary workers and subcontractors hired by LB&B Associates quickly repaired the water system that had been sabotaged on the first day of the strike. Furthermore, according to the board, some activities improved after the onset of the strike, including the maintenance of steam pipes, an important component of the process used to decontaminate laboratory waste contaminated with pathogens. Also, boat maintenance and cafeteria services—both of which, according to the board, had been problematic before the strike—improved after replacement workers were hired. Figure 2 shows the composite scores the board gave LB&B Associates from fiscal year 2000 through the first quarter of fiscal year 2003, which includes the time during which the strike occurred. More details about how the board evaluated LB&B Associates’ performance are contained in table 1.

Employee Qualifications

To maintain operations at Plum Island after the strike began, LB&B Associates brought in temporary replacements from some of its other contract sites, hired subcontractors, and subsequently hired permanent replacement workers, as described in the strike contingency plan. We confirmed that workers in certain positions, including boat operators and operators for the wastewater treatment system, were licensed as prescribed by LB&B Associates’ contract with USDA.
In addition, many of the replacement workers appear to have significant and relevant work experience for the positions for which they were hired. Although LB&B Associates and USDA staff worked together to maintain vital functions, operations were affected at times by the strike because of the reduced workforce and the loss of some workers with specific skills and qualifications. For example, the ferries that take workers to and from the island operated on a reduced schedule until all three boat masters who had walked out were replaced by individuals with the necessary Coast Guard license. Also, some USDA officials stepped in to fulfill duties that were normally performed by qualified contract staff, such as monitoring the air filters in the laboratory, until qualified replacements were hired. By July 2003, most positions left vacant by the strike were filled, most of them by permanent replacement workers and 16 by striking workers who returned to work on the island.

Costs Attributable to the Strike

With USDA’s approval, LB&B Associates exceeded its estimated budget by about $511,000, or approximately 5 percent, during the 15-month period covering fiscal year 2002 and the first quarter of fiscal year 2003, the period during which the strike began. USDA allowed the additional expenditures, which occurred in the last 2 months of fiscal year 2002 and the first 3 months of fiscal year 2003, because it recognized that the strike would result in higher expenses and it found LB&B Associates’ estimate for exceeding the budget to be acceptable under the circumstances. As required by the Federal Acquisition Regulation, LB&B Associates notified USDA that it expected to exceed its budget as a result of the strike. Figure 3 shows the total costs LB&B Associates charged to USDA from October 1, 2001, through January 5, 2003; the graph also incorporates costs billed to USDA by North Fork Services from January 6 through May 31, 2003, illustrating the continued fluctuation in contract costs.
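The 5 percent figure above can be sanity-checked with back-of-the-envelope arithmetic. A brief sketch follows; note that the implied baseline budget is our own inference from the two reported numbers, not a figure stated in the report:

```python
# Rough consistency check of the reported cost overrun (values from the report).
# The implied 15-month baseline budget is an inference, not a reported figure.
overrun = 511_000      # dollars over the estimated budget
overrun_pct = 0.05     # "approximately 5 percent"

implied_budget = overrun / overrun_pct
print(f"Implied 15-month budget: ${implied_budget:,.0f}")  # about $10.2 million

# The overrun should be about 5 percent of that implied budget.
assert abs(overrun / implied_budget - overrun_pct) < 1e-9
```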
According to LB&B Associates’ data, there were fluctuations in Plum Island’s costs, as shown in figure 3. Also, as a result of the strike, additional costs were incurred in the following areas from August 1, 2002, through January 5, 2003, unless otherwise noted: labor (salary and benefits), subcontracts, cafeteria, and travel (including lodging and transportation).

Labor: The cost of labor peaked at $428,161 in August 2002, a 16 percent increase over the average monthly cost of $370,118 for the previous 10 months. Monthly labor costs then gradually decreased until November, when the cost of labor was about 1.6 percent more than the average monthly cost. Labor costs increased because most of the temporary replacements were management-level employees from other LB&B Associates contract sites, who earned more than the employees they replaced. According to its documents, LB&B Associates used management-level employees because union members from other localities usually honor a picket line and would not temporarily replace union strikers. As new permanent employees were hired, the cost of labor gradually decreased.

Subcontracts: Subcontracts related to the strike, such as for providing security guards at the picket line, added about $523,000, or 77 percent of the total subcontract costs billed to USDA by LB&B Associates.

Cafeteria: Cafeteria expenses increased by about $12,000, or 51 percent of the total cafeteria expenses, because the cafeteria provided two meals per day for the temporary replacements, who spent more time on the island to ensure continued operations than employees had before the strike began.

Travel: Travel expenses attributed to the strike, such as transporting and housing the temporary replacement workers, totaled more than $125,000, constituting 98 percent of the total travel costs billed to USDA during that time period.

Comments from the Department of Homeland Security

Comments from the U.S.
Department of Agriculture

GAO Contact and Staff Acknowledgments

In addition to the individual named above, Aldo Benejam, Nancy Crothers, Mary Denigan-Macauley, Jonathan Gill, Thomas Farrell, Wyatt R. Hundrup, and Ramon Rodriguez made key contributions to this report.
Scientists at the Plum Island Animal Disease Center are responsible for protecting the nation against animal diseases that could be accidentally or deliberately introduced into the country. Questions about the security of Plum Island arose after the 2001 terrorist attacks and when employees of the contractor hired to operate and maintain the Plum Island facilities went on strike in August 2002. GAO reviewed (1) the adequacy of security at Plum Island and (2) how well the contractor performed during the strike. The Department of Homeland Security (DHS) assumed the administration of Plum Island from the Department of Agriculture (USDA) on June 1, 2003. While DHS is now responsible for Plum Island, USDA is continuing its research and diagnostic programs. Security at the Plum Island Animal Disease Center has improved, but fundamental concerns leave the facility vulnerable to security breaches. First, Plum Island's physical security arrangements are incomplete and limited. Second, Plum Island officials have been assuming unnecessary risks by not adequately controlling access to areas where pathogens are located. Controlling access is particularly important because pathogens are inherently difficult to secure at any facility. Although this risk may always exist, DHS could consult with other laboratories working with pathogens to learn different approaches to mitigate this risk. Third, Plum Island's security response has limitations. For example, the guard force has been armed but has not had the authority from USDA to carry firearms or make arrests. Moreover, Plum Island's incident response plan does not consider the possibility of a terrorist attack. Fourth, the risk that an adversary may try to steal pathogens is, in our opinion, higher at the Plum Island Animal Disease Center than USDA originally determined because of hostilities surrounding the strike. 
Also, when USDA developed its security plan for Plum Island, it did not review its defined threats with the intelligence community and local law enforcement officials to learn of possible threats—and their associated risks—relevant to the Plum Island vicinity. Although these reviews did not occur, USDA subsequently arranged to receive current intelligence information. Despite a decline in performance from the previous rating period, USDA rated the contractor's performance as superior for the rating period during which the strike occurred.
Background

The Arms Export Control Act authorizes the sale of defense articles and services to eligible foreign customers under the FMS program. Under the program, the purchased items must be used and secured properly by the customer and cannot be sold to third parties. Also, the FMS program must be administered at no cost to the U.S. government. To recover administration costs, DOD applies a surcharge to each FMS agreement that is a percentage of the value of each sale. Multiple organizations have a role in the FMS program, including DSCA and the military services, State, and CBP. DOD’s responsibilities, which are described in the Security Assistance Management Manual, largely focus on the overall administration of the program and FMS agreements. DSCA carries out key functions, such as managing the FMS administrative surcharge account and supervising end use monitoring of FMS items, and the military services carry out the day-to-day implementation of FMS agreements. State regulates the export of defense articles, including the implementation of the FMS program, through its International Traffic in Arms Regulations (ITAR), and CBP enforces export control laws and regulations at U.S. ports and monitors the dollar value and quantity of defense articles exported under each FMS agreement. Typically, the FMS process begins when a foreign government submits a letter of request to State or DOD to purchase defense articles under the FMS program. The request is then forwarded to the military service responsible for the particular defense article, which then develops a letter of offer and acceptance, or a sales agreement between the United States and the foreign government. State and DOD officials approve the sale, and Congress is notified if the proposed sale meets certain dollar thresholds and other requirements. The military service sends the agreement to the foreign government for its acceptance.
After the foreign government accepts the agreement, case managers within the military services can begin carrying out agreement actions such as contracting to procure defense articles, issuing requisition orders, providing program management, transporting defense articles if required, and administering financial transactions. A single FMS sales agreement may result in hundreds or thousands of individual shipments to a foreign government. In most cases, the military service provides the defense article to the foreign country’s freight forwarder, the authorized agent for the foreign customer. However, some countries use DOD’s defense transportation system to ship defense articles. The ITAR requires that freight forwarders register with State, which must receive a letter from the foreign government designating the registered freight forwarder as its authorized agent. CBP port officials rely on a list provided by State to confirm that the freight forwarder for a shipment is the registered freight forwarder for the foreign government. CBP port officials also verify export documentation and subtract the value of each shipment from the total value of exportable goods for each FMS agreement. If the items shipped are incorrect or damaged upon receipt, the foreign government submits a supply discrepancy report to the military service. Every FMS sales agreement has certain security requirements, including end-use monitoring requirements. To provide reasonable assurance that the foreign customer complies with these requirements, DSCA established the Golden Sentry end-use monitoring program in 2001. As part of this program, security assistance officers stationed in a foreign country monitor the use and security of defense articles purchased through the FMS program, and the officers conduct additional checks on certain sensitive defense articles such as Stinger missiles. 
DSCA officials conduct regional forums and familiarization visits where the foreign country and DOD representatives work together to mutually develop effective end-use monitoring compliance plans. In addition, DSCA officials conduct country visits to review and assess compliance with the requirements of the FMS agreement and perform investigative visits when possible end-use violations occur. We have previously reported on weaknesses in the FMS program, including lack of accountability for shipments to some foreign countries, lack of information on end use monitoring, and insufficient information on the costs to administer the program. Table 1 outlines our previous findings.

Weaknesses in Shipment Verification Process Continue, and Expanded Monitoring Program Lacks Guidance for Country Visits

Agencies responsible for the FMS program have not taken the actions needed to correct previously identified weaknesses in the FMS shipment verification process, and DOD’s expanded end-use monitoring program lacks written guidance for selecting countries for compliance visits using a risk-based approach. First, agencies are not properly verifying FMS shipment documentation, in part because State has not finalized revisions to the ITAR to establish DOD’s role in the verification process. Second, DOD lacks mechanisms to fully ensure foreign governments receive their FMS shipments—in part because DOD does not track most FMS shipments and continues to rely on FMS customers to notify the department when a shipment has not been received. Finally, while DOD has visited an average of four countries each year since 2003 to assess compliance with FMS agreement requirements, it does not have written guidance using a risk-based approach to prioritize the countries it visits to monitor compliance and has not yet visited several countries with a high number of uninventoried defense articles.
Agencies Lack Adequate Export Information to Verify FMS Shipments

To control the export of FMS defense articles, freight forwarders are required to provide the following information before CBP allows an FMS shipment to leave a U.S. port: the FMS sales agreement, State’s export authorization form (DSP-94), and evidence that shipment data was entered in the government’s Automated Export System (see table 2). CBP port officials review this information to confirm that the items are authorized under the FMS agreement and that the agreement has an exportable value remaining. The officials also subtract the shipment’s value from the total value of the defense articles permitted under the FMS agreement. Although we recommended in 2003 that State revise the ITAR to clearly establish control and responsibility for all FMS shipments, it has yet to do so. Shortly after our report, representatives from State, DSCA, and CBP met to draft proposed ITAR revisions that would require DOD to verify that the correct value and type of defense article is listed on the export documentation. According to agency officials involved in the process, agency representatives went through multiple iterations of the draft ITAR revisions over a period of several years. However, these revisions have been in State’s final clearance stages since May 2008. In the meantime, weaknesses we previously identified in the verification process continue to go unaddressed. Anticipating the ITAR updates, in 2004 DOD issued guidance in its Security Assistance Management Manual instructing the military services to verify that the sales value listed on the DSP-94 by the freight forwarders includes only the value of the exportable defense articles listed in the FMS agreement. However, because the guidance only applies to DOD and not the freight forwarders, we found cases where freight forwarders did not submit DSP-94 forms for DOD review.
For example, in 10 of our 16 case studies, freight forwarders—who are not bound by DOD’s guidance—did not submit DSP-94s to the military services for verification. In addition, in the six cases that were verified by the military services, one listed the full FMS agreement value on the DSP-94, including administration charges, rather than only the value of the exportable defense articles, as DOD policy requires. Further, officials from one military service were uncertain who within their security assistance command was supposed to verify the documents and how they were supposed to be verified.

CBP port officials lack key information in export documentation that is needed to properly record the value of defense articles shipped under an FMS sales agreement and ensure that the value of the shipments made is not more than the exportable value of the agreement. According to CBP guidance, each FMS agreement should have one port that records the value of the exports made against an agreement. However, freight forwarders are not required to identify the primary port on the DSP-94 they provide to CBP at the time of the shipment. For example, freight forwarders listed multiple ports on this form for several of the agreements we reviewed. In one case, the DSP-94 listed seven ports. While information from the Automated Export System is required to accompany all FMS shipments, we found that this system only lists the port of export—not the primary port. CBP port officials have told us that they have no way of knowing if an FMS agreement or a DSP-94 is filed at more than one port because CBP does not have a method to prevent these documents from being filed at multiple ports. Without accurate and complete information on the primary port, officials at other ports cannot notify the primary port regarding shipments that are made through their ports so that the value of these exports can be properly recorded.
In some cases, port officials were reducing the exportable value of FMS agreements at ports that were not the primary port. For example, two ports contained duplicate entries for 67 FMS agreements, and, for many of these agreements, both ports were independently recording the value of shipments made against the agreement. In one case, the records for one port showed that the agreement value was exhausted, while the records for the second port still showed an exportable value of $2.9 million. Although CBP agreed to develop guidelines for FMS shipment verification and reduction of allowable export value after a shipment in response to our 2003 report recommendations, the U.S. Customs Control Handbook for Department of State Licenses has not been updated since 2002, and it does not provide instructions to CBP port officials on tracking shipment and agreement values. CBP issued a policy memorandum in 2004 directing port personnel on how to record shipment values for FMS sales agreements and coordinate with other ports to designate one primary port to track and record shipments against each FMS agreement, but CBP port officials we met with in July 2008 did not have the memorandum, and it was not posted on CBP’s intranet, a resource that CBP began to use after 2004 to distribute policy information among the ports.

In addition, CBP port officials lack current information on closed FMS agreements, which they need to determine if shipments are authorized. (The list of closed agreements we reviewed included 467 cases that were identified as closed for the 2003 GAO report and 2,212 agreements that were closed from October 2007 to September 2008.) We determined that multiple shipments were made against six of these agreements, including agreements for the sale of technical defense publications, avionics components, and missile components. According to DOD officials, one of these agreements was closed before any orders were placed against it; however, we found that 21 shipments were made against this agreement by a freight forwarder.
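The duplicate-record problem described above can be illustrated with a minimal sketch. The dollar amounts here are hypothetical, patterned loosely on the case in the report; this is not CBP's actual record-keeping system:

```python
# Minimal illustration of duplicate agreement records at two ports.
# Amounts are hypothetical; this is not CBP's actual record-keeping system.

AGREEMENT_VALUE = 10_000_000  # exportable value of one hypothetical FMS agreement

# Each port independently keeps its own running balance for the agreement.
port_a_remaining = AGREEMENT_VALUE
port_b_remaining = AGREEMENT_VALUE

shipments_through_port_a = [6_000_000, 4_000_000]  # only Port A records these
shipments_through_port_b = [7_100_000]             # only Port B records this

for value in shipments_through_port_a:
    port_a_remaining -= value
for value in shipments_through_port_b:
    port_b_remaining -= value

# Port A's records show the agreement exhausted; Port B's still show value left.
print(port_a_remaining)  # 0
print(port_b_remaining)  # 2900000

# The true remaining value is negative: more was shipped than authorized.
true_remaining = AGREEMENT_VALUE - sum(
    shipments_through_port_a + shipments_through_port_b
)
print(true_remaining)    # -7100000
```

With a single designated primary port receiving notifications of shipments made at other ports, these three figures would collapse into one authoritative balance.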
In October 2008, DSCA officials provided a list of recently closed FMS agreements to CBP, and they plan to meet with officials to discuss how to use the information. However, this list only covers agreements that were closed in fiscal year 2008, which could allow shipments to continue to be made against agreements that were closed prior to that time.

DOD Lacks Mechanisms to Fully Ensure That the Correct FMS Shipments Reach the Right Foreign Customers

According to DOD guidance, DOD considers its responsibility for the shipment of FMS articles complete when the title transfers from DOD to the foreign government, which typically occurs when the item is picked up by the freight forwarder at a DOD supply center or other point of origin. DOD does not usually notify the foreign customer when a defense article has been shipped. If a foreign customer has not received an FMS shipment, or a shipment is damaged upon receipt (problems that may not be identified until months after the article was shipped), the customer files a supply discrepancy report. Each FMS sales agreement may have thousands of shipments associated with it, and discrepancy reports could be filed against each shipment. For example, in our 16 case studies, 188 supply discrepancy reports were filed. Thirty-one of these reports were filed because an incorrect item was received. In such cases, DOD officials may tell the foreign government to dispose of the item and give the foreign government a credit against its account. However, if the report is not submitted within one year, DOD is not required to take action on the discrepancy. If a country chooses not to submit a report, DOD has no procedures in place to identify a lost or diverted FMS shipment as it does not generally track such shipments once they leave the DOD supply center.
According to DOD officials, DOD investigates the whereabouts of defense articles that foreign governments claim they did not receive, or received but never ordered, when the foreign customer notifies DOD. Without notification from the customer, DOD may not know when defense articles are mistakenly transferred to a foreign customer. This occurred in 2006 when DOD mistakenly transferred forward section assemblies for the Minuteman III intercontinental ballistic missile to Taiwan instead of the helicopter batteries the country had requested through the FMS program. DOD only became aware of an error in 2007, when Taiwanese officials notified U.S. officials that they did not receive the requested batteries. At the time, DOD did not fully investigate the discrepancy and also did not realize that it had sent missile components to Taiwan until 2008—more than one year after being notified of the error.

In 2008, the Defense Logistics Agency (DLA)—which manages the inventory for weapon system spare parts and other consumable items in the DOD supply system—took action to ensure that defense articles for shipment are properly labeled in an effort to minimize the risk that an incorrect article is provided to a foreign customer. According to a DLA headquarters official, DLA found it had a high inaccuracy rate for its supply center shipments. DLA inspectors found, for example, that if a shipping label got caught in the printer, the rest of the shipments on the line may have incorrect shipping labels because the personnel on the line may unknowingly skip the jammed label and affix subsequent labels on the wrong packages. DLA’s two largest FMS supply depots have recently put in place a double inspection process in which inspectors at the depots ensure that the shipping documentation matches the items in the package.
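As a toy illustration of why a second label-versus-contents check catches the jammed-label failure mode described above (the identifiers and function name here are invented for illustration, not DLA's actual process):

```python
# Hypothetical sketch of a label-versus-contents double check, illustrating the
# jammed-label failure mode described in the text. Identifiers are invented.
from typing import Optional


def label_matches_contents(label: Optional[str], package_item: str) -> bool:
    """Second-inspector check: the shipping label must name the item packed."""
    return label == package_item


# Items actually packed, in order on the line.
packages = ["NSN-001", "NSN-002", "NSN-003"]

# The label for the second package jams in the printer; workers unknowingly
# skip it, so every subsequent label shifts onto the wrong package.
labels = ["NSN-001", "NSN-003", None]

mismatches = [p for p, l in zip(packages, labels)
              if not label_matches_contents(l, p)]
print(mismatches)  # ['NSN-002', 'NSN-003']
```

A single printed label shifting by one position corrupts every later package on the line, which is why the second inspection compares each package against its own contents rather than trusting the print order.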
A DLA official stated that this new process should address the problem of improperly labeled defense articles leaving the supply depot—the first part of the shipment process. However, Navy officials responsible for FMS shipments noted that DLA needs to determine the source of the problems to ensure that its solutions are correct. It is too early to know whether DLA’s new process will reduce the inaccuracy rate for supply center shipments. According to DSCA officials, while DOD currently does not track all shipments under FMS sales agreements, it has mechanisms intended to improve visibility over shipments in limited circumstances. For example, DOD established the Enhanced Freight Tracking System, which is intended to allow DOD personnel, freight forwarders, and foreign customers to track shipments from their point of origin to their final destination. Currently, participation by FMS customers is voluntary. DSCA and military service officials stated that the system was designed for customers to track their shipments, and the officials do not plan to use the system to track all FMS shipments. DOD also faces challenges in successfully implementing the new system. First, the system is in the first phase of implementation, which focuses on tracking defense articles from the initial location in the military depot to the freight forwarder, and subsequent phases will allow for shipment tracking to the final destination in the foreign country. Second, in some cases the transportation control numbers that are used to track shipments have been incomplete or changed when shipments were consolidated and therefore are not a reliable method to track shipments. According to DOD officials, while the freight tracking system has multiple searchable fields, for some FMS shipments the transportation control number is the only searchable field. In addition, DOD officials identified another mechanism for tracking FMS shipments that is being used for countries within the U.S. 
Central Command area of responsibility, in particular Iraq and Afghanistan. All such shipments are required to have radio frequency identification tags that allow for electronic tracking of shipments through the Enhanced Freight Tracking System to their destination. DSCA officials noted that DOD developed this requirement to address the unique security situation in those countries, and DOD does not have plans to expand it to include shipments to other countries. DSCA Does Not Have Guidance for Prioritizing Selection of Countries for Compliance Monitoring Visits In 2003, we found that DSCA lacked the information needed to implement and report on its Golden Sentry end-use monitoring program. Since then, DSCA expanded this program and has been reporting annually on its resources. According to DSCA’s fiscal year 2009 monitoring report to Congress, the purpose of the program is to scrutinize the foreign purchaser’s use of U.S. defense articles to ensure compliance with U.S. security requirements. The report further notes that to conduct end-use monitoring with available resources, DSCA uses a risk-based approach. Countries are to secure all defense articles purchased through the FMS program. They are also required to maintain a detailed inventory of every item received by serial number for 16 defense articles DOD designated as sensitive. These sensitive defense articles have been purchased by 76 countries and include night vision devices, communication security equipment, and certain types of missiles, such as Stingers. To ensure that foreign governments and security assistance officers are complying with monitoring requirements, DSCA headquarters officials lead in-country compliance visits, which DSCA has identified as an important part of the Golden Sentry program. 
Specifically, the visit objectives are to: assess in-country security assistance officers’ overall compliance with the end-use monitoring program; assess the foreign government’s compliance with specific physical security and accountability agreements through facility visits, records reviews, and reviews of local security policies and procedures; conduct routine or special inventories of U.S.-origin defense articles; and appraise possible violations of U.S. laws, international agreements, or FMS agreements. To conduct these compliance visits, DSCA assigned three officials to particular regions of the world. These DSCA personnel periodically lead teams made up of several military service and overseas DOD personnel with expertise on sensitive weapon systems or the country visited. DSCA budgeted $1.4 million for such visits in each of the fiscal years 2006 through 2008. Since DSCA began conducting compliance visits in 2003, it has visited 19 of the 76 countries—or 25 percent—that have purchased sensitive defense articles, averaging about four visits per year. According to DSCA policy, DSCA compliance visits should focus on the countries that have purchased sensitive defense articles, with a particular emphasis on visiting those with Stinger missiles. DSCA officials stated that they determine compliance visits based in part on foreign policy considerations, such as the need to coordinate visits with foreign governments to respect their sovereignty. While no written guidance exists, officials stated they consider a variety of risk-based factors in determining countries to visit, including considering whether the country is in a stable region of the world or if the officials have information indicating sensitive defense articles may not be properly protected or inventoried. Yet, out of the 19 countries they visited, about 50 percent were in a stable region of the world.
In addition, DSCA has not yet conducted compliance visits in three countries that have a high number of uninventoried defense articles, including Stinger missiles and related components and night vision devices, as identified by DSCA’s data system. According to a DSCA official responsible for the compliance visits, these three countries are now scheduled for visits in 2009. DSCA also noted that one of these countries needed assistance to help it meet standards before it could have a successful compliance visit. However, as DSCA has not created written guidance for selecting countries for compliance visits, it is unclear how it applied a risk-based approach in prioritizing its country selections to date. DOD Lacks Information Needed to Effectively Administer and Oversee the FMS Program While DOD has implemented initiatives aimed at improving the overall administration of the FMS program, it lacks the information needed to effectively administer and oversee the program. First, DOD does not have information on the actual cost of administering FMS sales agreements and, as a result, is not able to link the administrative surcharge DOD charges foreign customers with actual costs. Second, DOD lacks information for determining an improved metric to measure the timeliness with which FMS agreements are developed. Finally, DOD does not have consistent data from each of the military services on administering FMS agreements. DSCA Lacks Sufficient Information to Determine Administrative Surcharge Rate Over the past decade, DSCA has implemented several initiatives aimed at improving the balance between FMS expenditures and income. Specifically, DSCA has twice adjusted the surcharge rate—the rate charged to FMS customers to cover program administration costs. However, DSCA does not have sufficient information to determine the balance necessary to support the program in the future. 
In 1999, DSCA decreased the surcharge rate from 3 to 2.5 percent because the administrative surcharge account had a surplus. Prior to this change, we recommended that DSCA not lower the rate until it determined the cost of implementing the FMS program. However, DSCA disagreed with this recommendation and lowered the rate despite declining income that the program experienced between 1995 and 2000. According to DSCA officials, by 2005 the program experienced a decrease in income that raised concerns about DSCA’s ability to pay FMS program expenses if sales continued at the existing rate. Following a year-long internal study to determine a sustainable rate, DSCA increased the surcharge rate from 2.5 to 3.8 percent in August 2006 and clarified what services are included in the administrative surcharge and what services require additional charges. Since then, the administrative surcharge account balance has grown—a result of both the increased rate and higher than anticipated sales. In fiscal year 2008 alone, FMS program sales totaled $36 billion—almost triple the amount DSCA had previously projected. Once the customer signs the agreement and pays the required deposit, DSCA collects 100 percent of the administrative surcharge from agreements in support of the Global War on Terrorism and other agreements with different funding sources and 50 percent of the administrative surcharge for all other agreements. Expenditures from these sales agreements continue throughout the entire life of the agreement, which on average lasts 12 years. However, DSCA knows only historical costs associated with the overall program, not the costs to implement each FMS agreement. Identifying the costs of administering the FMS program is a good business practice identified in federal financial accounting standards.
DSCA plans to reassess the optimal rate based on the level of sales and estimated expenses, but without data on actual agreement costs, the surcharge rates DSCA establishes may not be sufficient to pay for needed administrative activities. According to a senior DSCA official, while the fund is not currently in danger of becoming insolvent, it is unclear how the current economic situation may affect future sales levels and, therefore, the administrative account balance. DSCA’s selection of its current surcharge rate has also raised issues with FMS customers and the military services regarding which administrative services require additional charges beyond what is included in the standard administrative surcharge. The standard level of service includes services such as the preparation and processing of requisitions. A country that wants services in addition to the standard level of service, such as additional reviews or contractor oversight, is charged separately for those services. DSCA has provided guidance and training to help the Army, Navy, and Air Force apply the revised standard level of service to new cases. However, according to Navy officials, measuring one standard level of service is unrealistic because every case is unique and may require varying levels of service. Several FMS customer representatives to the Foreign Procurement Group also raised questions about administrative surcharge billing and the consistency with which the standard level of service was applied across the services. A briefing prepared by the Foreign Procurement Group in July 2008 noted improvement in the application of the standard level of service but identified the need for additional transparency in DOD’s charges for the standard level of service for FMS agreements. For example, the group cited incidences of charging customers for services that should be covered under the standard level of service. 
DSCA Lacks Sufficient Information to Improve Metric Regarding FMS Agreement Development Time Frame In an effort to ensure FMS sales agreements are developed and presented to customers in a timely manner, DSCA established a goal of developing and presenting 80 percent of agreements to its customers within 120 days of receiving a request to purchase a defense article through the FMS program. DSCA’s 120-day time period begins with the initial receipt of the purchase request and includes the time required to receive pricing information for defense articles from contractors, to allow the services to write the actual FMS agreement, and for all of the relevant agencies to review and approve the sale of the defense articles. In 2008, DSCA began a study to determine if the 120-day goal was reasonable or if it needed to be revised. However, DSCA officials stated they did not have sufficient information to make such a determination. As a result, DSCA directed each military service to study its FMS process to assess internal FMS processes and the time frames associated with those processes. According to DSCA officials, they anticipate receiving the results of the studies in early summer 2009. A variety of factors may affect the military services’ ability to meet the 120-day time frame for developing an FMS agreement. For all agreements implemented from January 2003 to September 2008, DSCA developed 72 percent of FMS agreements within its stated 120-day goal. While it takes an average of 122 days after the initial receipt of a request to develop an FMS agreement, the number of days that it took to develop an FMS agreement ranged from less than one to 1,622 days. While DSCA officials noted that the creation of a central agreement writing division in 2007 has helped improve the consistency of agreements, there are other factors affecting the time it takes to develop an agreement.
Officials responsible for developing the FMS agreements stated that while it is possible to meet the 120-day goal on routine agreements, such as blanket order agreements, it is difficult to meet the goal for complex agreements, such as agreements for weapons systems. Agreements over certain dollar thresholds could require more time if they have to go through the congressional notification process. Similarly, non-NATO cases may require more time for the U.S. Embassy in the customer country to conduct an evaluation of the proposed sale. Prioritization of certain agreements, such as Iraq FMS agreements, can also delay the development of other FMS agreements. Other factors that can extend FMS agreement development times include slow customer response to follow-up questions about requests to purchase defense articles, workload challenges within the military services, and slow contractor response times for pricing information about the defense article the foreign government wants to purchase. Disparate Data Systems Limit Available Information for DSCA Oversight of FMS Program FMS implementation, management, and financial data—which DOD uses to track, oversee, and execute FMS sales agreements—are currently dispersed among 13 electronic systems across the military services and other DOD components. As a result, DSCA’s ability to obtain FMS program information and to manage the efficiency of the FMS process is limited. For example, one DSCA official responsible for collecting program information noted that DSCA cannot effectively measure cost, schedule, and performance on FMS agreements because current systems only provide three consistent indicators that are comparable across the military services.
According to the official, while the service specific systems may provide information for analyzing the performance of FMS agreements within that service, the information is not comparable with data produced by other services, thus reducing its value to DSCA for overall oversight of the program. DSCA compiles the limited available data from the military services for quarterly meetings that review the FMS program in an effort to determine potential problems. In addition, as DOD does not have a centralized system, the services have independently developed tools to enhance the capabilities of their existing systems, one of which has been in place since 1976. For example, the Air Force independently developed a web-based system for processing supply discrepancy reports, but DSCA has yet to fully fund this system to be used by the other services. In an effort to develop more comparable, detailed, and complete data on agreement implementation, DSCA is working to develop the Security Cooperation Enterprise Solution. DSCA is currently defining the requirements for this potential system and has yet to determine how it will relate to other data systems. According to DSCA officials, recent increases in FMS sales and the administrative surcharge rate will provide sufficient funds to begin the development of a new data system. DSCA also uses the Security Cooperation Information Portal—a web-based tool designed to provide a point of access for DOD’s multiple FMS information systems, such as the services’ requisition systems, the system used to write agreements, and the financial systems. The portal retrieves information from existing DOD systems and is intended to provide consolidated information to DOD and foreign customers. However, as the portal is based on information from 13 different systems, the data have the same limitations in providing DSCA with comparable data to oversee the FMS program. 
Conclusions The FMS program, as a part of a broader safety net of export controls designed to protect technologies critical to national security as well as an important foreign policy tool to advance U.S. interests, presents a set of unique challenges to the agencies involved in the program. Previously identified weaknesses in the FMS shipment verification process remain unaddressed and require the immediate and collective attention of leadership within State, DOD, and Homeland Security. While these departments each have a distinct role to play in the FMS program, they have failed to work collectively to ensure that FMS articles are not vulnerable to loss, diversion, or misuse. This failure has clear national security implications because defense articles will be at risk of falling into the wrong hands. Consistent with our 2003 report, we still believe that State should revise the ITAR to establish procedures for DOD verification of FMS shipments to address weaknesses in the shipment verification process. Also, DOD may not be maximizing its resources by fully applying a risk-based approach to ensure that sensitive defense articles are protected as required. In addition, DOD has made changes to its FMS program administration without sufficient information on which to base these changes, and it lacks information to assess how well the program is working. Without this information, DOD’s ability to know if the program is achieving intended results is limited. Recommendations for Executive Action To improve controls for exported items as well as administration and oversight of the FMS program, we are reiterating a recommendation to State from our 2003 report and making the following five recommendations. To establish procedures for DOD verification of FMS shipments, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Policy to provide additional guidance to the military services on how to verify FMS shipment documentation. 
To ensure CBP port officials have the information needed to verify FMS shipments are authorized, we recommend that the Secretary of State direct the Assistant Secretary for Political-Military Affairs, that the Secretary of Defense direct the Under Secretary of Defense for Policy, and that the Secretary of Homeland Security direct the Commissioner of Homeland Security’s U.S. Customs and Border Protection to coordinate on establishing a process for: ensuring the value of individual shipments does not exceed the total value of the FMS agreement; designating a primary port for each new and existing FMS agreement; developing a centralized listing of these primary ports for use by CBP; and providing CBP officials with information on FMS agreements that were closed prior to fiscal year 2008. To ensure that correct FMS shipments reach the right foreign customers, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Policy to examine its existing mechanisms and determine if they can be used to improve tracking of FMS shipments. To ensure that FMS defense articles are monitored as required, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Policy to create written guidance for selecting in-country visits that considers a risk-based approach. To improve the administration and oversight of the FMS program, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Policy to better determine the administrative costs of implementing the FMS program and develop metrics that allow DSCA to comprehensively assess the performance of the FMS program. Agency Comments and Our Evaluation State, DHS, and DOD provided written comments on a draft of this report, which are reprinted in appendices II through IV. DHS and DOD also provided technical comments, which we incorporated as appropriate. In written comments, State and DHS concurred with our recommendations and outlined plans to implement them.
DOD concurred with two of our recommendations and partially concurred with the other three. In its technical comments, DOD also noted that it disagreed with our characterization of the information it uses to administer the FMS program. In concurring with the recommendation that State should revise the ITAR, which we reiterated from our 2003 report, State noted that the Political-Military Bureau is processing the recommended changes to the ITAR and will publish them in the Federal Register as soon as all requirements for doing so are met. DOD concurred with our recommendation to provide additional guidance on verifying FMS shipment documentation and agreed to work with the military services to review the current guidance and revise as necessary. DOD also concurred with our recommendation that it examine its existing mechanisms for tracking FMS shipments and will work with agency representatives to improve end-to-end visibility. In response to our recommendation that State, DHS, and DOD coordinate to ensure CBP port officials have the information needed to verify that FMS shipments are authorized, DHS and DOD agreed to work together to provide this information. DHS identified several specific actions that it plans to take, including reconvening an interagency working group to address FMS-related issues, obtaining a complete list of closed FMS agreements from DOD, and establishing a list of all primary ports for existing and new FMS agreements. DOD also agreed to provide CBP with a list of closed FMS agreements. While DOD agreed to work with State and CBP to establish a process for designating a primary port for each new FMS agreement, it noted that it will have to examine the resource impact of designating a primary port for existing FMS agreements before taking further action. Once DOD has made this assessment it will be important for the agencies to determine the appropriate course of action for existing agreements.
DOD partially concurred with our recommendation to create written guidance for in-country visits and said that such guidelines could be included in the Security Assistance Management Manual. DOD noted that these guidelines would take risk into account, but would have to be broad enough to consider other factors, such as the experience of personnel, when scheduling in-country visits. DOD has reported to Congress that it uses a risk-based approach to conduct end-use monitoring with available resources. While our report notes that a variety of factors play a role in the selection of countries for compliance visits, we also found that the current system, which lacks written guidance, may not ensure that DOD is distributing its resources in a risk-based manner. As DOD has identified these visits as an important part of its monitoring program, we continue to believe that DOD needs written guidance—whether in published guidance or internal policy memos—that applies a risk-based approach to ensure that sensitive defense articles are protected as required. DOD also partially concurred with our recommendation that it improve the administration and oversight of the FMS program. DOD agreed that rigorous data analysis and well-defined, targeted metrics are vital for FMS program administration. It noted that it conducted a year-long study prior to changing the current administrative surcharge rate in August 2006 and that it hosts a quarterly forum at which security cooperation leadership review metrics related to the FMS program. In its technical comments, DOD also stated that it has sufficient information and that it is not required to gather information on actual costs to administer the FMS program. As we state in our report, DOD’s August 2006 study relies on future sales estimates and historical budget data for program administration to develop its surcharge rate, which does not provide it with the actual costs to implement existing FMS agreements.
We also note that identifying the costs of administering the FMS program is a good business practice recognized in federal financial accounting standards. In addition, while we acknowledge that DOD officials meet at quarterly forums to review existing metrics, they have limited consistent indicators that are comparable across the military services. As such, we continue to believe that DOD should obtain additional information regarding the cost of implementing FMS agreements and develop metrics to administer and oversee the program. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to the Secretaries of State, Defense, and Homeland Security. In addition, we will make the report available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or lasowskia@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Ann Calvaresi-Barr, Director; John Neumann, Assistant Director; Heather Miller; Jean Lee; Sarah Jones; Ann Rivlin; Noah Bleicher; John Krump; Karen Sloan; Art James; and Rebecca Rygg. Appendix I: Scope and Methodology To examine the changes that State, DOD, and DHS have made to the FMS program since 2003, we reviewed the regulatory framework governing the FMS process, including the Arms Export Control Act, the International Traffic in Arms Regulations (ITAR), and a draft of potential revisions to the ITAR. We also reviewed the U.S. Customs Control Handbook for Department of State Licenses, the Defense Department’s Security Assistance Management Manual, and other guidance from Customs and Border Protection (CBP), the Defense Department, and the military services.
We used case studies to assess the steps in the FMS process. Using data available to the military services and FMS officials through the Defense Security Assistance Management System, we selected 16 FMS agreements based on the following attributes: the military service responsible for administering the FMS agreement, the type of defense item sold, whether the item required enhanced end-use monitoring, the customer country, and agreements that used both the Defense Transportation Service and freight forwarders to transport defense articles to their end destination. We selected similar defense articles to compare across the military services. We obtained data from DOD systems used to manage the FMS program. We verified that the agreements we selected contained the traits for which they were selected. Based upon this verification, we confirmed that the data we used were sufficiently reliable for our purposes. We also obtained data from two major ports, one airport and one seaport. These ports are 2 of the top 10 U.S. ports in terms of the dollar value of FMS shipments they process. We used these data to determine if FMS agreements were being lodged at multiple ports and to determine if exports were occurring against FMS agreements for which exports were no longer authorized. We reviewed copies of licenses and shipment logs to identify when actual shipments were made against FMS agreements that were no longer authorized to have shipments. Our analysis of these data allowed us to determine whether gaps in controls exist, but did not allow us to assess the state of controls at all ports. 
In addition, we interviewed officials and obtained documentation at the State Department, the Defense Security Cooperation Agency (DSCA), the Air Force Security Assistance Center, the United States Army Security Assistance Command, the Navy International Programs Office, the Naval Inventory Control Point, the Defense Logistics Agency, CBP headquarters and port personnel at two ports, and U.S. security assistance officers stationed in one NATO and one non-NATO country. To determine the information DOD uses to administer and oversee the FMS program, we reviewed the Defense Department’s Security Assistance Management Manual and other guidance from the Defense Department and the military services. We also reviewed the Office of Management and Budget's Managerial Cost Accounting Concepts and Standards for the Federal Government - Statement of Federal Financial Accounting Standards Number 4. We analyzed data the military services use to manage FMS agreements implemented from fiscal years 2003 to 2008. We conducted interviews with officials at DSCA and the military services. We also met with the Foreign Procurement Group, a group composed of FMS customer countries, to ask them about their experiences with the FMS program and reviewed the group’s 2008 briefing for the program. We conducted this performance audit from May 2008 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of State Appendix III: Comments from the Department of Homeland Security Appendix IV: Comments from the Department of Defense
In fiscal year 2008, the Foreign Military Sales (FMS) program sold over $36 billion in defense articles and services to foreign governments. The Departments of State, Defense (DOD), and Homeland Security (DHS) all have a role in the FMS program. In 2003, GAO identified significant weaknesses in FMS control mechanisms for safeguarding defense articles transferred to foreign governments. In 2007, GAO designated the protection of technologies critical to U.S. national security a high-risk area. GAO was asked to (1) evaluate program changes State, DOD, and DHS have made since 2003 to ensure that unclassified defense articles transferred to foreign governments are authorized for shipment and monitored as required, and (2) determine what information DOD has to administer and oversee the FMS program. GAO conducted 16 case studies; analyzed U.S. port data and FMS agreements; reviewed program performance metrics; and interviewed cognizant officials. Agencies involved in the FMS program have made some changes in the program but have not corrected the weaknesses GAO previously identified in the FMS program's shipment verification process, and the expanded monitoring program lacks written guidance to select countries to visit to ensure compliance with requirements. State--which is responsible for the program and approving FMS sales--has not finalized proposed regulatory revisions to establish DOD's role in the FMS shipment verification process, although the FMS agencies reached agreement on the proposed revisions about a year ago. DHS port officials, responsible for export enforcement, also continue to lack information needed to verify that FMS shipments are properly authorized. GAO found six FMS agreements that had unauthorized shipments, including missile components. In one case, 21 shipments were made after the agreement was closed.
At the same time, DOD, which administers the FMS program and FMS agreements, lacks mechanisms to fully ensure that foreign governments receive their correct FMS shipments--in part because DOD does not track most FMS shipments once they leave its supply centers and continues to rely on FMS customers to notify the department when a shipment has not been received. With regard to monitoring defense articles once in country, DOD does not have written guidance to prioritize selecting countries for compliance visits using a risk management approach and has not yet visited several countries with a high number of uninventoried defense articles. DOD lacks information needed to effectively administer and oversee the FMS program. For example, within the last 10 years DOD has twice adjusted the surcharge rate--the rate charged to FMS customers to cover program administration costs--but it does not have information on program costs to determine the balance necessary to support the program in the future. Also, while DOD has a goal to release 80 percent of FMS agreements to a foreign government within 120 days of receiving its request to purchase defense articles, DOD officials stated they do not have the information needed to determine if the goal is reasonable. In addition, DOD lacks information to oversee the program, largely because FMS data reside in 13 different accounting, financial, and case implementation systems. DOD is in the process of defining its requirements for FMS program information before it moves forward with improving its data systems. In the meantime, DOD is relying on systems that do not provide it with sufficient, comparable data to oversee the program's performance.
Background The TANF program was established by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), and was designed to provide cash assistance to needy families with children, while at the same time ending the dependence of needy families on government benefits by promoting job preparation, work, and marriage. As the responsible federal agency, HHS provides an annual TANF block grant to each state. In fiscal year 2004, states spent about $26 billion of federal and state funds to assist low-income families, including providing monthly cash assistance to about 2 million families as of June 2004. States have considerable flexibility in administering TANF programs, and in turn, offices in localities throughout each state are responsible for interacting with TANF clients and running the day-to-day aspects of the program. The TANF program encourages recipients to work, places limits on the length of time that a family can receive federal cash benefits, and requires that clients cooperate with child support authorities. Specifically, in accordance with the authorizing legislation, TANF regulations state that all clients must participate in work activities as soon as the state determines they are ready to do so, or after 24 months of cash benefits, whichever occurs earlier. Further, clients may not obtain federal TANF cash benefits for more than a total of 60 months during their lifetime. TANF also requires that all families with children for whom paternity has not been established, or for whom a child support order needs to be established, cooperate with child-support enforcement agencies. The TANF program gives states both incentives to enforce these requirements and the flexibility to waive these requirements if necessary. For example, states must engage a certain percentage of their overall and two-parent caseloads in work activities, or face financial penalty.
In practice, states can waive the work and time limit requirements for clients in a number of ways. For example, states can provide cash benefits beyond the 5-year time limit for up to 20 percent of their caseload. Some states also provide cash assistance with only state funds when a client has difficulty meeting federal requirements. States enroll applicants in the TANF program through a process that is roughly comparable, though the particulars may vary from state to state (see fig. 1). A TANF client may meet with a worker who verifies that the cash assistance program is appropriate for the client’s needs and that the client qualifies for cash assistance. The client is typically assigned a caseworker who conducts a review of the client’s employment prospects, including the client’s education, work history, skills, and aspects of the client’s personal life that may affect their ability to hold a job. Based on this review, the caseworker develops an individual responsibility plan that outlines actions that the client is to take in order to obtain employment and become financially self-sufficient. This plan may require that a client immediately begin a job search or that the client first take actions to address aspects of the client’s life that pose a barrier to work. After this plan is established, the caseworker meets with the client periodically, in order to ensure that the client is making progress toward the goals outlined in the plan. Some states also require a more in-depth, comprehensive review of the client’s progress at regular intervals or if the client is having difficulty meeting aspects of the individual responsibility plan.
The Family Violence Option Gives States Incentives to Address Domestic Violence among the Client Population The TANF legislation also established a provision known as the Family Violence Option, under which states commit to screen clients for domestic violence, and refer those who are identified as domestic violence victims to domestic violence services. Further, states that have adopted the Family Violence Option can waive program requirements—including those related to work, the 5-year federal time limit, and cooperation with child support authorities—if compliance would make it more difficult for clients to escape domestic violence, or would unfairly penalize domestic violence victims. States that have adopted the Family Violence Option may also avoid financial penalties related to the work requirements and the 5-year time limit, if their failure to meet these requirements is at least partly attributable to clients who have been granted federally recognized, good cause domestic violence waivers. In order for the domestic violence waiver to be federally recognized, it must specify which program requirements are being waived, be granted based on need as determined by a person trained in domestic violence, and be accompanied by a services plan. States may report federally recognized good cause domestic violence waivers to HHS, and HHS requires that such waivers not be reported unless they meet the regulatory definition. As with other aspects of the TANF program, states have broad flexibility regarding the detailed implementation of the Family Violence Option. Studies Indicated Domestic Violence Is Prevalent among Low- Income Women and Can Be a Barrier to Employment Domestic violence affects a substantial percentage of low-income women, according to existing research. Further, research shows that it can, in some cases, pose a barrier to work and financial independence. 
In 1998, we reported that studies indicated that between 15 and 56 percent of welfare recipients are, or have been, victims of domestic violence. Since that time, additional studies with similar findings have been published. For example, a 2001 report summarized the prevalence of domestic violence among a random sample of women in the TANF caseload of an urban county in Michigan. This study found that about 51 percent of the women had been severely abused at some time in their life, and about 15 percent had been severely abused by a partner at least once in the preceding year. Further, 61 percent reported being threatened with violence or other retribution at some point in their lifetime, and 24 percent reported such an experience had occurred in the preceding 12 months. Past research has also revealed that domestic violence can, in some cases, be a barrier to work and financial independence. In our 1998 report, we reported that various studies found that women who were or had been victims of domestic violence were employed at about the same rates as women who reported never having been abused. However, we also cited one study that found abused women experienced higher job turnover and more spells of unemployment than women who had never been abused. Further, our 1998 report, as well as other research, has outlined how domestic violence can impede successful employment for some women. For example, several studies have shown that abusers may be threatened by any steps a woman takes toward financial independence, and may thwart a job search or employment by interfering with transportation to work, or by making harassing phone calls to a woman while she is in the workplace. Prompting Disclosure of Domestic Violence Can Be Difficult State TANF programs face significant challenges addressing domestic violence, in part because it can be especially difficult to detect.
Many victims of domestic violence may be reluctant to disclose such a personal and potentially humiliating aspect of their life. Researchers in the field of domestic violence and state officials stated that domestic violence victims may be unwilling to disclose domestic violence out of shame and embarrassment. Victims may also fear retribution by the abuser. TANF clients may be particularly reluctant to disclose abuse in the TANF program setting. Advocates for victims of domestic violence explained that TANF clients often see TANF staff as government officials who cannot be trusted to keep the disclosure of domestic violence confidential. Further, some clients may fear that any disclosure of domestic violence to government officials could result in the loss of custody over their children. As a result, disclosure may be thwarted, at least until a trusting working relationship can be developed. Caseworkers may also lack specific skills that facilitate disclosure. For example, according to some experts, caseworkers may lack empathy for domestic violence victims. Additionally, given the heavy workload that some caseworkers face—in busy urban areas, some have over 100 clients—they may not effectively screen for domestic violence. According to some researchers, some caseworkers may see addressing domestic violence as conflicting with the overall goal of ensuring that clients become employed. Finally, HHS officials told us that the physical setting of domestic violence screening—which in many offices can occur in open cubicles or public spaces—may not facilitate disclosure of domestic violence, as clients may fear being overheard. The difficulty of identifying domestic violence is compounded by a lack of consensus about the best techniques for screening clients. Some state officials and researchers reported the benefit of screening tools composed of multiple questions asking about various aspects of domestic violence—such as physical abuse, verbal abuse, or stalking.
They believe such an approach helps clients who may not think of themselves as domestic violence victims and yet may respond affirmatively to a question about a specific behavior. On the other hand, some research has found that detailed questions about specific aspects of domestic violence may be considered overly intrusive, which can put clients on the defensive and make disclosure less likely. A 1999 study on the state TANF programs’ efforts to assist victims of domestic violence found that screening instruments should avoid intrusive questions about the specific actions of an abuser, as such questions can be too personal or shaming. Federal TANF Funds May Be Used to Fund Marriage and Responsible Fatherhood Programs Because the TANF program includes a variety of goals, federal TANF funds can be used for purposes other than cash assistance to needy families. Because one of the purposes of the TANF program is to encourage the formation and maintenance of two-parent families, some states have opted to use TANF funds for marriage programs. Such programs attempt to achieve these goals through various measures, such as education. Similarly, some states have opted to use TANF funds for responsible fatherhood programs. These programs are designed to encourage the active and responsible involvement of non-custodial fathers in the lives of their children. State officials, practitioners, and the domestic violence services community have stressed the importance of involving the domestic violence services community in development and implementation of marriage and responsible fatherhood programs. States that have used this approach have indicated that the involvement of the domestic violence services community was necessary to ensure that marriage and responsible fatherhood programs address individual safety. 
HHS Administration for Children and Families has echoed this sentiment in an information memorandum that strongly recommends that states consult with experts in domestic violence or with relevant community domestic violence coalitions as they design marriage initiatives. Further, HHS has stated that its Healthy Marriage Initiative--which seeks to promote healthy marriages through activities such as marriage and pre-marriage education--requires grantees to develop domestic violence protocols that address screening for domestic violence and referrals to local domestic violence services. Most States Have Adopted the Family Violence Option, and States’ TANF Programs Take Differing Approaches to Screening for Domestic Violence While the large majority of states have adopted the Family Violence Option or a comparable policy, states take a broad range of approaches to identifying domestic violence. Most states require that clients be formally screened for domestic violence, though techniques can vary significantly from state to state. A few states do not actively screen, but require that clients be notified of available domestic violence waivers. Comparatively few states have policies regarding the privacy of screening, although officials in most states we contacted acknowledged its importance. Finally, most states reported that domestic violence screening is performed by staff who may have little or no training in recognizing and discussing domestic violence. To address this issue, some states have employed domestic violence specialists. Most States Have Adopted the Family Violence Option or an Equivalent Policy Forty states have certified adoption of the Family Violence Option (see fig. 2). Eight states reported that they had not certified the Family Violence Option, but had adopted similar policies.
Each of these states reported that they make some effort to screen clients for domestic violence, refer clients to domestic violence services, and offer waivers of certain TANF program requirements. Officials in only three states—Maine, Oklahoma, and Ohio—reported that they had not adopted the Family Violence Option or a comparable policy. According to Oklahoma officials, adoption of the Family Violence Option was not seen as necessary because the state already required some of the actions called for by the Family Violence Option, though the provisions may not be precisely comparable. States Take a Broad Range of Approaches to Screening for Domestic Violence States screen for domestic violence using a broad range of strategies and techniques. Some require that staff use formal screening tools at specific points in the TANF process. Such tools vary in detail and depth of inquiry. Five states notify clients of domestic violence waivers, but do not require that staff specifically inquire about domestic violence. In addition, state TANF program staff can at any time also use informal domestic violence screening techniques, such as observing a client’s demeanor or interaction with their partner. Formal Screening at Specified Times Officials in the large majority of states (43) reported that their state actively inquires about domestic violence, and most of these states rely on a mandatory screening tool. Specifically, 26 states responding to our survey reported that they provide local offices with a specific screening tool that must be used, while another 4 said the tool they provide is optional. Eight states reported that they provide staff with guidance such as regulations, staff manuals, or memoranda, in lieu of a specific screening tool. We learned during our visits to states that the breadth and technique of the screening tools vary considerably. 
For example, the state of Washington uses a nine-question, electronic screening tool that prompts TANF staff to inquire about various aspects of domestic violence, including queries about threats, angry outbursts, or controlling behavior by the client’s partner. This screening tool is administered by case managers, who generally go through these questions verbatim, and record the client’s responses in a database. In contrast, the state of Iowa uses a tool designed to be self-administered by the client that makes a single query about sexual or physical violence in a table that covers a variety of health-related issues. This tool does not ask specific or probing questions about domestic violence, but prompts the client to enter a checkmark if physical or sexual violence is an issue for any member of the client’s family. The large majority of states that require formal screening require it early in the TANF process, with follow-up screenings at certain points. As figure 3 indicates, subsequent screenings occur either at regular intervals, or when it becomes apparent that a client is having difficulty meeting program requirements. For example, our survey data show that 24 states screen during a TANF eligibility review that in some states, such as New York, can take place every 6 months. Many states also screen when the client exhibits difficulty in meeting TANF program goals or requirements. For example, 20 states screen when a client fails to meet the conditions of cash assistance. State officials told us that such follow-up screenings are important because the client may be reluctant to disclose domestic violence initially or because it may emerge as a problem after entry into the TANF program. Five states that have adopted the Family Violence Option or a comparable policy reported that they do not actively screen TANF clients for domestic violence.
These states indicated that they have no screening requirements beyond informing clients about available domestic violence waivers. An official of one such state—Pennsylvania—referred to this process as “universal notification” and explained that the policy was developed in consultation with the state coalition that advocates for domestic violence victims. The policy was developed in the belief that it is best not to probe clients about domestic violence and risk putting them in an uncomfortable situation, but to spell out the program flexibilities that exist for domestic violence victims. This policy allows domestic violence victims to disclose, if necessary, at a time of their choosing. Informal Screening throughout TANF Process Staff in local TANF offices can also conduct “informal” screening through casually conversing with or observing clients. For example, a client’s demeanor or physical appearance may provide evidence of domestic violence. Several officials told us that such informal screening can occur at any time in the TANF process and is especially important because a client may not be ready to disclose a domestic violence situation at the time of formal screening, and because a domestic violence situation may arise after the formal screening. For example, caseworkers and other officials in Wisconsin told us that disclosure of domestic violence often occurred in the course of informal discussions and consultations between clients and staff. Fewer Than Half of States Reported Having Policies Regarding Privacy for Screening Sixteen states reported that they have established policies regarding the physical setting in which screenings occur. For example, Colorado officials explained that, while screening will often begin in open cubicles, the state’s policy is that the meeting will be moved to a private office if a situation of domestic violence has been identified. 
In our visits to TANF offices in seven other states, we learned that screening often occurs in the course of much broader discussions that occur in open cubicles near staff and other clients. During our site visits, several state officials told us that although private, confidential settings are important, many local offices would face constraints in providing such settings. For example, an official in one state said that the degree of privacy available varies considerably from office to office, and the setting in some offices dictates that screening take place at desks near a lobby full of clients. While this is not ideal, the local offices must work within the constraints of the facilities in which they are located. Ten states said that they have policies regarding individuals who may be present during screening. For example, officials in the state of Washington told us that, if a couple comes in to the TANF office to apply for benefits together, it is state policy that they be separated before domestic violence will be discussed. According to the Washington “Work First” handbook, which provides written guidance for caseworkers and other staff in the state’s TANF program, caseworkers are not to ask about family violence in the presence of the partner, because this may endanger the client. Similarly, the screening tool used in Illinois advises caseworkers to ask questions regarding domestic violence at a later time if the client’s partner is present. We also found that, although some states do not have policies in this regard, some caseworkers in these states nonetheless ensure a client is alone for domestic violence screening. A local office official in New York, which does not have a privacy policy, said that before questions about domestic violence are asked, they will separate couples with a contrived reason, if necessary. For example, they may tell the partner that he must meet with another staff member about another topic, such as his job history. 
In contrast, most states have no explicit policy about who can be present during a client’s domestic violence screening. For example, an official at a local office in Iowa said that the initial meeting with the caseworker takes place with both partners present because it is important to see how a couple interacts. If the caseworker suspects domestic violence, they may try to separate the couple for a subsequent discussion. Other states explained that they encourage a middle approach, explicitly relying on the judgment of the caseworker. Similarly, in New York, state policy directs that the domestic violence screening form can be mentioned with both partners present, but that it not be addressed further if the clients are not interested. However, the policy also advises that caseworkers may need to be creative in finding a way to mention the domestic violence screening form again in a private setting. Screening Is Typically Performed by Caseworkers, but Some States also Use Domestic Violence Specialists Forty-five states reported that caseworkers or intake workers are the staff that typically conduct domestic violence screening, a fact acknowledged by officials during our site visits. As figure 4 indicates, most state policies require little training for these staff. Twenty states indicated that the state either had no policy regarding domestic violence-related training, or may provide training only once in a caseworker’s career. Another 25 states require training once in a staff member’s career and make additional training optional. Officials in each state we visited told us that the skills, abilities, and inclinations of staff to conduct domestic violence screening vary considerably. For example, one state official told us that the personalities and style of caseworkers range across the board.
This official said that many staff are technical people who have spent their careers stressing program qualifications, and addressing domestic violence and other “soft” issues is a difficult adjustment. An official of another state TANF program said that even with the best training and screening policies, screening effectiveness is dependent on the effectiveness of individual caseworkers, and some caseworkers may simply skim the screening questions and not ask the full range. These comments were reinforced during our observations of domestic violence screening in several local TANF offices. At one office, we observed a caseworker who stressed certain aspects of the TANF experience, such as the need to obtain employment or attend training services, but gave very perfunctory attention to other issues, such as domestic violence and mental health. At another office, we witnessed screening performed by a social worker who paused at length over these issues, sensitively asked follow-up questions regarding the client’s home life, and took the initiative to gently and patiently describe a local organization that could provide alternative living arrangements, if necessary. In order to supplement the skills of the caseworkers, three of the states we visited—Georgia, New York, and Washington—employed domestic violence specialists to conduct in-depth screening and assessment after a client disclosed that domestic violence was an issue. This practice is implemented statewide in Georgia and New York, and has been established for almost all of the TANF offices in Washington state. In all three states, the domestic violence specialist serves to back up the screening conducted by the regular caseworker. For example, New York’s policy requires that a client be referred to a specialist—known as a domestic violence liaison—as soon as a client discloses that domestic violence is an issue.
State officials told us that the meeting with the specialist is scheduled as soon as possible and all further inquiry regarding domestic violence is left to the domestic violence liaison. The specialist then conducts a more in-depth inquiry and informs clients about options for protective services and other assistance. State officials told us that domestic violence specialists can play an important role in addressing the needs of domestic violence victims and are important given the limited skills of many caseworkers in dealing with domestic violence. Officials in Washington, for example, said that the presence of a domestic violence specialist increases the likelihood that a domestic violence victim will attend counseling or other services to address the problem. A caseworker can immediately walk a client over to the desk of the domestic violence specialist, who then conducts an in-depth assessment, provides some degree of counseling, and makes referrals to other agencies for ongoing services. They explained that if the domestic violence specialist were not there, they believe that many clients would ignore the referrals to outside agencies, and the domestic violence issue would fester. During one of our site visits, we learned firsthand of the importance of effective interpersonal skills in domestic violence screening. We interviewed an employee of a local TANF office who was a former TANF recipient and a domestic violence victim. She explained that, when she was screened for domestic violence, she was in a desperate situation fleeing from her abuser and needed assistance right away. Nonetheless, the attitude of the screening staff was so perfunctory and indifferent that, had she not immediately been routed to a domestic violence specialist, she said she probably would not have returned to that office.
The former victim further noted that many caseworkers are overwhelmed by other demands, and felt that the caseworker she dealt with was not equipped to handle domestic violence issues. State officials also told us that domestic violence specialists can enhance the ability of other office staff in dealing with clients’ domestic violence issues. In Washington, caseworkers at a local TANF office stated that an onsite domestic violence specialist serves as an important source of technical assistance for caseworkers. For example, the specialist has provided training for staff, and made them aware of the “red flags” that may indicate domestic violence is an issue. The officials repeatedly praised the domestic violence specialist for this “cultural” impact on this TANF office. HHS Has Taken Some Steps to Provide States with Guidance on Domestic Violence Screening HHS has, since the establishment of the TANF program, taken a number of measures to identify and disseminate information on what states are doing to screen for domestic violence. For example, HHS funded a 1999 report that summarized how states had implemented the Family Violence Option, including the basic techniques each state used to screen for domestic violence, and a number of detailed examples of how domestic violence screening was conducted at specific locations. Further, an HHS-funded report published in 2000 provided in-depth descriptions of how seven counties in different states identified domestic violence victims and assisted those who disclosed. These reports also made observations regarding the benefits and drawbacks of various screening practices. For example, the report published in 2000 noted the usefulness of a more extensive domestic violence screening tool—covering verbal, emotional, and sexual abuse—that could be used once a worker has some indication that a client may have domestic violence issues.
Although HHS has funded research on state TANF program approaches to domestic violence screening, HHS officials also told us that the agency has not provided state TANF programs with specific advice in the form of policy guidance or memoranda regarding best practices in domestic violence screening. Further, it has not specified minimal acceptable standards for domestic violence screening. Agency officials explained that the legislation establishing the TANF program gives states considerable flexibility in implementing the Family Violence Option, and does not provide HHS with authority to require particular approaches to screening. To Address the Needs of Victims of Domestic Violence, Most States Use Waivers and Refer Clients to Local Service Providers State TANF programs play an important role by offering victims of domestic violence waivers from TANF program requirements and helping them obtain needed services. Although most states will waive certain TANF requirements, the provisions of these waivers vary from state to state. Further, limited data from two states that we visited indicates that a comparatively small portion of all TANF recipients obtain domestic violence waivers. Although all state TANF offices rely on local service agencies to provide domestic violence services, some also provide in-house domestic violence services and most actively monitor clients for participation in services. While the full range of services is generally available to victims in urban areas, services in rural areas are generally less available. Most States Waive Requirements for Work, Time Limits, and Child Support Of the 48 states that adopted the Family Violence Option or an equivalent policy, the majority will waive the requirement that TANF clients work or engage in work-related activities, the 5-year federal lifetime limit on TANF benefits, and cooperation with the child support authorities to collect child support, as shown in table 1.
Some states will waive other requirements as well. Thirty-eight states indicated that they will waive specific requirements included in a client’s individual responsibility plan, which is intended to lead to work and financial independence. For example, an Illinois official told us that the individual plan of many clients requires them to meet with their caseworker in order to continue receiving benefits. If a client misses a meeting and can demonstrate that the failure to make the appointment was due to domestic violence, the requirement can be temporarily waived and cash benefits will not be interrupted. Another six states reported that they will, if necessary, waive other program requirements in the event that domestic violence makes them difficult to meet. For example, an Oregon official reported that they will waive certain financial eligibility requirements in the event of domestic violence. If, for instance, a domestic violence victim must use part of her income to flee an abuser and pay for temporary housing, this portion of the income will be temporarily excluded from eligibility and benefit calculations. Conditions for Receiving Waivers Varied by State Federal TANF regulations allow states considerable flexibility regarding the conditions for clients to obtain good cause domestic violence waivers. For example, some states reported that they required recipients to provide evidence of domestic violence before granting a waiver, while others reported that the client’s word was sufficient. As table 2 indicates, 25 states do not require evidence beyond a client’s statement in order to grant a waiver from work requirements. For example, a Washington official said that additional evidence was not needed before granting a waiver because officials expect clients to participate in domestic violence services, which clients would not want to do unless such services are needed.
In contrast, Illinois requires that recipients provide additional documentation such as a written statement from a third party, a police report, or documentation from a domestic or sexual violence program. Twenty-seven states reported that they also require domestic violence victims to participate in domestic violence services in order to waive program requirements. For example, officials in Iowa and Washington said that clients must demonstrate that they are making an effort to address the domestic violence by attending counseling or other services. In contrast, an Illinois official explained that the state requires that caseworkers provide waiver recipients with information about available domestic violence services and encourage them to attend; however, caseworkers cannot require participation. Officials said that some clients do not want to participate in services provided through an agency but prefer to use an informal network for support and assistance such as family, friends, or church ministers. Some states may also tailor waivers to fit a client’s particular circumstances and will help clients maximize compliance with program requirements. For example, New York offers “partial waivers,” which state officials believe help to ensure a victim’s safety while allowing the victim to continue participating in program activities. For instance, a partial waiver could be granted to a client who is taking job readiness classes but not actively searching for a job because of safety concerns. In such a case, the partial waiver would allow the client to fulfill part of the work requirement without interrupting cash benefits. Similarly, a partial child support waiver would be granted if appearing in court to pursue child support would put the client in danger (this is in contrast to a full child support waiver, under which a client would not be required to pursue child support at all if doing so would threaten the client’s safety).
Limited Data Indicates That a Small Portion of the TANF Population Uses Domestic Violence Waivers Reliable national data on the number of domestic violence waivers issued by the states does not exist, according to an HHS official responsible for tracking state data reporting. During our site visits, some of the eight states were able to provide data on domestic violence waivers from the work requirement. For example, Georgia officials reported that from July 2003 through June 2004, local TANF offices in Georgia granted 925 waivers from the work requirement, which was less than 2 percent of the 52,515 clients who were receiving TANF benefits during that time. In contrast, Washington officials reported that from October 2003 through September 2004, local TANF offices in Washington granted 5,162 waivers from the work requirement, more than 9 percent of the 52,515 clients receiving TANF benefits. The number of waivers granted in some states may be relatively small because clients may opt out of TANF requirements in other ways or because domestic violence victims can comply despite their situation. For example, some domestic violence victims may face multiple barriers and could obtain a state “hardship” waiver that will extend the 5-year time limit without coding it as a domestic violence waiver. Officials in most states that we visited said that many victims prefer to work because they consider financial independence the best way out of an abusive situation. For example, a caseworker in Colorado who works exclusively with domestic violence victims said that she rarely grants a waiver for the work requirement because most clients want to work, and even if they work only part-time, participation in domestic violence services is included in the work plan as a work activity. She said that she grants a waiver from work requirements only to individuals who are in such a critical situation that they can focus solely on their domestic violence issues.
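As a rough cross-check, the waiver shares cited above can be recomputed from the report’s own figures (caseloads of 52,515 clients and waiver counts of 925 for Georgia and 5,162 for Washington). The short Python sketch below is illustrative only; the function name is our own.

```python
# Illustrative recomputation of the waiver shares reported in the text.
# The caseload (52,515) and waiver counts (925 for Georgia, 5,162 for
# Washington) are the figures stated in the report itself.

def waiver_share(waivers: int, caseload: int) -> float:
    """Return domestic violence waivers as a percentage of the TANF caseload."""
    return 100.0 * waivers / caseload

georgia = waiver_share(925, 52_515)       # about 1.8% -> "less than 2 percent"
washington = waiver_share(5_162, 52_515)  # about 9.8% -> "more than 9 percent"

print(f"Georgia: {georgia:.1f}%  Washington: {washington:.1f}%")
```

Both figures are consistent with the percentages stated in the report.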
Officials in New York also said that most domestic violence victims prefer to work but that many also participate in the domestic violence services that are offered. Clients May Be Referred to Outside Service Providers or in Some Cases Obtain In-House Services, but Degree of Monitoring and Service Availability Varies While all states reported that clients are referred to separate local agencies for domestic violence services such as counseling, some offices also offer such services in-house. During our site visits, officials reported that the availability of services varied, with some states reporting that domestic violence services were less available in rural areas. Further, some states appear to be more active than others in monitoring the progress of clients who obtain domestic violence services. Thirty-eight states reported that their local TANF offices relied exclusively on separate local agencies to provide services to their domestic violence clients. These services generally include counseling, crisis intervention, safety planning, support groups, legal and court advocacy, and emergency shelter. In contrast, 13 states reported that they provided some domestic violence services on-site in the local TANF office in addition to using separate local agencies. For example, officials in Washington said that most of the state’s local TANF offices have a Domestic Violence Advocate (DVA) on-site who works exclusively with domestic violence victims, providing services such as safety planning and counseling. The benefit of having this service located on-site is that it gives victims immediate access and increases the chances that they will follow through with services. Also, because the DVA is located at the TANF office, the client may receive services without the knowledge of the abuser. For other domestic violence services, such as shelter and legal advocacy, clients were referred to a separate local service provider.
The kinds of services offered to domestic violence victims are determined by their needs and by the availability of such services in the community. However, among the states that we visited, we found that the kinds of services available varied—urban areas generally provided a full range of services, while services in rural areas were less available. For example, a Colorado official cited one area of the state that had only one shelter serving five counties, and officials in one rural county expressed a need for more counseling services. New York officials also said that services were less available in rural areas, but that it was simply not cost-effective to have a full range of services in sparsely populated areas. In addition, officials in several states that we visited said that transportation is less available in the rural areas and access to services can be difficult for victims living many miles from the nearest provider. For example, officials in a rural office in Oklahoma said that the shelter serving their clients was 26 miles away and that caseworkers often have to call the local police to transport domestic violence victims who are unable to transport themselves. While most states required that local TANF offices monitor client participation and progress when clients are referred for these services, some states more actively monitor a client’s progress. In responding to our survey, 29 states reported that, in some or all cases, clients referred to a domestic violence service provider must be monitored for progress. For example, Washington officials said that communication with providers is required monthly to monitor clients’ progress so that they can determine when they are able to move into work activities that will allow them to become self-sufficient. In contrast, 16 states indicated that they had no such policy. For example, New York officials said that they discourage communication between the TANF office and the domestic violence service agency.
An official explained that the providers, who operate independently, want to maintain confidentiality to ensure the victim’s safety. Most States Used Federal TANF Funds for Marriage or Responsible Fatherhood Programs, but Take Differing Approaches to Addressing Domestic Violence Thirty-one states reported using federal TANF funds for marriage or responsible fatherhood programs, and limited research indicates that such programs generally do not specifically address domestic violence. States that reported funding marriage programs most frequently supported adult and youth relationship and marriage education programs. States that reported funding responsible fatherhood programs most frequently supported programs that deliver services to non-custodial fathers to enhance their ability to meet parental obligations. Data show that a relatively small portion of TANF funds were used for marriage programs in 7 states and responsible fatherhood programs in 21 states. While research indicates that these programs do not explicitly address the issue of domestic violence, they may nonetheless do so by emphasizing better communication, healthy relationships, and constructive techniques for dispute resolution. Debate exists as to the best approaches for marriage and responsible fatherhood programs to address domestic violence. Most States Used Federal TANF Dollars to Fund Marriage or Responsible Fatherhood Programs in the Last 3 Years Thirty-one states reported using TANF funds for marriage programs, responsible fatherhood programs, or both in the last 3 years. Specifically, 15 states reported funding marriage programs, and 28 states reported funding responsible fatherhood programs from 2002 to 2004. Of these, 12 states reported funding both marriage and responsible fatherhood programs. See figure 5. States that funded marriage and responsible fatherhood programs with TANF dollars in the last 3 years reported supporting various types of efforts.
Most frequently, states that funded marriage programs with TANF dollars in the last 3 years reported funding adult and youth relationship and marriage education programs. Such programs are generally based on a standard curriculum, presented in a classroom-style format, and attempt to change attitudes and dispel myths about marriage and to teach relationship skills. For example, the Oklahoma Marriage Initiative works to improve relationships through services that provide skill-based relationship training. Workshop leaders are trained to teach Prevention and Relationship Enhancement Program (PREP) courses, which are designed to prevent divorce and enhance marriage, in their communities and organizations across the state. Most workshops are voluntary and participants learn about the program through the Internet, referrals, word of mouth, local advertising, and churches. One local TANF office required participants to attend PREP or PREP-like courses. Similarly, youth marriage education programs can be taught using one of several nationally recognized curricula for building successful relationships and marriages. Several states also reported funding activities such as media campaigns and conferences relating to marriage. States that supported responsible fatherhood programs using TANF dollars in the last 3 years typically funded collaborative fatherhood programs between government and private agencies or funded direct services to non-custodial fathers to enhance their ability to meet parental obligations. We visited one state—Georgia—that has fatherhood programs in both these categories. These programs were initiated by the Director of the Division of Family and Children Services, who believed some fathers were not “deadbeats but dead broke” and appointed a special consultant to develop fatherhood programs. Georgia’s Fatherhood Program is a partnership with several state agencies and other organizations.
The program’s mission is to assist non-custodial parents in obtaining training and educational opportunities that lead to employment paying above minimum wage and to encourage their increased involvement in the lives of their children. The program contracts with the Georgia Department of Technical and Adult Education to provide job skills and placements for unemployed or underemployed non-custodial parents. Georgia’s other responsible fatherhood program, the Child Access and Visitation Program, is designed to assist non-custodial parents with improving visitation with their children and addressing the children’s relationship with the custodial parent. This program is run through a contract with an outside service provider and funded through a grant from HHS’ Administration for Children and Families. Two states that reported not using federal TANF funds for marriage programs told us that they address factors that cause stress in marriages without implementing marriage programs. For example, Illinois chose to emphasize direct support for low-income families by providing material support, such as childcare, transportation, and cash assistance, rather than implementing marriage programs. An Illinois official also noted that a key variable in successfully promoting marriage was increasing the economic status of the couple so that they feel marriage is a viable alternative. New Jersey also reported in its survey that the state funded programs that address factors that can cause stress in marriages, as well as efforts that indirectly promote and preserve marriage. Limited Data Suggest States Use Small Amounts of Federal TANF Funds for Marriage and Responsible Fatherhood Programs and Some Rely on Other Funding Sources States that reported using federal TANF funds for marriage or responsible fatherhood programs did not report using a large proportion of their total federal TANF budget for this purpose.
Specifically, no state providing usable data reported spending more than about 5 percent of its total federal TANF expenditures on marriage or responsible fatherhood programs for 2002 or 2003. While, according to HHS officials, more complete national data does not exist, a Congressional Research Service report also found that states were not spending large portions of TANF funds on marriage programs. Similarly, according to another report, states spend relatively small amounts of TANF funds on responsible fatherhood programs. Some states reported that they fund marriage and responsible fatherhood programs with funding sources other than federal TANF dollars. Five states in survey commentary and one state in a site visit indicated that they fund marriage programs with monies other than federal TANF funds. Two additional states indicated that they currently fund programs with distinct marriage components, but they did not consider them marriage programs per se. A Georgia official told us that the state will soon be implementing a marriage program similar to the Oklahoma Marriage Initiative. The program will use an ACF grant under the authority of section 1115 of the Social Security Act and private funding rather than federal TANF funds. Furthermore, since state fiscal year 2004, eight states indicated in survey responses or during our site visit that they are implementing or developing new marriage programs that may or may not use federal TANF funds. Similarly, some states support responsible fatherhood programs with monies other than federal TANF funds. For example, the state of Washington collaborated with Alaska and Oregon using non-TANF funds to create public service videos to encourage fathers to be involved with their children. One national fatherhood report also indicated that responsible fatherhood programs remain largely funded through foundations.
Generally, Marriage and Fatherhood Programs Do Not Explicitly Address Domestic Violence According to research and practitioners in the field of marriage and responsible fatherhood, domestic violence is generally not explicitly included as a component of marriage and responsible fatherhood programs. Recent research funded by HHS found that many of the widely available marriage education programs were designed and tested with middle-income, college-educated couples, and do not assess for or address a variety of issues—including domestic violence—that place considerable stress on couple relationships. A researcher in the field of marriage programs stated that such programs do not address domestic violence because marriage education is an emerging field and developers did not initially see the need for discussion of domestic violence within these programs. Similarly, two representatives of national fatherhood advocacy organizations indicated that most responsible fatherhood programs have not addressed domestic violence or lack the resources to deal with domestic violence. One attributed this to the fact that practitioners in responsible fatherhood programs usually do not have expertise in domestic violence issues. Nonetheless, some marriage and responsible fatherhood programs do address domestic violence implicitly through an emphasis on healthy, egalitarian relationships and constructive conflict resolution. For example, the Oklahoma Marriage Initiative program focuses on communication and conflict resolution between couples without an explicit discussion of domestic violence. The Oklahoma program covers danger signs of marital problems, such as negative communication habits that can escalate into anger and frustration, a pattern of constantly putting down or disregarding the thoughts and feelings of a partner, or a habit of negative interpretations of the actions and comments of a partner.
There is a range of opinions on how marriage and responsible fatherhood programs should address domestic violence. Some state officials indicated that marriage programs that address domestic violence through an emphasis on healthy relationships and conflict resolution alone can reduce and help prevent violence in marriages by reducing the stress that often leads to violence. For example, New York indicated that programs designed to encourage healthy relationships will have the positive benefit of reducing the likelihood of physical violence and emotional abuse. Some practitioners have noted the need to address domestic violence explicitly; they have begun taking steps to include it in marriage programs and have sought program advice from local domestic violence coalitions. For example, while the Oklahoma Marriage Initiative does not explicitly cover domestic violence in its marriage curriculum, it recently created a handout to assist clients in identifying domestic violence and in learning where to obtain help. Other evidence also suggests that domestic violence should be explicitly addressed in marriage and responsible fatherhood programs. The HHS-funded research on marriage and responsible fatherhood programs found, for example, that many unmarried parents face a variety of challenges that may impede their ability to form a stable marriage. It further stated that assessment of barriers—in particular domestic violence—could point out the need for referral to other kinds of appropriate help. State officials and advocates told us that specifically addressing domestic violence is important to ensuring that programs confront its dangers.
Further, HHS has stated that all future projects funded by its Healthy Marriage Initiative—which seeks to promote healthy marriages through activities such as marriage and pre-marriage education—are to fully incorporate domestic violence protections, including project-specific policies regarding screening for domestic violence. Conclusions The TANF program emphasizes that recipients move toward economic self-sufficiency, but the law also recognizes that some recipients may face barriers that make it difficult or impossible for them to work immediately or to meet specific timelines for attaining self-sufficiency. One provision addressing such barriers—the Family Violence Option—requires states that adopt it to screen clients for domestic violence, and to offer assistance to those who need it in overcoming this potential barrier to self-sufficiency. The Family Violence Option offers states considerable flexibility in how they screen for domestic violence, and states have taken a range of approaches to doing so. While this flexibility is consistent with the overall TANF emphasis on allowing states latitude in designing and administering their programs, guidance on effective approaches to domestic violence screening could provide states with additional information on promising screening practices. Specifically, there may be certain practices that could benefit all state programs, assuming they are affordable and practicable. For example, in the states we visited, domestic violence specialists appear to offer multiple benefits to local TANF offices, including offering greater expertise and more refined skills in dealing with extremely personal issues than is typical of many caseworkers. This is especially important, given the limited domestic violence training and varying levels of skills and abilities among the caseworkers. Some states also appear, more than others, to stress privacy in conducting domestic violence screening.
Although HHS has taken a number of actions to provide states with information about approaches to screening, it has not identified and encouraged the adoption of certain best practices through official guidance or memoranda. Recommendations for Executive Action We recommend that the Secretary of the Department of Health and Human Services: examine current domestic violence screening practices of states and determine whether certain practices—such as employing and training, where possible, domestic violence specialists—are particularly promising approaches to screening for domestic violence; and provide states with information on these practices and, through agency guidance or memoranda, encourage their adoption. Agency Comments and Our Evaluation We provided a draft of this report to HHS for its review. Overall, HHS agreed with the report’s findings and provided some comments on the report’s conclusions and recommendations. Regarding the report’s conclusions, HHS correctly states that there is a lack of consensus among domestic violence services professionals about the best techniques for screening clients and that it would be reluctant to advocate particular screening approaches over others. It further states that its regulatory authority is limited in this and many other areas of the program. We agree that there is no documented consensus about which screening practices are the most effective and, consistent with current TANF regulations, state programs should retain flexibility in designing the approaches they believe are most effective. However, we believe that there may be some practices that are sufficiently promising that all states should be made fully aware of their merits, so that they can choose to adopt them if practicable.
While some of these promising practices might not be applicable in every situation, we continue to believe that state TANF programs would benefit from HHS guidance on the advantages and limitations associated with particular promising practices. Further, HHS’ advocacy of these practices would continue to allow state TANF programs and local TANF offices to retain the flexibility and latitude to select approaches that best meet the needs of their programs. We have revised our conclusions and recommendations to more clearly suggest that HHS should make information on promising screening approaches available to states and to encourage their adoption. HHS also provided technical comments on the draft report, which we have incorporated where appropriate. HHS’ comments are reproduced in their entirety in appendix II. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to interested congressional committees and Members, and the Secretary of Health and Human Services. We will also make copies available to others upon request. In addition, our report will be available at no charge on GAO’s Web site at http://www.gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology The objectives of this report were to determine (1) what states are doing to identify victims of domestic violence among Temporary Assistance for Needy Families (TANF) recipients, (2) what states are doing to address domestic violence among TANF recipients once they have been identified, and (3) the extent to which states are spending TANF funds on marriage and responsible fatherhood programs, and how, if at all, these programs are addressing domestic violence. To address each of these objectives, we: conducted a survey of the TANF agencies in all 50 states and the District of Columbia; conducted site visits to state TANF agencies and local TANF service delivery offices in 8 states; and interviewed researchers and representatives of national organizations with expertise in the issues of TANF and domestic violence, as well as marriage and fatherhood programs. More detailed information on each of these aspects of our research is presented below. We conducted our work in accordance with generally accepted government auditing standards between May 2004 and May 2005. Survey development and implementation The survey addressed all three objectives, and included questions about states’ adoption of the Family Violence Option or similar state policies, screening for domestic violence, and the use of waivers and domestic violence services to address victims’ needs. In addition, we asked states about their use of TANF funds to support marriage and responsible fatherhood programs. The survey was developed based on knowledge obtained during our preliminary research. This included a review of pertinent literature and interviews with members of academia and representatives of organizations that conduct research and policy analysis on TANF, domestic violence, and marriage and fatherhood programs.
We also conducted visits to TANF offices in Illinois, Iowa, and Wisconsin to obtain an understanding of their state TANF programs, how they identify domestic violence victims and address their needs, and the use of TANF funds for marriage and responsible fatherhood programs. The survey was pre-tested with state TANF officials in Maryland, Michigan, and Pennsylvania to determine whether respondents would understand the questions the way we intended. These states were selected to ensure that we pre-tested with at least one state that (1) had adopted the Family Violence Option (FVO) (Maryland and Pennsylvania); (2) had not adopted the FVO, but was reputed to have adopted a similar state policy (Michigan); (3) administered the TANF program through state offices (Pennsylvania and Michigan); and (4) administered the TANF program through county offices (Maryland). In addition, Maryland was known to have developed a responsible fatherhood program and Michigan had both responsible fatherhood and marriage programs. Revisions to the survey were made based on comments received during the pretests. We sent the first mailing of the survey in November 2004 followed by a second mailing in January 2005; telephone call reminders followed each mailing. The collection of survey data ended in February 2005 with a 100 percent response rate. We did not independently verify the information obtained through the survey or check respondents’ answers against an independent source of information; however, questionnaire items were tested by probing pretest participants about their answers using in-depth interviewing techniques. Interviewers judged that all the respondents’ answers to the questions were correct. Answers to the final questionnaire items on expenditures were compared to HHS ACF data (form 196) for 2002-2003 and information we received from other researchers in this area.
These data are not directly comparable to data obtained in our survey, but do indicate whether survey respondents’ answers were reasonable. We conducted follow-up phone calls to clarify responses where there appeared to be discrepancies. Although no sampling errors were associated with our survey results, the practical difficulties of conducting any survey may introduce certain types of errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted or differences in the sources of information that participants use to respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to reduce such non-sampling errors. Specifically, social science survey specialists designed draft questionnaires, we pre-tested three versions of the questionnaire, and edits were performed to identify inconsistencies and other indications of error prior to analysis of the data. Data from the mail survey were double-keyed and verified during data entry and we performed computer analyses to identify inconsistencies and other indications of error. Finally, a second, independent analyst checked all computer analyses. We conducted our survey work from July 2004 to February 2005. State site visits To obtain a detailed understanding of states’ TANF programs’ efforts to screen for and address domestic violence, we conducted visits to eight states. We visited three states—Illinois, Iowa, and Wisconsin—during a preliminary phase of our work, and another five states—Colorado, Georgia, New York, Oklahoma, and Washington—in a later phase.
We selected these five states in order to ensure we covered (1) states in different regions of the United States; (2) states that had adopted the FVO recently and a state that had not adopted the FVO; (3) a state in which the TANF program is state administered, and a state in which TANF is county administered; and (4) a state that had used TANF funds to support a marriage program, and a state that had used TANF funds to support a fatherhood program. In each state, we interviewed officials at the state-level policy-setting office, as well as officials in two local service delivery offices. During both the state-level and local office interviews, we used a standard interview protocol that enabled us to obtain more detailed—yet comparable—information than states were able to provide in the survey. In all five state-level interviews, we discussed state policies for domestic violence screening, and policies regarding how victims’ needs are met once they are identified. In addition, we asked officials about marriage and responsible fatherhood programs, how these programs were implemented, and the sources of funding. During the interviews in local TANF offices, we discussed the implementation of state policies, and toured the offices. In addition, in seven local TANF offices in five states, we were able to observe caseworkers interviewing a client applying for TANF benefits, which included questions about domestic violence. In Georgia, New York, and Washington we also interviewed domestic violence specialists who were located at the local TANF offices specifically to identify and address the needs of domestic violence victims. Finally, we interviewed officials from each state’s coalition against domestic violence to obtain their views about their state’s program for identifying domestic violence victims, meeting victims’ needs, and the existence of marriage and/or responsible fatherhood programs. Our site visit work was conducted between December 2004 and February 2005.
Other As part of our work, we reviewed pertinent literature and interviewed representatives of the following organizations: The Center for Law and Social Policy; The Urban Institute; The Center for Impact Research; The Center on Budget and Policy Priorities; The American Enterprise Institute; The American Public Human Services Association; The Brookings Institution; MDRC; The National Governors Association; Public Strategies; The National Fatherhood Initiative; and The Center for Fathers, Families, and Workforce Development. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments David Lehrer, Assistant Director, Michael Hartnett, Analyst-in-Charge, Deirdre Gleeson-Brown, Gale Harris, Alison Martin, Nancy Purvine, and Amber Yancey-Carroll also made significant contributions to this report. Important contributions were also made by Jen Popovic, Corinna Nicolaou, and Daniel Schwimer.
The Temporary Assistance for Needy Families (TANF) program introduced specific work requirements and benefit time limits. However, states that adopt the Family Violence Option (FVO) are required to screen TANF clients for domestic violence and to grant waivers from program requirements for clients in domestic violence situations. The law also allows the use of TANF funds for marriage and responsible fatherhood programs. Given states' broad discretion in implementing the TANF program, including most aspects of the FVO and marriage and responsible fatherhood programs, this report examines (1) how states identify victims of domestic violence among TANF recipients, (2) how states address domestic violence among TANF recipients once they are identified, and (3) the extent to which states spend TANF funds on marriage and responsible fatherhood programs, and how, if at all, these programs are addressing domestic violence. Forty-eight states have adopted the FVO or a comparable state policy. Most of these states actively screened clients by directly questioning them about domestic violence, whereas five states simply notified clients of domestic violence waivers without making a direct inquiry. Most states provide staff with a screening tool, but the detail and depth of these tools vary. State officials said that staff in local TANF offices often have limited skills in dealing with domestic violence issues, and policies regarding staff training vary. To address this issue, some state TANF offices employ domestic violence specialists. Although HHS has compiled and disseminated information about domestic violence screening, it has not issued guidance regarding best practices in domestic violence screening. State TANF programs help clients address domestic violence issues by granting waivers that exempt victims from TANF requirements and by referring clients for domestic violence services. 
Most states waive the TANF program's federal requirements pertaining to work, the 5-year lifetime limit on cash assistance, and the child support requirements. However, the conditions of these waivers vary from state to state. For example, 27 states required that clients participate in domestic violence services. Limited data on the number of domestic violence waivers indicate that a comparatively small proportion of TANF recipients obtain such waivers. Most states have used TANF funds for marriage or responsible fatherhood programs or both. Specifically, 15 states reported funding marriage programs and 28 reported funding responsible fatherhood programs. States that provided usable data reported spending about 5 percent or less of their federal TANF budget on these programs. In addition, some states funded these programs through other funding sources or had programs in development. According to research and practitioners in the field, these programs generally do not explicitly address domestic violence, and HHS has stated that all future Healthy Marriage projects should include domestic violence protections.
Background HHS’s ability to use the Special Reserve Fund for the procurement of countermeasures is predicated on a six-step process involving coordination with DHS and approval by the Director of the Office of Management and Budget (OMB). As provided in the BioShield Act, the process requires: 1. the DHS Secretary, in consultation with the HHS Secretary and the heads of other agencies as appropriate, to determine that a material threat exists and issue a “material threat determination”; 2. the HHS Secretary to determine countermeasures that are necessary to protect the public health; 3. the HHS Secretary to determine that a particular countermeasure is appropriate for procurement for the Strategic National Stockpile using the Special Reserve Fund and the quantities to be procured; 4. the DHS and HHS Secretaries to jointly recommend to the Director of OMB that the Special Reserve Fund should be used for the designated countermeasure acquisitions; 5. the Director of OMB to approve the use of the Special Reserve Fund; and 6. both Secretaries to notify designated congressional committees of the procurement. The BioShield Act also provides HHS the ability to use four new contracting authorities for the acquisition of countermeasures. In general, these authorities expanded upon existing provisions in the Federal Acquisition Regulation (FAR). The four authorities are: Simplified acquisition procedures, which, in general, increased HHS contract threshold amounts from $100,000 to $25 million. However, the BioShield Act does not place a threshold limit on countermeasures that are procured using the Special Reserve Fund if the HHS Secretary determines there is a pressing need for the specific countermeasure. Procedures other than full and open competition, which can be used to award contracts when the requirement is only available from one responsible source or a limited number of responsible sources. 
In addition, in order to conduct procurements on a basis other than full and open competition using simplified acquisition procedures, the HHS Secretary must determine that the mission of the BioShield Program under the Act would be seriously impaired without such a limitation. Increased micropurchase threshold from $2,500 to $15,000. Personal services contracts may be used for experts or consultants who have scientific or other professional qualifications when the HHS Secretary determines such contracts are necessary to respond to pressing countermeasure research and development needs. In 2006, the Pandemic and All-Hazards Preparedness Act (PAHPA), among other things, established the Biomedical Advanced Research and Development Authority (BARDA), within HHS, to provide a coordinated, systematic approach to the development and purchases of countermeasures, including vaccines, drugs, therapies, and diagnostic tools. Later, in 2009, Congress transferred the following amounts from the Special Reserve Fund to HHS accounts: $275 million to be used for the advanced research and development of countermeasures and $137 million for influenza pandemic preparation. HHS Has Used New Authorities to Procure Countermeasures HHS has used its Special Reserve Fund (purchasing) authority and one of its contracting authorities to procure countermeasures for the Strategic National Stockpile. Since 2004, HHS awarded nine contracts using Special Reserve Fund monies to procure various countermeasures, such as anthrax and botulism antitoxins, vaccines for anthrax and smallpox, and post-exposure treatments for radiation poisoning in children and adults. Of the nine contracts awarded using monies from the Fund, HHS terminated one contract, in 2006, because the contractor was unable to meet a major contractual milestone. To date, the remaining eight contracts are valued at almost $2 billion. See table 1. 
In addition, HHS officials told us there are currently two requests for proposal solicitations for an anthrax vaccine and a smallpox therapeutic. Of the four contracting authorities provided under the BioShield Act, HHS has only used the simplified acquisition procedure authority. From 2004 through 2005, HHS’s National Institutes of Health (NIH) used this authority to award five other contracts, including ones for research to develop a botulism antitoxin and improved treatments for radiation poisoning. Awarded with NIH funding, these contracts have a total value of almost $30 million when options and other later modifications are included. See table 2. HHS officials told us that they have not used this authority since 2005. HHS officials also told us that no other BioShield contracting authorities have been used to date, although the officials noted that these authorities may be needed for use in the future. HHS Has Established Internal Controls for Its New Authorities, but Lacks Adequate Documentation of the Risks of Using the Contracting Authorities In response to BioShield requirements, HHS has established internal controls on its Special Reserve Fund (purchasing) and contracting authorities, but lacks adequate documentation of the risks of using the new contracting authorities. Language in the BioShield Act sets up a broad framework of controls over the procurement of countermeasures, including those with Special Reserve Funds, by requiring HHS to coordinate with DHS and obtain approval by OMB before the Fund may be used. In addition to the language in the Act, HHS officials told us that the internal controls for procuring countermeasures using the Fund are documented in a variety of internal policy and procedure documents and interagency agreements, which provide guidance on roles and responsibilities for how the controls are to be implemented. 
These documents include: an HHS policy document that establishes an enterprise governance board to oversee requirements and priority-setting regarding emergency medical countermeasures for the civilian population. The document also outlines the authorities, organizational structure, and guidelines for the board; an HHS budget execution document that delineates responsibilities and describes the processes for requesting contract actions, purchases, and interagency agreements; a BARDA standard operating procedure document that provides contracting and other BARDA officials with guidance on source selection procedures and outlines specific responsibilities in carrying out those procedures; a BARDA acquisition plan which details the pre- and post-award approval processes for procurements using the Fund; an interagency agreement between HHS and DHS dated September 25, 2006, that outlines the terms and conditions for when the Fund can be used; and an OMB Circular on transferring budget authority from one agency to another. HHS has also established internal controls for the contracting authorities that were specified in the BioShield Act. On October 18, 2005, HHS issued a memorandum that provided guidance on the use of the following contracting authorities: the increased simplified acquisition threshold and its use with the Special Reserve Fund, the increased micropurchase threshold, and the use of personal services contracts. HHS’s memo is structured around the five elements of internal control: the control environment, risk assessment, control activities, information and communications, and monitoring. Federal internal control standards state that management needs to comprehensively identify risks, analyze them for possible effect, and determine how risks should be managed. 
Federal internal control standards also state that controls need to be clearly documented, readily available for examination, and distributed in a form and time frame that permits people to perform their duties efficiently. The risk assessment statements we reviewed in the memo are generally not assessments of the risks involved in using particular authorities. Some of the statements identify some risks, and one mentions possible negative consequences that could occur without proper controls in place, but the statements lack an analysis of those risks. For example, the risk assessment statement for using the increased micropurchase threshold states that “control procedures are necessary to prevent noncompliance with specific requirements of the Act, including exceeding statutory limitation on number of contracts and selections based on improper criteria.” Similarly, the risk assessment statement on increased simplified acquisition procedures does not mention or assess risk; it simply states that “control procedures are necessary to prevent noncompliance with specific requirements of the Act.” In particular, the risk statement on simplified acquisition procedures in the memo does not discuss a key risk associated with using simplified acquisition procedures—namely, that an agency is prohibited from obtaining cost or pricing data for acquisitions at or below the simplified acquisition threshold. According to a senior BARDA procurement official, while using simplified acquisition procedures can expedite the procurement process, the agency will not have cost and pricing data, which may be needed to determine that the price of a contract—especially one valued in the tens or hundreds of millions of dollars—is fair and reasonable. In a subsequent meeting, he stated that he is aware of these trade-offs based on his own experience and knowledge of the FAR. 
He also confirmed that an explanation assessing the trade-offs and risks involved in using the new contracting authorities is not contained in other HHS documents. Instead, this official acknowledged that HHS’s written guidance on the controls for the contracting authorities does not document known risks and trade-offs of using the authorities. As a result, implementation of these controls depends on the experience and knowledge of current personnel. Moreover, the consistent application of these controls is not likely to be sustained over time as employees leave their positions and new ones take their place. Not having adequately documented and appropriately communicated risk assessments, which institutionalize agency policies, may result in future employees not knowing or understanding the risks or trade-offs involved in using the various contracting authorities. Conclusions Since the enactment of the BioShield Act in 2004, HHS has awarded almost $2 billion in contracts to either procure medical countermeasures or facilitate their development. Although HHS has established internal controls for its new purchasing and contracting authorities, the risk assessment statements related to the agency’s internal controls for the contracting authorities are not sufficiently specific. In particular, the risk statements associated with using the increased micropurchase threshold and increased simplified acquisition procedures neither mention nor analyze specific risks, which is not consistent with requirements under federal internal control standards. With employee turnover, the lack of adequately documented risk assessment statements could create a situation in which employees do not know the risks or trade-offs involved in using the various authorities. The effectiveness of the internal controls now in place is dependent on the knowledge of individuals currently working at the agency. 
Without appropriately documented risk assessments that institutionalize agency policies, HHS will be unable to ensure that sound, informed, and consistent decisions will be made in the face of employee turnover. Recommendation for Executive Action We recommend that the Secretary of Health and Human Services include comprehensive risk assessment statements in written guidance on the internal controls for the BioShield contracting authorities for which the agency was required to establish controls. Agency Comments and Our Evaluation HHS provided us with written comments on a draft of this report. The comments appear in appendix I. HHS agreed with our recommendation and said that it will revise its internal control guidance on risk assessments for using BioShield contracting authorities. We believe that this is a positive step toward helping ensure that sound, informed, and consistent risk assessments will be made in BioShield acquisitions. HHS also provided observations on the Special Reserve Fund and risk assessments, which appear in appendix I. We are sending copies of this report to the Secretary of Health and Human Services. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or NeedhamJK1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Health & Human Services Appendix II: GAO Contacts and Acknowledgments Acknowledgments In addition to the contact named above, Carol Dawn Petersen, Assistant Director; Angela D. Thomas, Kelly Bradley, Robert S. Swierczek, Marie P. Ahearn, and Kenneth E. Patton made key contributions to this report.
The Project BioShield Act of 2004 (BioShield Act) increased the federal government's ability to procure needed countermeasures to address threats from chemical, biological, radiological, and nuclear agents. Under the BioShield Act, the Department of Health and Human Services (HHS) was provided with new contracting authorities (increased simplified acquisition and micropurchase thresholds, and expanded abilities to use procedures other than full and open competition and personal services contracts) and was authorized to use about $5.6 billion in a Special Reserve Fund to procure countermeasures. Based on the BioShield Act's mandate, GAO reviewed (1) how HHS has used its purchasing and contracting authorities, and (2) the extent to which HHS has internal controls in place to manage and help ensure the appropriate use of its new authorities. To do this work, GAO reviewed contract files and other HHS documents, including internal control guidance, which GAO compared with federal statutes and federal internal control standards. Since 2004, HHS has awarded nine contracts using its Special Reserve Fund (Fund) purchasing authority under the BioShield Act to procure countermeasures that address anthrax, botulism, smallpox, and radiation poisoning. HHS may procure countermeasures that are approved by the Food and Drug Administration as well as unapproved countermeasures that are expected to qualify for approval within 8 years. Of the nine contracts, one was terminated for convenience and the remaining eight are valued at almost $2 billion. HHS officials told GAO that additional contracts are likely to be awarded in the near future, as the Fund provides funding through fiscal year 2013. In addition, HHS has used one of its new contracting authorities, simplified acquisition procedures, although it has not used this authority since 2005. HHS has established internal controls on its new purchasing and contracting authorities. 
In addition to the language in the BioShield Act, which sets up a broad framework of controls over the use of the Special Reserve Fund, the internal controls for this purchasing authority are documented in a variety of internal policy and procedure documents and interagency agreements, which provide guidance on roles and responsibilities for how the controls are to be implemented. In response to BioShield Act requirements, HHS also established internal controls for three of the contracting authorities: the increased simplified acquisition threshold and its use with Special Reserve Funds, the increased micropurchase threshold, and the use of personal services contracts. Federal internal control standards state that, among other things, management needs to comprehensively identify risks, analyze them for possible effect, and determine how risks should be managed. Although some of the risk statements in a memo HHS issued identify some risks and one mentions possible negative consequences that could occur without proper controls in place, the risk statements for using the increased micropurchase threshold and increased simplified acquisition procedures lack analysis of specific risks. In particular, the memo does not discuss a key risk associated with using simplified acquisition procedures--namely, that an agency is prohibited from obtaining cost or pricing data for acquisitions at or below the simplified acquisition threshold. Without this data, the agency may not be able to determine if the price of a contract is fair and reasonable. Moreover, not having adequately documented and appropriately communicated risk assessments potentially results in future employees not knowing or understanding the risks or trade-offs involved in using the authorities. With employee turnover, HHS' reliance on the knowledge of current personnel to appropriately implement key controls will not enable future employees to make sound, informed, and consistent decisions.
Background DHS Acquisitions and the Cancellation of Gen-3 We have highlighted DHS acquisition management issues in our high-risk list since 2005. Over the past several years, our work has identified significant shortcomings in the department’s ability to manage an expanding portfolio of major acquisitions. We have also reported that while DHS acquisition policy reflects many key program management practices intended to mitigate the risks of cost growth and schedule slips, the department did not implement the policy consistently. In 2011, expressing concerns about whether DHS had undertaken a rigorous effort to help guide its Gen-3 decision making, members of Congress asked us to examine issues related to the Gen-3 acquisition. We released a report that evaluated the acquisition decision-making process for Gen-3 in September 2012. As discussed later in this statement, we recommended that before continuing the Gen-3 acquisition, DHS should carry out key acquisition steps, including reevaluating the mission need and systematically analyzing alternatives based on cost-benefit and risk information. On April 24, 2014, DHS issued an Acquisition Decision Memo (ADM) announcing the cancellation of the acquisition of Gen-3. The ADM also announced that S&T will explore development and maturation of an effective and affordable automated aerosol biodetection capability, or other operational enhancements, that meet the operational requirements of the BioWatch system. DHS’s S&T conducts research, development, testing, and evaluation of new technologies that are intended to strengthen the United States’ ability to prevent and respond to nuclear, biological, explosive, and other types of attacks within the United States. S&T has six technical divisions responsible for managing S&T’s R&D portfolio and coordinating with other DHS components to identify R&D priorities and needs; these divisions carry out applied research and development projects for S&T’s DHS customers. 
BioWatch in Action The BioWatch program collaborates with 30 BioWatch jurisdictions throughout the nation to operate approximately 600 Gen-2 collectors. These detectors rely on a vacuum-based collection system that draws air samples through a filter. These filters must be manually collected and transported to state and local public health laboratories for analysis using a process called Polymerase Chain Reaction (PCR). During this process, the sample is evaluated for the presence of genetic material from five different biological agents. If genetic material is detected, a BioWatch Actionable Result (BAR) is declared. Figure 1 shows the process that local BioWatch jurisdictions are to follow when deciding how to respond to a BAR. Our Prior Work on the Gen-3 Acquisition Identified Challenges and DHS Has Since Cancelled the Program Our prior findings and recommendations related to the Gen-3 acquisition provide DHS with lessons learned for future decision making. In September 2012, we found that DHS approved the Gen-3 acquisition in October 2009 without fully developing critical knowledge that would help ensure sound investment decision making, pursuit of optimal solutions, and reliable performance, cost, and schedule information. Specifically, we found that DHS did not engage the initial phase of its Acquisition Life-cycle Framework, which is designed to help ensure that the mission need driving the acquisition warrants investment of limited resources. BioWatch officials stated that they were aware that the Mission Needs Statement prepared in October 2009 did not reflect a systematic effort to justify a capability need, but stated that the department directed them to proceed because there was already departmental consensus around the solution. Accordingly, we concluded that the utility of the Mission Needs Statement as a foundation for subsequent acquisition efforts was limited. 
Additionally, in September 2012, we found that DHS did not use the processes established by its Acquisition Life-cycle Framework to systematically ensure that it was pursuing the optimal solution—based on cost, benefit, and risk—to mitigate the capability gap identified in the Mission Needs Statement. The DHS Acquisition Life-cycle Framework calls for the program office to develop an analysis of alternatives (AoA) that systematically identifies possible alternative solutions that could satisfy the identified need, considers cost-benefit and risk information for each alternative, and finally selects the best option from among the alternatives. However, we found that the AoA prepared for the Gen-3 acquisition did not reflect a systematic decision-making process. For example, in addition to—or perhaps reflecting—its origin in a predetermined solution from the Mission Needs Statement, the AoA did not fully explore costs or consider benefits and risk information as part of the analysis. Instead, the AoA focused on just one cost metric that justified the decision to pursue autonomous detection—cost per detection cycle—to the exclusion of other cost and benefit considerations that might have further informed decision makers. Additionally, we found that the AoA examined only two alternatives, though the guidance calls for at least three. The first alternative was the currently deployed Gen-2 technology with a modified operational model (which by definition was unable to meet the established goals). The second alternative was the complete replacement of the deployed Gen-2 program with an autonomous detection technology and expanded deployment. As we reported in September 2012, BioWatch program officials acknowledged that other options—including but not limited to deploying some combination of both technologies (the currently deployed system and an autonomous detection system), based on risk and logistical considerations—may be more cost-effective. 
As with the Mission Needs Statement, program officials told us that they were advised that a comprehensive AoA would not be necessary because there was already departmental consensus that autonomous detection was the optimal solution. Because the Gen-3 AoA did not evaluate a complete solution set, consider complete information on costs and benefits, or include a cost-benefit analysis, we concluded that it did not provide information on which to base trade-off decisions. To help ensure DHS based its acquisition decisions on reliable performance, cost, and schedule information developed in accordance with guidance and good practices, in our September 2012 report, we recommended that before continuing the Gen-3 acquisition, DHS reevaluate the mission need and possible alternatives based on cost-benefit and risk information. DHS concurred with the recommendation and, in 2012, directed the BioWatch program to complete an updated AoA. DHS contracted with the Institute for Defense Analyses (IDA) to conduct the updated AoA, which IDA issued in December 2013. In January 2014, as part of recommendation follow-up, we reviewed the completed analysis. IDA cited the DHS Acquisition Management Instruction/Guidebook and its appendix on conducting an AoA as the criteria for its study. The management directive lays out a sample framework that details the specific steps to take in evaluating acquisition alternatives, which IDA used in completing its study. On the basis of our review, we concluded that the IDA-conducted AoA followed the DHS guidance and resulted in a more robust exploration of alternatives than the previous effort. The AoA was not intended to identify a specific solution to address DHS’s requirements for earlier warning and detection capabilities. According to IDA, the AoA does not claim to select a solution, but rather to present alternatives and the information required to select an alternative based on cost and effectiveness trade-offs. 
On April 24, 2014, the DHS Acquisition Review Board reviewed the BioWatch Gen-3 acquisition with OHA and issued an ADM announcing the cancellation of the acquisition of Gen-3. According to the DHS ADM, the AoA “did not confirm an overwhelming benefit to justify the cost of a full technology switch” to Gen-3. The ADM also announced that S&T will explore development and maturation of an effective and affordable automated aerosol biodetection capability, or other operational enhancements, that meet the operational requirements of the BioWatch system. In April 2014, BioWatch Program officials said multiple factors influenced the decision to end the Gen-3 acquisition, including budget considerations, considerations regarding the readiness level of the technology, and the cost to field and maintain the technology. BioWatch Program officials said that the Homeland Security Studies and Analysis Institute’s and our recommendations to complete a robust AoA, which did not identify a clear path forward for a single technology type for the Gen-3 acquisition, were also a contributing factor. According to BioWatch Program officials, DHS has not ruled out the possibility of pursuing autonomous detection for the BioWatch program, but officials said the technology would have to cost less to develop and maintain than was estimated for the Gen-3 system. Earlier this year, we reported that when programs have been canceled, cost, schedule, and performance problems have often been cited as reasons for this decision, and cancellation can be perceived as failure. However, in some circumstances, program cancellation may be the best choice. In an April 2014 interview, BioWatch Program officials said the Gen-3 acquisition process yielded many benefits, despite its cancellation. 
BioWatch Program officials said the program office has learned and gained much from this experience, including engaging state and local stakeholders to help ensure confidence in the system and BioWatch program; finding better ways to test technologies and refine the Testing and Evaluation guidance; and developing robust acquisition documentation for the department. BioWatch program officials said the decision to cancel the Gen-3 acquisition was a cost-effectiveness measure, because the system was going to be too costly to develop and maintain in its current form. We reported in 2012 that while the DHS June 2011 life-cycle cost estimate reported $104 million in actual and estimated costs from fiscal year 2008 through fiscal year 2011, it also indicated that Gen-3 was expected to cost $5.8 billion (80 percent confidence) from fiscal year 2012 through June 2028. However, the original life-cycle cost estimate for the 2009 decision—a point estimate unadjusted for risk—was $2.1 billion. DHS R&D Efforts Also Face Challenges that Could Impact the BioWatch Program DHS has taken positive steps as we recommended to complete a robust assessment of the available biodetection technology alternatives and has taken into consideration the cost and readiness level of the current technology. However, our prior work reviewing DHS research and development efforts highlights challenges DHS may face in transitioning the future biodetection development efforts S&T is now charged with exploring back to the program office, OHA. For example, S&T works with DHS components to ensure that it meets their R&D needs by signing technology transition agreements (TTA) to ensure that components use the technologies S&T develops. However, we previously reported in September 2012 that while S&T had 42 TTAs with DHS components, none of these TTAs has yet resulted in a technology being transitioned from S&T to a component. 
In that review, we also found that other DHS component officials we interviewed did not view S&T’s coordination practices positively. Specifically, we interviewed officials in six components to discuss the extent to which they coordinated with S&T on R&D activities. Officials in four components stated that S&T did not have an established process that detailed how S&T would work with its customers or for coordinating all activities at DHS. For example, officials in one component stated that S&T had conducted R&D that it thought would address the component’s operational need but, when the work was completed, the R&D project did not fit into the operational environment to meet the component’s needs. We also reported in 2012 that OHA, which oversees operation of the BioWatch program, and S&T already had a history of working together on advancing the technology used by the BioWatch program. However, differences of opinion on key performance measures had created a challenge for these two offices related to future biodetection technologies. For example, during our 2012 review of the Gen-3 acquisition, officials from OHA said both OHA and S&T commissioned the Sandia National Laboratory to conduct similar studies on the performance characteristics of the Gen-3 autonomous detection system, but the two offices requested the use of different performance metrics to evaluate Gen-3’s detection capability. OHA officials said they supported using the fraction of the population covered as the metric because it is directly related to public health outcomes, while S&T preferred to use the probability of detection. While we recognize there are advantages and disadvantages to different performance metrics, technology transition of the R&D project developed by S&T could prove challenging in the future if fundamental differences like this are not resolved early to help ensure the technology meets the operational needs of the program office. In 2012, we also found that DHS lacked department-level policies and guidance defining R&D and processes for coordinating R&D. 
As a result, we recommended that DHS develop and implement policies and guidance for defining and overseeing R&D at the department level that include a well-understood definition of R&D and provide reasonable assurance that reliable accounting and reporting of R&D resources and activities for internal and external use are achieved. DHS agreed with our recommendation and, in April 2014, updated its guidance to include a definition of R&D, but efforts to develop a specific policy outlining R&D roles and responsibilities and a process for coordinating R&D with other offices remain ongoing and have not yet been completed.

Future Considerations for the Currently Deployed Gen-2 System

With the cancellation of the Gen-3 acquisition, DHS will continue to rely on its currently deployed Gen-2 system as an early indicator of an aerosolized biological attack. The cancellation also raises questions that need to be answered about the future maintenance of the Gen-2 system, since it will no longer be replaced as planned. According to program officials we recently contacted, DHS is considering multiple options to upgrade the current technology to improve detection capabilities in the wake of the Gen-3 acquisition cancellation. In April 2014, program officials described some of the options under consideration, including:

The addition of a trigger to the current system to enhance performance indoors. Triggers are generally systems that provide very fast but nonspecific warnings of a potential agent release because they do not identify the type of biological material detected. DHS is exploring how to use a trigger to indicate when an air sample should be collected and taken to the laboratory for analysis.

Use of a wet or liquid filter system rather than the current dry filter system. Collecting samples directly into a liquid could also increase the odds that any microorganisms would remain alive for subsequent testing.

Increased frequency of manual filter collection and testing, which would likely increase costs.

Other options for hand-held or portable detection devices.

While OHA officials determine next steps with S&T for the BioWatch program to try to address the capability gap that Gen-3 was intended to fill, there are other considerations for the currently deployed system, such as the maintainability of the current technology and equipment and the costs associated with any upgrades to extend the life of the existing system. For example, BioWatch program officials indicated they will need to replace the laboratory equipment for the currently deployed system as early as 2015 and readjust life-cycle costs. Further, while Gen-2 has been used in the field for over a decade, information about the technical capabilities of the Gen-2 system, including its limits of detection, is limited. In 2011, the National Academy of Sciences stated that the rapid initial deployment of BioWatch did not allow for sufficient testing, validation, and evaluation of the system and its components. The Academy also reported that there is considerable uncertainty about the likelihood and magnitude of a biological attack and about how the risk of a release of an aerosolized pathogen compares with risks from other potential forms of terrorism or from natural diseases. Further, the report stated that to achieve its health protection goals, the BioWatch system should be better linked to a broader and more effective national biosurveillance framework that will help provide state and local public health authorities, in collaboration with the health care system, with the information they need to determine the appropriate response to a possible or confirmed attack or disease outbreak. See Institute of Medicine and National Research Council, BioWatch and Public Health Surveillance, 2011.
Our prior work has also highlighted the difficulty of prioritizing resources to help ensure a coherent biosurveillance effort across a vast and dispersed interagency, intergovernmental, and intersectoral network. Therefore, we called for a strategy that would, among other things, (1) define the scope and purpose of a national capability; (2) provide goals, objectives and activities, priorities, milestones, and performance measures; and (3) assess the costs and benefits and identify resource and investment needs, including investment priorities. In July 2012, the White House released the National Strategy for Biosurveillance to describe the U.S. government's approach to strengthening biosurveillance, but it does not fully meet the intent of our prior recommendations because it does not yet offer a mechanism to identify resource and investment needs, including investment priorities among various biosurveillance efforts. We remain hopeful that the forthcoming strategic implementation plan, which was supposed to be issued in October 2012 and promised to include specific actions and activity scope, designated roles and responsibilities, and a mechanism for evaluating progress, will help to address the ongoing need for mechanisms to prioritize resource allocation. However, as of March 14, 2014, the implementation plan had not been released.

Chairman Brooks, Ranking Member Payne, and members of the subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have.

GAO Contacts and Staff Acknowledgments

If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other contributors include Edward George, Kathryn Godfrey, Eric Hauswirth, Susanna Kuebler, and Linda Miller. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DHS's BioWatch program aims to detect the presence of biological agents considered to be at a high risk for weaponized attack in major U.S. cities. Initially, development of a next-generation technology (Gen-3) was led by DHS S&T, with the goal of improving upon the currently deployed technology (Gen-2). Gen-3 would have potentially enabled collection and analysis of air samples in less than 6 hours, unlike Gen-2, which can take up to 36 hours to detect and confirm the presence of biological pathogens. Since fiscal year 2007, OHA has been responsible for overseeing the acquisition of this technology. GAO has published a series of reports on biosurveillance efforts, including a report on DHS's Gen-3 acquisition. In April 2014, DHS cancelled the Gen-3 acquisition and plans to move efforts to develop an affordable automated aerosol biodetection capability, or other enhancements to the BioWatch system, to DHS S&T. This statement addresses (1) observations from GAO's prior work on the acquisition processes for Gen-3, and the current status of the program; (2) observations from GAO's prior work related to DHS S&T and the impact it could have on the BioWatch program; and (3) future considerations for the currently deployed Gen-2 system. This testimony is based on previous GAO reports issued from 2010 through 2014 related to biosurveillance and research and development, and selected updates obtained from January to June 2014. For these updates, GAO reviewed studies and documents and interviewed officials from DHS and the national labs, which have performed studies for DHS. In September 2012, GAO reported that the Department of Homeland Security (DHS) approved the Office of Health Affairs (OHA) acquisition of a next-generation biosurveillance technology (Gen-3) in October 2009 without fully following its acquisition processes.
For example, the analysis of alternatives (AoA) prepared for the Gen-3 acquisition did not fully explore costs or consider benefits and risk information in accordance with DHS's Acquisition Life-cycle Framework. To help ensure DHS based its acquisition decisions on reliable performance, cost, and schedule information, GAO recommended that before continuing the Gen-3 acquisition, DHS reevaluate the mission need and alternatives. DHS concurred with the recommendation and in 2012 decided to reassess mission needs and conduct a more robust AoA. Following the issuance of the AoA in December 2013, DHS decided in April 2014 to cancel the Gen-3 acquisition and move the technology development back to the Science and Technology Directorate (S&T). According to DHS's acquisition decision memorandum, the AoA did not confirm an overwhelming benefit to justify the cost of a full technology switch to Gen-3. Moreover, DHS officials said the decision to cancel the Gen-3 acquisition was a cost-effectiveness measure because the system was going to be too costly to develop and maintain in its current form. GAO's prior work on DHS research and development (R&D) highlights challenges DHS may face in shifting efforts back to S&T and acquiring another biodetection technology. In September 2012, GAO reported that while S&T had dozens of technology transition agreements with DHS components, none of these had yet resulted in a technology developed by S&T being used by a component. At the same time, other DHS component officials GAO interviewed did not view S&T's coordination practices positively. GAO recommended that DHS develop and implement policies and guidance for defining and overseeing R&D at the department that include a well-understood definition of R&D and provide reasonable assurance that reliable accounting and reporting of R&D resources and activities for internal and external use are achieved. S&T agreed with GAO's recommendations, and efforts to address them are ongoing.
Addressing these coordination challenges could help ensure that S&T's technology development efforts meet the operational needs of OHA. Cancellation of the Gen-3 acquisition also raises potential challenges that the currently deployed Gen-2 system could face going forward. According to DHS officials, DHS will continue to rely on its Gen-2 system as an early indicator of an aerosolized biological attack. However, in 2011, the National Academy of Sciences raised questions about the effectiveness of the currently deployed Gen-2 system. While Gen-2 has been used in the field for over a decade, the National Academy of Sciences reported that information about the technical capabilities of the system, including the limits of detection, is limited. In April 2014, DHS officials also indicated that they will soon need to replace laboratory equipment of the currently deployed Gen-2 system and readjust life-cycle costs, since there will be no Gen-3 technology to replace it.
Background

IRS envisions a future in which its tax processing environment will be virtually paper-free and taxpayer information will be readily available to IRS employees to update taxpayer accounts and respond to taxpayer inquiries. To accomplish this, IRS embarked on an ambitious systems modernization program, called Tax Systems Modernization (TSM). In 1995, we identified serious management and technical weaknesses in TSM that jeopardized its successful completion, made more than a dozen recommendations to fix the problems, and designated TSM as a high-risk information technology initiative in our biennial report series on high-risk federal programs. We again designated TSM as high-risk in our 1997 report series. To correct modernization weaknesses, we recommended in our 1995 report, among other things, that the Commissioner of Internal Revenue ensure that IRS (1) implements disciplined processes for requirements management, investment decision management, and system development management, and (2) completes an integrated system architecture, including data and security subarchitectures. IRS agreed with all of our recommendations. In June 1996, we reported that while IRS had initiated a number of actions to address our recommendations, many of these actions were incomplete and none, either individually or collectively, responded fully to any of our recommendations. Accordingly, in the conference report accompanying the fiscal year 1997 appropriations act (P.L. 104-208, Sept. 30, 1996), the Congress took several actions, including directing Treasury to develop a blueprint to define, direct, and control future modernization efforts. On May 15, 1997, Treasury submitted its modernization blueprint to the Congress.
The blueprint consisted of four documents: (1) a systems life cycle (SLC) overview that provides a high-level framework for defining a disciplined set of processes for managing the modernization, (2) about 3,600 broad business requirements, (3) high-level functional and technical architectures that generally describe the target systems environment, and (4) a general sequencing plan for transitioning from IRS’ current to its target systems environment.

IRS’ SLC Overview Is Conceptually Consistent With Best Practices but Lacks Adequate Process and Product Definition

IRS’ SLC overview is consistent with general approaches used by successful private and public sector organizations for managing large information technology investments. The macro-level practices described in the overview provide the framework for planning, controlling, developing, and deploying information systems based on defined activities, events, milestones, reviews, and products. IRS’ SLC overview, which is summarized in table 1, consists of phases, processes, and products. The phases, listed as columns in table 1, are:

Requirements Management. This phase addresses the questions of what is needed and how to satisfy the need(s). It includes (1) identification and definition of information technology needs; (2) conduct of technical analyses (e.g., cost estimates, architectural impact assessments) for each defined need; (3) development of individual business cases (i.e., investment justifications) that include architectural impact and cost/benefit results; and (4) prioritization of competing business cases by type (new development, maintenance, and research and development).

Investment Decision Management. This phase includes activities and documentation to determine how much should be spent and what should be developed and deployed. During this phase, business cases are rank ordered by type on an agencywide basis, investment decisions are made (i.e., business cases are approved and funded on the basis of investment costs and benefits), and investment decisions are monitored over time to determine actual costs incurred and benefits realized.

System Development/Operations Management. This phase defines, sequences, and documents activities necessary to develop, deploy, operate, and maintain systems. It consists of (1) research and development, which includes prototype development and evaluation; (2) engineering, which includes system requirements analysis, systems design, release definition, release requirements analysis, and release system design; (3) design and development, which includes configuration item requirements analysis, preliminary design, detailed design, code and unit testing, and integration and testing; (4) integration, test, and deployment, which includes release integration and testing, release system acceptance testing, system piloting, and system rollout; and (5) maintenance, which includes release design, code and unit test, and integration and test.

Management Control and Oversight. This phase spans each of the three aforementioned phases. It includes change control management (i.e., determining what to change and when), configuration management (i.e., capturing and maintaining records of the changes), performance management (i.e., measuring progress against baselines), organizational management (i.e., determining who is responsible for what), and audit and evaluation process management (i.e., determining whether SLC processes are effective and being followed).

Associated with each phase, and shown as rows in table 1, are (1) detailed process definitions, which describe the functions that are performed and how they are performed, (2) key actions that need to be taken to implement the processes, and (3) key products that are prepared as a result of the processes’ execution.
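The phase-to-product mapping in table 1 lends itself to a simple machine-checkable representation. The following is a hypothetical sketch (the field names and product lists are illustrative, not drawn from IRS’ blueprint) of how an organization might record each SLC phase and flag those that still lack detailed process definitions:

```python
# Hypothetical sketch: representing the SLC phase-to-product mapping from
# table 1 as data, then flagging phases that still lack detailed process
# definitions. All names here are illustrative, not from IRS' blueprint.

slc_phases = {
    "Requirements Management": {
        "process_defined": False,
        "key_products": ["business cases", "prioritized requirements"],
    },
    "Investment Decision Management": {
        "process_defined": False,
        "key_products": ["agencywide ranked investment portfolio"],
    },
    "System Development/Operations Management": {
        "process_defined": False,
        "key_products": ["release designs", "test results"],
    },
    "Management Control and Oversight": {
        "process_defined": False,
        "key_products": ["configuration records", "performance baselines"],
    },
}

def incomplete_phases(phases):
    """Return names of phases lacking a detailed process definition."""
    return [name for name, info in phases.items() if not info["process_defined"]]

# As discussed below, none of the four phases yet has a detailed process
# definition, so every phase is flagged.
print(incomplete_phases(slc_phases))
```

A real process audit would track many more attributes per phase (key actions, handbooks, training status), but even this skeletal form makes gaps visible at a glance.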
IRS’ SLC Is Incomplete

While the SLC overview provides a framework that is consistent with public and private sector best practices, IRS’ SLC is incomplete and does not yet provide the specificity needed for building or acquiring systems. As IRS recognizes, its SLC does not yet specify how technology investment activities will be performed. For example, it does not specify (1) how work processes will be reengineered; (2) how business requirements will be specified; (3) how engineering solutions will be developed; (4) how business cases for technology investments will be formulated and evaluated; (5) how systems conforming to architectural standards will be developed; (6) how operational systems conforming to architectural standards will be maintained; and (7) how technology investments will be evaluated using performance metrics. The SLC shortcomings fall into three categories (see table 2). First, IRS does not yet have detailed process definitions for any of the SLC phases. For instance, IRS has not clearly defined how requirements will be formulated and how they will be assessed and prioritized; how projects will be controlled and evaluated; what data will be required and what evaluation criteria will be used; how system designs will be assessed and how system developments and acquisitions will be managed; and how architectural compliance will be determined and enforced. Without these process definitions, IRS cannot validate that the blueprint products published as of May 15, 1997, are correct and consistent. Moreover, IRS cannot adequately develop the level of detail and precision that, as discussed in the following section of this report, these blueprint products currently lack. Second, because it has not yet defined detailed SLC processes, IRS has not yet implemented its SLC. For example, handbooks have not been prepared and training has not been conducted for any of the SLC phases.
Further, organizational roles and authorities have not been adequately specified, making it unclear who does what in each SLC process and phase. For instance, as described in the modernization blueprint, the IRS chiefs will be responsible for reengineering business processes and developing business requirements, but it is unclear who has the responsibility and the authority to ensure that the reengineered processes and specified requirements are prioritized and optimized agencywide. Similarly, although the chief information officer (CIO) is responsible for developing architecturally compliant engineering solutions to satisfy business requirements, the CIO does not control all system development resources. Moreover, as discussed later in more detail, neither the CIO nor any other organizational entity has sufficient authority to implement SLC processes and enforce architectural compliance agencywide. Third, many SLC products have not been defined or developed. For example, IRS does not have an agencywide, rank-ordered portfolio of investment options. Moreover, some investments are not supported by business cases (i.e., business need documents; cost and schedule estimates; analyses of the organizational and technical impact of the proposed solution(s) on the phases and releases of the sequencing plan and on the architecture; and analyses of the proposed solution(s)’ expected return on investment).

Blueprint Products Are a Good Start but Are Incomplete or Insufficient

While the products constituting IRS’ May 15, 1997, modernization blueprint represent a good first step and a foundation upon which to build, none are complete. In particular, the business requirements are not precise enough and the architecture is not sufficiently complete to build or acquire systems. Additionally, the sequencing plan does not provide sufficient detail to understand the transition to the target systems environment.
Business Requirements Are Insufficiently Precise

IRS has divided its tax administration workflow into the following six core functional areas:

Submissions processing, which is the primary source of data entering the workflow. It provides for the collection and correction of data extracted from paper and electronic tax returns, payments, and information returns as well as forwarding of these data to the corporate processing function for storage and access control.

Corporate data processing, which includes receipt of data from the submissions processing function and storage of data in enterprisewide databases. It provides for controlled access to a single, authoritative source of corporate data in support of the customer service, compliance, and financial reporting workflow functions.

Customer service, which provides the primary, non-face-to-face interface to taxpayers primarily through correspondence and telephone contacts.

Compliance, which provides the primary, face-to-face interface to taxpayers in resolving collection, exam, and other compliance cases.

Financial reporting, which is integrated with each of the workflow functions that update financial data and provides traceability to the source of all financial updates and summary financial reporting.

Information system infrastructure, which supports the other five functional areas by providing communication networks, computing platforms, workstations, and development facilities.

For each of these core functional areas, the IRS business users developed “guiding principles” that were intended to provide a framework for developing modernization business requirements.
For example, the submission processing guiding principles state that IRS will (1) receive submissions from taxpayers and third parties on approved media, (2) perform up-front manual processing for nonelectronic submissions, (3) define interface protocols for electronic submissions, (4) transform nonelectronic submissions into electronic representations, and (5) perfect submissions. Similarly, some of the customer service guiding principles state that IRS will (1) provide non-face-to-face communication with taxpayers via various communication media, (2) provide access to taxpayer account and non-account data without geographic restriction, and (3) accept taxpayer data via various communication media. Using these guiding principles, IRS developed about 3,600 business requirements that it believes represent IRS’ mission needs. To IRS’ credit, some of these requirements provide for significant improvements in IRS’ financial management capabilities. For example, they include a general ledger that is transaction-based and conforms to federal requirements; automated capture of nonfinancial performance information, such as the number of transactions, calls, paper returns filed, and electronic returns filed; improvements in management information for receivables; the ability to trace significant transactions and documents; and prompt and accurate recording of seized asset transactions. However, some of the requirements are insufficiently precise to be useful in building or acquiring systems. For instance, our audits of IRS’ financial statements have reported the need for IRS to correct the serious problems that caused us to designate IRS’ accounts receivable as a high-risk area. To help address these weaknesses, we have recommended that IRS maintain a subsidiary ledger or similar mechanism to routinely track the status of and to assist in managing accounts receivable.
However, the business requirements do not describe the subsidiary accounts receivable records in sufficient detail to show that IRS plans to implement such a mechanism. For example, they do not specify whether the subsidiary records will provide for tracking accounts receivable on a receivable-by-receivable basis and include such information as (1) the age of the receivable, (2) the status of any payments received, (3) the accrual of any interest and penalties, (4) the status of the taxpayer’s ability to pay any remaining balances, and (5) the nature of the receivable (i.e., a balance due, created by examination). In another case, a business requirement under the infrastructure systems core functional area calls for “supporting all five levels” of the Software Engineering Institute’s (SEI) software development Capability Maturity Model (CMM). The model’s five levels of maturity provide users with a four-step, sequential approach for incremental process improvement. In 1995 and 1996, we reported that IRS was a CMM level 1 organization, SEI’s lowest level, meaning that its software development processes were ad hoc and sometimes chaotic. Since a substantial process effort is required to move from CMM level 1 to level 2, and from there to each higher level, IRS’ stated business requirement of “supporting all five levels” is too vague and imprecise to be meaningful. Rather than calling for support of all five CMM levels, IRS needs to require incremental attainment of CMM levels, as SEI advocates, according to a specified schedule, such as level 2 within 2 years and level 3 within 4 years. Without precise goals, IRS cannot implement an effective process improvement program. Also, some of IRS’ guiding principles are not reflected by specific business requirements. 
For example, the guiding principles that (1) 45 percent of all taxpayer inquiries be resolved via automated systems and (2) 95 percent of all inquiries be resolved in the initial contact could not be reconciled with customer service business requirements specifying the volume of calls that will be resolved. In addition, some key terms in the principles are not well defined. For example, the terms “initial contact” and “resolution” are not defined. IRS is currently reassessing its guiding principles and its business requirements.

Architecture Is Insufficiently Detailed

IRS’ architecture consists of two components: a functional architecture and a technical architecture. The functional architecture defines in business terms the activities/subfunctions that support the six core functional areas discussed earlier, the relationships among these activities/subfunctions, and the data required to support these activities/subfunctions. The technical architecture defines subsystems, configuration items, data allocations, interfaces, and common services that collectively provide a physical view of the target systems environment. Consistent with best practices in both industry and government, the architecture provides traceability among the business requirements, functions and subfunctions, and subsystems. That is, each of the blueprint’s approximately 3,600 business requirements can be mapped to general points in the architecture where they are addressed. Traceability is critical to ensuring that systems meet users’ needs. The architecture has other positive attributes.
For example, it specifies a data subarchitecture consisting of five primary databases and 18 supporting databases characterized as (1) mission-critical, such as a financial accounting database required to support revenue accounting, tracking, and reporting; (2) submissions management support, such as a state return database containing the data of electronically filed state tax returns; (3) security requirements support, such as a security audit database containing data used to track and audit behaviors observed by technical mechanisms that secure sensitive data; and (4) systems management and systems development support, such as a configuration management database containing information used to manage the development and operational configuration of the modernization systems. Also, the architecture includes a security subarchitecture that addresses data privacy and security. It articulates the need to provide user identification and authentication, to build security profiles specifying transactions and patterns of transactions for which a user is authorized, and to limit the transactions that users can perform to those included in their profiles. Information transmitted over data communications networks like the Internet would be protected through the use of encryption. Security-relevant audit data would also be collected, aggregated, and analyzed. Despite the architecture’s positive attributes, it does not yet include implementation details and therefore is insufficiently complete to use in building or acquiring systems. For example, whereas the intention to ensure the confidentiality of taxpayer data is clear, the method to be used is unspecified; and whereas the intention to use data encryption is clear, encryption products and approaches are unspecified. 
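The security subarchitecture's profile concept, limiting each user to the transactions listed in his or her profile and collecting audit data on everything else, can be sketched in a few lines. This is a hypothetical illustration of the general technique, not IRS’ actual design; all user and transaction names are invented:

```python
# Hypothetical sketch of the profile-based authorization described in the
# security subarchitecture: each user's profile enumerates the transactions
# the user may perform; anything outside the profile is denied and logged
# for security audit. Names are illustrative, not IRS' actual design.

audit_log = []  # security-relevant audit data collected for later analysis

profiles = {
    "clerk01": {"view_account", "post_payment"},
    "examiner02": {"view_account", "adjust_assessment"},
}

def authorize(user, transaction):
    """Allow a transaction only if it appears in the user's profile."""
    allowed = transaction in profiles.get(user, set())
    if not allowed:
        audit_log.append((user, transaction))  # record the denied attempt
    return allowed

print(authorize("clerk01", "post_payment"))       # True: in profile
print(authorize("clerk01", "adjust_assessment"))  # False: denied and audited
```

A production design would precede this check with user identification and authentication and would protect transmitted data with encryption, as the subarchitecture also calls for.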
Additionally, the architecture does not sufficiently define the data administration function, and business requirements have not been allocated to specific configuration items (i.e., actual hardware or software components). As a result, it is not yet known which of the system components will satisfy which of the requirements, or how they will do so.

Sequencing Plan Is Not Sufficiently Complete

To aid in implementing its target architecture, IRS developed a sequencing plan for transitioning from its current to its target systems environment. To do so, IRS first analyzed existing system platforms, applications, databases, and infrastructures to identify system duplications and gaps as well as systems with “the best functionality” that should be preserved. According to IRS, it then applied three criteria to define a cost-effective, risk-mitigated sequence within which it would introduce new or modified systems and retire existing systems. The criteria are:

focus on systems to support IRS business priorities;

limit the need for complex system interfaces, large-scale data conversions, and continuous disruption of business operations; and

minimize the need to develop interim systems and interfaces and make centralization of duplicative, stand-alone applications and systems a priority.

The result was a sequencing plan that divides the transition into six incremental phases within which software, hardware, and supporting infrastructure components will be developed, acquired, and deployed. Each phase in turn is segmented into multiple releases, which consist of actual software/hardware upgrades, improvements/enhancements, and replacements as well as existing system capability retirements or deactivations. According to IRS, the order of the phases is based on criteria such as IRS’ business priorities and a migration plan. (See table 3 for a summary of the six phases.)
While the sequencing plan describes IRS’ general intentions for migrating from its current to its target systems environment, it does not provide the fundamental and critical detail needed to fully understand or execute this transition. For example, it does not specify (1) the schedule and cost estimates for any of the phases or releases, (2) the projects that will constitute the phases or releases, (3) the projects’ cost and schedule estimates, and (4) the projects’ interdependencies. Additionally, the sequencing plan does not describe precisely what is intended to occur as subfunctions evolve through the various phases and releases. For instance, a subfunction called Case Analysis and Resolution is shown as new in phase 1/release 1. It is then shown as changed in 10 subsequent releases, but none of the planned changes are explained. Further, the sequencing plan indicates that several legacy systems, such as the Electronic Audit Research Log (an automated tool to monitor and detect browsing) and the Integrated Data Retrieval System (the primary system through which IRS employees access taxpayer accounts), will be replaced. However, the plan does not identify what will replace them or how the replacement will be accomplished.

SLC Products Have Not Been Validated Using Defined, Implemented SLC Processes

The processes to validate SLC products, including the business requirements, the architecture, and the sequencing plan, have not yet been defined in detail, nor have they been implemented. As a result, none of the products submitted as part of the IRS blueprint on May 15, 1997, have been validated using defined, implemented SLC processes.

Agencywide Responsibility and Authority for Implementing and Enforcing the Blueprint Has Not Been Established

Information management reforms enacted in the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 direct the heads of major agencies to appoint CIOs.
The legislation assigns a wide range of duties and responsibilities to CIOs, including (1) helping to establish sound information technology investment management processes, (2) implementing an integrated agencywide technology architecture, and (3) strengthening the agency’s capabilities to effectively manage information resources and develop needed systems. Additionally, this legislation, Office of Management and Budget guidance, and our research into how leading public and private sector organizations successfully manage information technology define common tenets for the CIO position. Among these tenets is the need for the agencies to support the CIO position with an effective CIO organization and management framework for implementing agencywide information technology initiatives. The legislation establishes the CIO position at executive branch agencies and sets forth special requirements for CIOs at the 24 agencies where the Chief Financial Officers Act of 1990, as amended, established chief financial officer positions. In addition, we have supported the establishment of a CIO structure at the agency subcomponent and bureau levels, such as IRS. Such a management structure is particularly important in situations where the departmental subcomponents, like IRS, have large information technology budgets or are engaged in major modernization efforts that require the substantial attention and oversight of a CIO. In the Conference Report on the Clinger-Cohen Act, the conferees recognized that agencies may wish to establish CIOs for major subcomponents and bureaus. These subcomponent-level CIOs should have responsibilities, authority, and management structures that mirror those of the departmental CIO. In 1995, we reported that IRS had not established an effective organizational structure to manage systems modernization agencywide. 
Specifically, IRS’ modernization management structure was fragmented and did not provide for agencywide control over all new modernization systems and all upgrades and replacements of operational systems. As a result, we recommended that the Commissioner assign the Associate Commissioner/Modernization Executive management and control responsibility for all systems development activities, including those of IRS’ research and development organization. Since that time, IRS has appointed a CIO and established an Investment Review Board, and Treasury has taken a more active role in overseeing the modernization. However, organizational control over IRS’ huge information technology investment portfolio continues to be a problem. The CIO does not control all information systems activity and thus cannot effectively enforce compliance with established system process and product standards. In particular, the CIO does not have budgetary and organizational authority over all IRS systems development, research and development, and maintenance activities. Congress Has Limited IRS Modernization Spending Until Blueprint Is Completed In June 1996, we reported that while IRS had initiated a number of actions to respond to our recommendations for correcting pervasive management and technical weaknesses in TSM, many of these actions were incomplete and none, either individually or collectively, responded fully to any of our recommendations. 
Accordingly, we suggested that the Congress consider limiting TSM spending to only cost effective efforts that (1) support ongoing operations and maintenance; (2) support ongoing IRS efforts to instill requisite SLC discipline, including completing and enforcing the architecture, institutionalizing disciplined software development and acquisition processes, and improving its information technology investment management; (3) are small, represent low technical risk, and can be delivered in a relatively short time frame; or (4) involve deploying already developed systems, only if these systems have been fully tested, are not premature given the lack of a completed architecture, and produce a proven, verifiable business value. The act (P.L. 104-208, Sept. 30, 1996) and conference report providing IRS’ fiscal year 1997 appropriations limited IRS’ information technology spending to efforts that were consistent with these categories. In September 1997, we briefed IRS’ appropriations and authorizing committees on the results of our assessment of IRS’ May 15, 1997, blueprint. In the conference report accompanying the IRS fiscal year 1998 appropriations act, the conferees agreed with our findings. Accordingly, they limited IRS spending for fiscal year 1998 to efforts that were consistent with the aforementioned spending categories. Additionally, IRS’ fiscal year 1998 appropriations act (P.L. 105-61, Oct. 10, 1997) states that fiscal year 1998 or prior year “Information Systems” appropriations are not available to award or otherwise initiate a prime contract to implement IRS’ modernization blueprint. The act also states that fiscal year 1998 “Information Technology Investments” funds are not available for obligation until IRS and Treasury submit to the Congress a plan for expenditure that, among other things, implements the blueprint. The conference report on the act adds that details of the blueprint need to be completed before IRS commits to build or acquire new systems. 
Conclusions IRS’ May 15, 1997, modernization blueprint provides the foundation for specifying IRS’ future systems environment and a disciplined approach for delivering this environment. However, none of the blueprint components are detailed or complete. As a result, the components do not yet provide an adequate basis for effectively and efficiently developing or acquiring systems. In addition, the business requirements, architecture, and sequencing plan have not been validated using defined and implemented SLC processes. As a result, IRS cannot assure itself that these SLC products constitute the correct course of action for the agency to follow in modernizing its information systems. IRS’ CIO recognizes these shortcomings and has committed to completing, implementing, and enforcing all SLC processes and completing, validating, and enforcing compliance with all SLC products before acquiring or developing systems. However, the CIO does not have the authority needed to enforce the modernization blueprint (once it is completed) agencywide. Until such authority is assigned, it is uncertain that even a completed blueprint could be used to overcome existing system incompatibilities and correct inefficient and ineffective IRS operations. 
Recommendations To ensure that IRS develops a complete blueprint for modernizing its information systems, we recommend that the Commissioner of Internal Revenue require the IRS CIO to: complete the definition and implementation of all SLC processes, including processes for ensuring disciplined software development and acquisition and for validating SLC products; for each phase of the modernization, define business requirements and complete the architecture with sufficient detail and precision to build or acquire systems; formulate a sequencing plan that specifies (1) phase and release cost and schedule estimates, (2) projects that constitute the phases and releases, (3) project cost and schedule estimates, (4) project interdependencies, (5) the evolution of architectural subfunctions, and (6) the projects that replace legacy systems that are eliminated; and validate the business requirements, architecture, and sequencing plan using the completed and implemented SLC processes. To ensure that the modernization blueprint is implemented and enforced agencywide, we recommend that the Commissioner give the CIO: responsibility for developing, implementing, and enforcing SLC processes and products across IRS and requisite budgetary and organizational authority over all IRS systems development, research and development, and maintenance activities. 
Further, until mature SLC processes for developing and acquiring systems have been implemented across IRS, we recommend that the Commissioner limit requests for future appropriations for information technology to only cost-effective efforts that support ongoing operations and maintenance, including all efforts to make IRS systems Year 2000 compliant; support ongoing IRS efforts to instill requisite SLC discipline, including completing and enforcing the architecture, institutionalizing disciplined software development and acquisition processes, and improving its information technology investment management; are small, represent low technical risk, and can be delivered in a relatively short time frame; or involve deploying already developed systems, only if these systems have been fully tested, are not premature given the lack of a completed architecture, and produce a proven, verifiable business value. Agency Comments In its comments, IRS characterized this report as complete, thoughtful, and balanced. IRS also agreed that (1) the blueprint is not yet complete and does not provide sufficient detail and precision for building or acquiring new systems and (2) the SLC needs to be completed and implemented as a precondition to completing and validating the blueprint as well as proceeding with the modernization. Additionally, IRS agreed with our concern about assignment of agency responsibility and authority for managing information technology and committed itself to addressing each of these findings in the coming months. IRS added that the report provided important insight and perspective in shaping these plans and moving IRS forward in a responsive and responsible manner. 
We are sending copies of this report to the Ranking Minority Members of the Subcommittee on Treasury and General Government, Senate Committee on Appropriations and Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations; the Chairmen and the Ranking Minority Members of the Subcommittee on Taxation and IRS Oversight, Senate Committee on Finance, the Subcommittee on Oversight, House Committee on Ways and Means, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the Senate and House Committees on the Budget. We are also sending copies to the Secretary of the Treasury, the Commissioner of Internal Revenue Service, and the Director of the Office of Management and Budget. Copies will be available to others upon request. This work was performed under the direction of Dr. Rona B. Stillman, Chief Scientist for Computers and Telecommunications, who can be reached at (202) 512-6412. Other contributors to this report are listed in appendix III. Objectives, Scope, and Methodology Pursuant to congressional direction in the conference report accompanying the fiscal year 1997 Omnibus Consolidated Appropriations Act (P.L. 104-208), on May 15, 1997, IRS issued a blueprint for defining, directing, and controlling its modernization. We assessed the blueprint’s four principal components (SLC, business requirements, architecture, and sequencing plan) to determine whether the blueprint provided the foundation needed to develop or acquire modernized systems. 
Our specific objectives were to determine whether IRS’ SLC was complete and consistent with best industry and government practices; the business requirements were sufficiently precise and the functional and technical architectures were sufficiently complete to build or acquire systems and the sequencing plan was sufficiently complete to understand the transition to the target systems environment; the business requirements, functional and technical architectures, and sequencing plan had been validated using defined and implemented SLC processes; and the information technology management structure was conducive to effective implementation and enforcement of the blueprint. To accomplish our objectives, we interviewed senior IRS officials responsible for developing the modernization blueprint to determine how the blueprint was derived, including the processes followed, the participants involved, and the bases for and analyses supporting decisions made in developing it. We then reviewed and analyzed each component of the blueprint and its related documentation for completeness and sufficiency. With respect to IRS’ SLC, we analyzed the overview document in relation to generally accepted government and industry standards for life cycle management of information technology investments. In the case of business requirements, we focused on two functional areas—customer service and financial reporting—because of their criticality to IRS’ tax administration mission and because we have completed a significant amount of audit work in these areas. For customer service, we determined whether the business requirements were consistent with IRS’ stated guiding principles. For financial reporting, we examined whether IRS’ requirements addressed Federal Accounting Standards Advisory Board standards as well as concerns that we have raised about IRS’ financial management capabilities in prior GAO reports. 
We also reviewed business requirements in IRS’ other core functional areas to determine whether they were generally clear and unambiguous. Concerning the architecture and sequencing plan, we compared both documents to published architectural guidance to determine their completeness and specificity. For each of the blueprint components, we also questioned senior IRS officials about the documents’ completeness and specificity, as well as IRS’ plans for evolving, validating, implementing, and enforcing them. With respect to IRS’ information technology management structure, we interviewed IRS officials about assignment of organizational and budgetary authority over IRS information technology investments as well as other formal mechanisms in place or planned to enforce IRS information technology investment (research and development, new systems development or acquisition, and system maintenance) conformance to the modernization blueprint. In August and September 1997, we briefed senior IRS and Treasury officials, including the Acting Commissioner of Internal Revenue, the IRS CIO, the Treasury CIO, and the Treasury Acting Chief Financial Officer, on our assessment results, including our conclusions and recommendations. We performed our work at IRS headquarters in Washington, D.C., between May 1997 and September 1997 in accordance with generally accepted government auditing standards. Comments From the Internal Revenue Service Major Contributors to This Report Accounting and Information Management Division, Washington, D.C. General Government Division, Washington, D.C. Atlanta Field Office Seattle Field Office Karlin Richardson, Senior Evaluator Elizabeth Naftchi, Auditor The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the modernization blueprint that the Internal Revenue Service (IRS) prepared pursuant to the conference report accompanying the fiscal year 1997 Omnibus Consolidated Appropriations Act. GAO noted that: (1) IRS' May 15, 1997, modernization blueprint is a good first step and provides a solid foundation from which to determine precise business requirements, a complete target architecture, and a disciplined set of processes and detailed plans for validating, implementing, and enforcing the architecture; (2) similarly, the blueprint's business requirements specify needed improvements in such areas as financial management, and the architecture and sequencing plan include several positive attributes, including traceability between business requirements and systems and high-level descriptions of data and security subarchitectures; (3) however, the blueprint is not yet complete and does not provide sufficient detail and precision for building or acquiring new systems; (4) in particular, IRS' systems life cycle (SLC) does not define in sufficient detail any of the SLC processes needed to manage technology investments; (5) as a result, IRS does not yet know: (a) how systems will actually be designed, developed, tested, or acquired; (b) how compliance with standards will be assessed and ensured; (c) how progress on projects will be determined; or (d) how key SLC products will be validated; (6) additionally, IRS plans for each of the three remaining blueprint components--business requirements, architecture, and sequencing plan--to include four levels of progressively greater detail; (7) as of May 15, 1997, IRS had completed the first two levels; (8) as a result, information that is critical to effective and efficient systems modernization is not yet known, essential decisions have not yet been made, and needed actions have not yet been taken; (9) IRS' Chief Information Officer (CIO) has acknowledged that essential elements are 
missing from the May 15, 1997, blueprint, and stated that he has begun addressing these voids; (10) however, even though IRS has given the CIO increased responsibility and accountability for managing and controlling systems development, acquisition, and maintenance, neither the CIO nor any other IRS organizational entity has budgetary and organizational authority over all IRS systems activities; and (11) as a result, it is unlikely that IRS will be able to institutionally implement and enforce its modernization blueprint once it is completed.
Background Health-related information from medical records and claims is used throughout the health care industry for the analysis of health-related services and payment. To facilitate the processing and analysis of these data, alphabetic or numeric codes are assigned to identify individual health-related services. Coding professionals, who receive degrees and certifications in health information management, translate the unstandardized narrative information reported by providers on medical records into the appropriate codes. These codes then assist members of the health care industry in identifying health-related services on medical claims for payment and analyzing service utilization, outcomes, and cost. A procedural code set should include codes that accurately define similar medical procedures and minimize the number of broadly defined codes that group procedures that are seemingly similar, but in fact heterogeneous. The challenge in coding medical procedures is finding a level of specificity that allows codes to accurately represent the procedure being performed, without being so broad or so specific that the code set becomes more complex than necessary to administer or that the data yielded are too broad or too specific to be effectively used in processing claims or conducting research. In 1993, NCVHS suggested that a procedural code set should be easy to use and facilitate data analysis. To accomplish these goals, NCVHS recommended criteria for a procedural code set to HHS and the health care industry. 
Specifically, NCVHS recommended that a procedural code set be designed so that: all aspects of a medical procedure are described in detail, including the body system affected (e.g., cardiovascular, respiratory), the approach that was used in completing the procedure (e.g., open surgery, laparoscopy), the technology that was used to complete the procedure (e.g., laparoscope, endoscope), and the device that was implanted, if any; the code set allows for the addition of codes to reflect procedures introduced through new technology; codes can be collapsed into increasingly larger broad categories of related procedures to facilitate aggregated data analysis; and definitions are standardized. Adoption of Standard Code Sets under HIPAA In 1996, the administrative simplification provisions of HIPAA required the Secretary of HHS to adopt standard code sets in an 18-month timeframe. Through these standard code sets, HIPAA’s goals were to (1) simplify administrative functions for Medicare, Medicaid, and other federal and private health programs, (2) improve the efficiency and effectiveness of the health care industry in general, and (3) enable the efficient electronic transmission of health-related information between members of the health care industry such as providers and payers. Under HIPAA, the Secretary had the authority to select existing code sets developed by either private or public entities as the national standard code sets. In adopting standard code sets, HIPAA directed HHS to seek insight from various members of the health care industry. With input from these industry experts, HHS interdepartmental “HIPAA implementation teams” defined a set of criteria to consider in selecting HIPAA standard code sets. 
In summary, HHS’s HIPAA implementation teams recommended that standard code sets should: improve the efficiency and effectiveness of the health care industry; meet the needs of the health care industry; be supported by accredited standards-setting organizations or other public and private organizations that will maintain the standard code sets; have timely development, testing, implementation, and updating processes; have low development and implementation costs relative to the benefits; keep data collection and paperwork burdens on members of the health care industry as low as possible; be technologically independent of the computer programs used in health care; be consistent with other standard code sets under HIPAA; be precise and unambiguous; and incorporate flexibility to adapt more easily to changes in the health care industry, such as incorporating new codes for new health-related services and information technology. On May 7, 1998, HHS proposed two standard code sets for reporting medical procedures under HIPAA: ICD-9-CM Vol. 3 for inpatient hospital procedures and CPT for all physician services and other medical services, including outpatient hospital procedures. Members of the health care industry who commented on this proposed rule generally supported the adoption of these procedural code sets as standards on the grounds that they were already in widespread use throughout the health care industry. The final rule was published August 17, 2000 (65 Fed. Reg. 50,312), and these code sets became the procedural coding standards effective October 16, 2000. Recent legislation extended the deadline for complying with the HIPAA standard code set requirements to October 16, 2003, for those who submit a plan of how they will come into compliance by that date. 
In addition to the standard code sets HHS adopted for reporting medical procedures, HHS adopted additional code sets to standardize the reporting of diagnoses and other health- related services, such as medical devices, supplies and equipment, home health care services, prescription drugs, and dental services (see app. I). ICD-9-CM Vol. 3 and Its Maintenance ICD-9-CM Vol. 3, the standard code set named for use in reporting inpatient hospital procedures, is maintained in the public domain by CMS. CMS revises ICD-9-CM Vol. 3 through the ICD-9-CM Coordination and Maintenance Committee meetings. Members of the health care industry attend these biannual public meetings at their discretion and typically include representatives from the AHA, AHIMA, and AMA, among others. Discussions at these meetings include proposed coding changes, such as the addition of codes to reflect new and distinct medical procedures— including those resulting from technological advancements—that may not be accurately represented by existing codes. CMS makes final decisions on whether a new medical procedure warrants a new code based on evidence and recommendations presented by stakeholders at the committee meetings. According to CMS representatives, it takes 6 to 18 months to consider new procedural coding requests, designate new codes to represent the new procedures, and implement the new codes. CMS implements newly approved inpatient service codes every October 1. In addition to contributing to CMS’s maintenance of ICD-9-CM Vol. 3, other organizations such as 3M Health Information Systems, AHA, AHIMA, and AMA publish and market coding textbooks, handbooks, workbooks, and software that are used by members of the health care industry. For example, AHA maintains a free information clearinghouse for members of the health care industry with questions about coding. It also coordinates with CMS, NCHS, and AHIMA to write the official guidelines on the use of ICD-9-CM Vol. 3. 
According to AHA estimates, the administrative costs for AHA to provide clearinghouse and guidance activities are about $1 million per year. AHA also publishes textbooks, handbooks, and workbooks that are used in coding curriculums and the Coding Clinic for ICD-9-CM, a quarterly, subscription-based publication that serves as the primary manual of ICD-9-CM Vol. 3 guidelines. AHA projects that, for 2001, these publications will incur about $1.7 million in costs and generate almost $2 million in revenue. CPT and Its Maintenance CPT, the code set used to report physician services and other medical services including outpatient hospital procedures, is privately maintained. AMA, which copyrights CPT, maintains the code set through its CPT Editorial Panel, which is made up predominantly of AMA-appointed physicians. The panel also includes such members as physicians nominated by CMS, the Blue Cross Blue Shield Association, AHA, and the Health Insurance Association of America. In addition, an AHIMA representative is permitted to attend the CPT Editorial Panel meetings and participate in discussions of new coding requests as a nonvoting panel member. The panel makes final decisions on requests for new procedure codes. Anyone can request a coding change to CPT and anyone who requests a coding change can present their views at the panel’s quarterly meetings and stay throughout deliberations and voting, but the panel’s meetings are closed to the general public. It takes approximately 18 months to consider new coding requests, designate new codes to represent the new procedures, and implement the new codes. Approved changes are added to the CPT by AMA and become effective every January 1. In October 2001, the AMA released its yearly update of CPT as part of an effort not just to add new codes, but also to phase in changes designed to improve the code set as a whole. 
The latest version of CPT was designed to revise code descriptors that had been problematic and had contributed to code ambiguity. For example, in some cases, AMA either added parenthetical statements to existing codes to define exactly what methods, techniques, and approaches were used in performing a procedure or, in other cases, it developed new codes to better delineate the procedures performed. In addition, AMA incorporated codes for nonphysician services such as home health care. Finally, CPT was modified to include a special category designed to expedite the adoption of codes for technically innovative procedures that may not have enough clinical evidence available to otherwise meet the approval standards of the CPT Editorial Panel. These codes will be used for data tracking purposes only and not for assigning payment. AMA reports that CPT’s administrative costs—including those costs associated with collecting licensing fees, publishing CPT literature, holding panel meetings, and paying salaries—are about $10.1 million a year. AMA estimates that its revenue from licensing fees paid by software companies (between $3 million and $4 million) and CPT publications totals about $18 million, or about 7 percent of its annual budget. According to AMA estimates, most of the revenue is generated by the sale of the CPT codebook; other related revenue sources include textbooks, manuals, newsletters, and a CPT advice hotline, which is a subscription-based service staffed by five coding professionals. Under a 1983 agreement between HHS and AMA, CMS pays no fees for its use of CPT. As part of the agreement, CMS assists the AMA in maintaining and updating the code set through its representation on the CPT Editorial Panel. ICD-9-CM Vol. 3 and CPT Were Practical Options for Standard Code Sets, Despite Some Limitations Both ICD-9-CM Vol. 3 and CPT meet almost all of the criteria for standard code sets recommended by HHS’s HIPAA implementation teams. 
In addition, these code sets each meet a criterion for procedural code sets recommended by NCVHS. Nevertheless, a consensus exists among most representatives of the health care industry, including CMS representatives, that ICD-9-CM Vol. 3 and CPT—to varying extents—do not meet some criteria for HIPAA standard code sets and procedural code sets, including adequate levels of detail to facilitate data analysis and a capacity to incorporate codes in response to new technology. In fact, HHS recognized that, in adopting ICD-9-CM Vol. 3 as a standard code set, it would need to replace it in the not-too-distant future, given its limitations. Wide Use of ICD-9-CM Vol. 3 and CPT Made Them Practical Options for Standards Given the 18-month timeframe in which HHS was required to adopt standard code sets under HIPAA, the widespread use of ICD-9-CM Vol. 3 and CPT made them the most practical options for standards at the time. In addition, both ICD-9-CM Vol. 3 and CPT meet almost all of the criteria for HIPAA standard code sets recommended by HHS’s implementation teams (see table 1). For example, most members of the health care industry currently use one, if not both, of these procedural code sets to some extent. The existing health care administrative system for these procedural code sets—including trained coding professionals, publications, training manuals, computer software, medical claims forms, and fee schedules that are already aligned to these code sets—suggests that the costs of implementing these procedural code sets as standards across all providers and payers will be much lower than the costs of implementing less widely used code sets. The maintenance processes for both ICD-9-CM Vol. 3 and CPT are well established, systematic, and operational, which should facilitate the implementation of these procedural code sets as HIPAA standards across all providers and payers. In addition, ICD-9-CM Vol. 
3 and CPT each meet a criterion for procedural code sets recommended by NCVHS. ICD-9-CM Vol. 3 meets the NCVHS criterion that a code set should contain codes that can be collapsed into increasingly broader categories of related procedures to facilitate aggregated data analysis. For example, all ICD-9-CM Vol. 3 codes beginning with “36” are classified as “operations on the heart vessels.” This sequential structure allows many distinct procedures such as open coronary angioplasty (code 3603), percutaneous angioplasty (code 3606), and intracoronary thrombolytic infusion (code 3604) to be collapsed into this broad category of similar procedures—“operations on the heart vessels”—based on the “36” code alone, facilitating aggregated data analysis. As for CPT, the maintenance process established by the AMA for updating CPT is considered by many representatives of the health care industry, including NCVHS, to maintain currency with technological advancement. ICD-9-CM Vol. 3 in Need of Replacement Despite its widespread use, most representatives of the health care industry, including CMS representatives, agree that ICD-9-CM Vol. 3, designed more than 20 years ago, is outdated and, because of its limited coding capacity, irreparable. In fact, HHS recognized that in naming ICD-9-CM Vol. 3 as a HIPAA standard, it would need to replace it in the not-too-distant future, given its limitations. ICD-9-CM Vol. 3 does not meet 2 of the 10 criteria for HIPAA standard code sets and does not meet most of the procedural code set criteria recommended by NCVHS (see table 2). First, ICD-9-CM Vol. 3 lacks the specificity needed to accurately identify many key aspects of medical procedures. Very distinct but related procedures may all be classified under one code, and variations in procedures performed or technologies used may not be identified. 
For example, in this code set, a single code exists for all multiple vessel percutaneous angioplasties (code 3605), without specification as to the number of blood vessels involved, or what type of equipment—balloon-tip catheter, laser, or stent—was used. If a stent was used, to fully represent the type of procedure performed, an additional, secondary code, code 3606, “insertion of coronary artery stent,” would also have to be reported. For payment or research purposes, to know how many vessels were involved in the procedure, or whether the stent used was self-expanding or expandable by a balloon, one would have to look to the medical record for this information, as the code would not capture this level of specificity. Without codes that accurately distinguish between the procedures performed, it is difficult to (1) identify trends in utilization and cost that may provide evidence to support the recalibration of payments, or (2) collect information on the performance outcomes of both new and existing procedures and technologies. Second, many representatives of the health care industry, including CMS representatives, agree that the four-character structure of ICD-9-CM Vol. 3 lacks the capacity to expand and the flexibility to appropriately incorporate new codes in response to new procedures and technology. Code set sections are organized by body systems such as the nervous, cardiovascular, and respiratory systems and by miscellaneous diagnostic and therapeutic procedures and services. With only 10 options available for each character (0 through 9), many of the code set sections for body systems are “full” and can no longer accommodate additional codes, requiring new procedures to be assigned their own code outside of their appropriate body system section. For example, CMS has determined that six new procedures involving cardiac resynchronization pacemakers, some of which have defibrillation capabilities, warrant the creation of their own codes. 
Generally, these procedures would be assigned codes within the pacemaker code sequence in the cardiovascular section (code sequence 3770-3789). However, the code sequence for pacemaker codes is full and there is only one code available for use in the defibrillator code section. Therefore, to add new codes for these six new procedures, CMS assigned these new technologies to the code sequence beginning with “00,” which is outside of their appropriate section. This solution makes the code set harder for providers and coding professionals to use and complicates the retrieval of data for research purposes, as some pacemaker procedure codes are grouped together and others may be interspersed with codes for a collection of dissimilar procedures. CPT Seen as More Comprehensive, but Sometimes Ambiguous Although CPT meets almost all of the criteria recommended for standard code sets under HIPAA (see table 1), it does not meet all of the criteria recommended for a procedural code set by NCVHS (see table 3). For example, CPT code 34001 represents an “embolectomy or thrombectomy, with or without catheter...” Thus, this code is used to represent different procedures, without identifying the specific procedure that was actually performed. This lack of specificity is present for many CPT codes as their definitions use ambiguous language such as “and/or” and “with or without.” In addition, CPT generally lacks the consistency in its coding sequence that would enable data to be easily aggregated into broad categories. For example, in CPT, procedures on blood vessels can begin with the characters “33,” “34,” or “35,” making it more difficult to aggregate data for procedures performed on blood vessels. In addition, codes beginning with the characters “33” can represent such divergent procedures as those involving the implantation of pacemakers and procedures on the cardiac valves, which further complicates the aggregation of like data. 
10-PCS Considered Improvement over Current Inpatient Code Set Standard, but Some Challenges Remain Most representatives of the health care industry, including CMS representatives, consider 10-PCS to be an improvement over ICD-9-CM Vol. 3 for coding inpatient hospital procedures. In particular, 10-PCS meets almost all of the criteria for HIPAA standard code sets and for procedural code sets as recommended by NCVHS. However, the design and logic of 10-PCS raise concerns about potential challenges in its implementation, including coding accuracy and the availability of useful data. In addition, the existing health care administrative system would need to be changed significantly to accommodate 10-PCS, imposing additional financial costs and administrative burdens on members of the health care industry, such as providers and payers, who are currently undertaking changes to comply with HIPAA. Although the costs of implementing 10-PCS are anticipated to be substantial, most representatives of the health care industry, including CMS representatives, agree that the limitations of ICD-9-CM Vol. 3 warrant its replacement. However, HHS has not yet reached a decision regarding a proposal to adopt 10-PCS as a replacement for ICD-9-CM Vol. 3. 10-PCS Addresses Criteria for Standard and Procedural Code Sets Not Met by ICD-9-CM Vol. 3 Most representatives of the health care industry, including CMS representatives, find 10-PCS’s design and logic to be an improvement over ICD-9-CM Vol. 3. Its seven-character code allows 34 alphanumeric values for each character, affording it much greater capacity than the existing procedural code sets. Within its seven-character structure, 10-PCS is able to identify key aspects of procedures, including the body system and body part affected, the technique or approach of the procedure, and the technology used in completing it (see fig. 1). 
For example, the first of the seven characters represents the section that relates to the general type of procedure (e.g., surgery, obstetrical procedure, laboratory procedure); the second character is the body system (e.g., respiratory, gastrointestinal); the third character, the root operation or objective of the procedure (e.g., removal, repair); the fourth character, the body part; the fifth character, the approach or technique used; the sixth character, the device or devices left in the body after the procedure; and the seventh character, a qualifier that has a unique meaning for specific procedures, such as identifying the second site included in a bypass. Most representatives of the health care industry, including CMS representatives, consider 10-PCS to be an improvement over ICD-9-CM Vol. 3 for coding inpatient hospital procedures. In particular, its design and logic meet almost all of the criteria for HIPAA standard code sets and for procedural code sets recommended by NCVHS. In addition, 10-PCS addresses the criteria for HIPAA standard code sets and for procedural code sets recommended by NCVHS that are not met by ICD-9-CM Vol. 3 (see table 4). According to many representatives of the health care industry, 10-PCS’s greater coding specificity will distinguish among distinct procedures that might otherwise be grouped into broadly defined ICD-9-CM Vol. 3 codes. This precision in coding could facilitate the use of more specific data to analyze service utilization, outcomes, and cost. For example, the ICD-9-CM Vol. 3 code 3605 for a multiple vessel angioplasty can represent many related procedures with no specification as to the number of blood vessels involved, the technique used in completing the procedure, or what devices, if any, were implanted in the blood vessels. Because of the increased flexibility and capacity of 10-PCS, 18 different procedures currently reflected under this one ICD-9-CM Vol. 3 code are coded separately under 10-PCS. 
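Because each of the seven characters has a fixed meaning, a 10-PCS code can be decoded positionally. The sketch below uses the field meanings listed above; the `decode` helper and the sample code value are illustrative assumptions, not entries from the official 10-PCS tables.

```python
# The seven 10-PCS character positions, in the order described above.
FIELDS = [
    "section",         # general type of procedure (e.g., surgery)
    "body_system",     # e.g., respiratory, gastrointestinal
    "root_operation",  # objective of the procedure (e.g., removal, repair)
    "body_part",
    "approach",        # technique used to reach the site
    "device",          # device(s) left in the body after the procedure
    "qualifier",       # unique meaning for specific procedures
]

def decode(code: str) -> dict:
    """Split a seven-character 10-PCS code into its named fields."""
    if len(code) != 7:
        raise ValueError("10-PCS codes are exactly seven characters long")
    return dict(zip(FIELDS, code))

# A hypothetical code; each position carries one of 34 alphanumeric values.
fields = decode("02703DZ")
print(fields["body_system"])  # '2'
```

The 34 values per position (digits plus letters) give the code set the expansion capacity that the four-digit ICD-9-CM Vol. 3 structure lacks.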
In addition, CMS representatives suggest that the design and logic of 10-PCS and its standardization of definitions should allow codes for new procedures and technologies to be added more expeditiously than under the current process used to update ICD-9-CM Vol. 3. Unlike 10-PCS, the numeric characters of ICD-9-CM Vol. 3 codes are not predefined to represent certain elements of procedures, including the type of procedure and the body part. According to CMS representatives, the ICD-9-CM Coordination Committee spends a significant amount of time trying to determine how a new procedure should be defined and distinguished from existing procedures and what code should be used to represent that procedure. 10-PCS’s standardization of characters and definitions using alphanumeric characters—where each letter and number is predefined to represent an area of clinical care, a body system, a root operation, and so on—should facilitate how CMS will assign codes to new procedures. In addition to addressing the deficiencies of ICD-9-CM Vol. 3, many representatives of the health care industry, including CMS representatives, state that the design and logic of 10-PCS will facilitate the aggregation of data for analysis of utilization and health outcomes, as recommended by NCVHS. For example, when analyzing 10-PCS codes, one could aggregate the data broadly or narrowly based on the codes: all codes beginning with “027” broadly represent surgical procedures where great blood vessels are expanded; all codes beginning with “0272” represent such surgical procedures performed on three coronary arteries (i.e., great blood vessels), specifically. 10-PCS Design and Logic May Pose Challenges Although 10-PCS has many advantages over ICD-9-CM Vol. 3, its design and logic may pose some challenges. First, experienced coding professionals contend that 10-PCS may require greater clinical expertise among coding professionals than the existing code sets. 
For example, in pretests, coding professionals found that because of its increased specificity and level of detail, 10-PCS would require a higher level of clinical knowledge in anatomy and physiology to translate the procedures recorded on medical records into the appropriate codes than ICD-9-CM Vol. 3 and would therefore require substantially more training. Once familiar with the code set, however, the coding professionals noted overall gains in efficiency, citing one pretest in particular in which 57 patient records that were difficult to code using ICD-9-CM Vol. 3 codes were more readily coded using 10-PCS codes. Second, AMA representatives contend that the terminology of 10-PCS is a distinct departure from the current medical terminology used by physicians and does not parallel the terminology used on medical records. As a result, these representatives contend that physicians, other practitioners, and coding professionals will need to learn a vocabulary that differs from the terminology they now use to document medical procedures. According to the AMA, the 31 body system characters in 10-PCS do not conform to traditionally named body systems. For example, upper and lower arteries and veins, a distinction made in 10-PCS, is not a common anatomical distinction made by health care professionals. In addition, “amputation” is the standard terminology for removal of an extremity; 10-PCS terminology uses “detachment” to describe this procedure. These differences in terminology may result in coding errors, particularly when the code set is first implemented, as coding professionals transcribe the terminology used on medical records into 10-PCS codes, which in turn could affect the appropriateness of payment and the accuracy of information used to analyze data on utilization, outcomes, and cost. 
Finally, there are some cases where 10-PCS’s specificity creates a significantly greater number of codes, and it is unknown what effects, if any, this increased volume of codes will have on coding accuracy or the availability of useful data. For example, code 3691, the ICD-9-CM Vol. 3 code for “coronary vessel aneurysm repair,” can represent any number of related procedures with no specification as to the means of repair, the type of arteries, the number of arteries, or the device used. Because of the specificity of 10-PCS, 180 different procedures currently reflected under this one ICD-9-CM Vol. 3 code would be coded separately under 10-PCS. With more codes available for use, there are more opportunities for coding errors with inaccurate codes used in describing the procedure provided, particularly if the descriptions of procedures on medical records do not capture all the dimensions of the procedure needed to complete a code. Implementation of 10-PCS Will Involve Financial Costs and Administrative Burdens 10-PCS may not meet two of the criteria for standard code sets recommended by HHS’s HIPAA implementation teams: it may not have low implementation costs and its implementation as a standard code set may not keep data collection and paperwork burdens on members of the health care industry as low as possible. 10-PCS is a distinct departure from the design and logic of ICD-9-CM Vol. 3; thus the existing health care administrative system—including computer software, coding manuals, claims and remittance forms, and training for coding professionals and other health care professionals—would need to be adapted if 10-PCS were to be implemented. Therefore, the implementation of 10-PCS may impose other financial costs and administrative burdens on members of the health care industry, such as providers and payers, who are currently undertaking changes to implement ICD-9-CM Vol. 3 and CPT as standard code sets under HIPAA. 
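The fan-out from one broad ICD-9-CM Vol. 3 code to 180 distinct 10-PCS codes follows from multiplying the independently coded dimensions of a procedure. The dimension counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
from math import prod

# Hypothetical dimensions for a single broad procedure category;
# these counts are illustrative, not drawn from the 10-PCS tables.
dimensions = {
    "arteries_involved": 4,
    "approach": 5,
    "device": 9,
}

distinct_codes = prod(dimensions.values())
print(distinct_codes)  # 180: one broad code fans out into 180 specific codes
```

Each added dimension multiplies the code count, which is why modest increases in specificity can produce large increases in the number of codes available for use.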
Although the costs of implementing 10-PCS are anticipated to be high, and may impose additional administrative burdens on the health care industry, most representatives of the health care industry, including CMS representatives, agree that the limitations of ICD-9-CM Vol. 3 warrant the implementation of its replacement: 10-PCS. Although the development of 10-PCS is complete, HHS has not reached a decision regarding a proposal to adopt it as a HIPAA standard code set. For 10-PCS to replace ICD-9-CM Vol. 3 and be implemented as a HIPAA standard code set, it must go through a public comment and rulemaking process. If 10-PCS is adopted as the new code set for reporting inpatient hospital procedures under HIPAA, CMS will most likely implement it concurrently with the revised diagnosis code set—the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM). Some representatives of the health care industry suggested concurrent implementation to reduce administrative burdens and the additional disruption to the coding infrastructure that would result from nonconcurrent implementation of procedural and diagnosis code sets. Merit of Establishing a Single Procedural Code Set Uncertain in Light of Practical Considerations Although ICD-9-CM Vol. 3 and CPT have been supported by most representatives of the health care industry as acceptable options for HIPAA standard code sets given the practical considerations, since 1993 NCVHS and other representatives of the health care industry have argued that a single procedural code set for reporting inpatient hospital procedures, physician services, and other medical services including outpatient hospital procedures would streamline data reporting and facilitate research across providers and sites of service. Although these representatives of the health care industry support the adoption of a single procedural code set in principle, they disagree on which code set should serve in this capacity. 
Although no data on implementation costs exist, most representatives of the health care industry, including CMS representatives, agree that implementing any new single code set, regardless of the code set that is adopted, would be costly and time consuming; CMS estimates that adopting a single code set for procedures would likely take at least a decade to complete. Nevertheless, there are no data or studies to demonstrate the potential benefits or costs of adopting a single procedural code set. Representatives of the Health Care Industry Agree That One Code Set Is Preferable to Two Since 1993, NCVHS has supported the adoption of a single code set for reporting inpatient hospital procedures, physician services, and other medical services, including outpatient hospital procedures. NCVHS contends that because of variations in design and terminology between ICD-9-CM Vol. 3 and CPT, the simultaneous operation of the dual code sets is not conducive to aggregating data needed to perform utilization and outcome analyses across providers and sites of service. For example, for payment purposes, hospitals need to use both procedural code sets—an inpatient hospital procedure receives an ICD-9-CM Vol. 3 code for payment purposes whereas the same procedure performed in a hospital outpatient department receives a CPT code. In order for hospitals to analyze the provision of services across inpatient and outpatient departments, they must voluntarily code these procedures using both ICD-9-CM Vol. 3 and CPT so that these data can be aggregated. For the same reason, this dual code set arrangement complicates the research activities of health care analysts seeking data on a particular procedure performed across providers and sites of service. Such analyses are becoming more important as advancements in medical technology increase the ability of providers to perform procedures in various sites of service. 
NCVHS also notes that efforts to reduce fraud and abuse require more uniformity in coding; multiple code sets, with entirely different maintenance processes and rules, add to the complexity of proper billing and the difficulties of regulators and law enforcement officials in identifying billing violations. Other representatives of the health care industry support the adoption of a single procedural code set in principle. For example, AHIMA representatives support a single procedural code set, suggesting that a single set would reduce the level of resources—including staff, software, and updated manual and guideline publications—needed by hospitals to operate separate inpatient and outpatient procedural code sets. Currently, the operation of dual procedural code sets requires hospitals to maintain either separate coding staffs with expertise in each set or a single coding staff with expertise in both sets. AMA concurs that ideally one procedural code set could be used by providers in all sites of service, allowing for true administrative efficiencies and the reduction of burdens faced by providers that currently use multiple sets. Neither 10-PCS nor CPT, as Designed, Would Suffice as a Single Code Set A single procedural code set has not been developed. Although most representatives of the health care industry, including CMS representatives, agree ICD-9-CM Vol. 3 would not suffice as a single procedural code set, substantial disagreements exist on whether 10-PCS or CPT could serve in this capacity. AHIMA views 10-PCS as a potential candidate because it meets the procedural code set criteria recommended by NCVHS, but has stated that pretesting of this new code set for many outpatient procedures, including physician services, has been too limited to make conclusive recommendations. AMA argues that 10-PCS would not suffice as a single procedural code set. 
In addition to not reflecting the terminology currently used by the medical profession, 10-PCS does not include codes for certain outpatient procedures now represented in CPT, such as those for “evaluation and management” services—physician office visits, consultations, and hospital observation services. If 10-PCS were to be used as a single procedural code set, adaptations to the code set would have to be made to incorporate these services. CMS has not planned to test 10-PCS as a candidate for a single procedural code set. AMA supports a single procedural code set that would be based on CPT, because it is already widely used by the health care industry and could be adapted for coding inpatient hospital procedures. However, according to NCVHS, CPT is not an ideal candidate for a single procedural code set because its definitions are not always precise and unambiguous and its codes lack the ability to be easily collapsed into broad categories for aggregated data analysis. In addition, AHA and AHIMA contend that CPT is designed to describe physician-based services specifically and does not adequately capture hospital-based, nonphysician services. Implementing a Single Code Set Would Involve Significant Costs and Time Although no data on implementation costs exist, most representatives of the health care industry, including CMS representatives, agree that implementing any single procedural code set, regardless of the code set that is adopted, would involve significant costs and time. For example, coding textbooks, handbooks, workbooks, software, and claims forms would need to be revised or developed. All providers and payers would need to retrain staff, update computer software, and create or purchase new manuals and other educational materials. 
In addition, a single procedural code set would need to be coordinated with public and private payment systems for inpatient and outpatient procedures, including physician services, which would contribute to the costs of implementing such a code set. Finally, some representatives of the health care industry note that even if an existing code set such as 10-PCS or CPT were adopted as a single procedural code set, the process for its adaptation and implementation would take at least a decade. No Empirical Evidence on Benefits or Costs of a Single Code Set There have been no empirical studies on the adoption of a single procedural code set to measure the potential benefits identified by NCVHS and others or to estimate the costs of implementing such a code set. Recognizing the lack of empirical evidence, NCVHS stated in its recommendations that it would be necessary to evaluate the costs, benefits, and impact of a single procedural code set. AHA has stated that any proposed change should be thoroughly tested to prove that the procedural code set is both functional and able to be coordinated with payment systems. In addition, AHIMA recommends that federally funded research examine the feasibility, efficacy, costs, and benefits of moving to such a set. The benefits of a single procedural code set for research may be altered by developments in processing health-related information. Increasingly, the health care industry is moving toward electronic medical records and claims. Companies are working to create search engines that would align the unstandardized terminology found on electronic medical records with variations in definitions from existing code sets. For example, the narratives on a medical record may list “myocardial infarction,” “MI,” or “heart attack” to represent the same condition. Similarly, ICD-9-CM Vol. 3, CPT, and 10-PCS have differences in terminology to describe similar medical procedures. 
These search engines would allow for searches under key terms and retrieve the appropriate data regardless of the terminology or code that is used on electronic medical records and claims, facilitating the analysis of data across sites of service. Concluding Observations ICD-9-CM Vol. 3 and CPT, although not without limitations, were practical options for HIPAA code set standards given their widespread use in the health care industry and the time constraints for their adoption. In addition, these procedural code sets meet almost all of the criteria recommended for HIPAA standard code sets—that they improve the efficiency and meet the needs of the health care industry, are recognized by the public and private organizations that will maintain the code sets, have low additional costs and administrative burdens associated with their implementation, are independent of computer programs, and are consistent with other HIPAA standard code sets. Nevertheless, many representatives of the health care industry argue that the adoption of a single procedural code set could help further improve the efficiency of data reporting and facilitate data analysis across sites of service. Yet it is unknown if the benefits of moving to a single procedural code set would justify the transition costs, or how long it would take for the benefits to recoup these costs because the theoretical merits of a single procedural code set have yet to be demonstrated empirically. Considering the adequacy of ICD-9-CM Vol. 3 and CPT in meeting almost all of the criteria recommended for HIPAA standard code sets, the practical challenges of implementing a single procedural code set, and lack of empirical evidence to either support or disprove the merits of doing so, we believe that dual code sets for reporting medical procedures are acceptable under HIPAA. 
In addition, we concur with those representatives of the health care industry who contend that more study is needed to examine the possible benefits of adopting a single code set for medical procedures before its implementation could be considered. Agency Comments We received written comments from CMS on a draft of this report (see app. III). We also received written comments from AHA, AHIMA, and AMA on excerpts of our draft report. In general, CMS concurred with our analysis. CMS said that this subject is of concern to HHS because the Secretary is considering how to proceed in the face of the perceived inadequacies of ICD-9-CM Vol. 3 for the future coding of inpatient hospital procedures. CMS also said it was important to emphasize that the decision to adopt ICD-9-CM Vol. 3 as a HIPAA standard code set was made following an evaluation of its benefits and limitations and that it represented the best alternative available at the time for inpatient procedure coding. In addition, while CMS agreed that the costs of replacing ICD-9-CM Vol. 3 with 10-PCS would be significant, they emphasized that no estimate is available and that it is difficult to justify referring to these costs as “high,” as we do in our report. CMS said that the costs associated with making a change (such as software and training manuals) should be balanced against the costs to the health care system of continuing to use an out-of-date code set. We agree with CMS that the costs associated with replacing ICD-9-CM Vol. 3 should be balanced against the costs to the health care system of its continued use. 
Nevertheless, we feel that the costs associated with replacing it for the myriad of users within the health care system—updating computer software, coding manuals, and claims and remittance forms and training coding professionals and other health care professionals—will ultimately be “high.” Finally, CMS said the report should clarify that the Secretary has not made a decision to eliminate ICD-9-CM Vol. 3 and adopt 10-PCS. We have revised the report accordingly. CMS, AHA, AHIMA, and AMA also made technical comments that we have incorporated where appropriate. We are sending copies of this report to CMS, AHA, AHIMA, and AMA, and will make it available to those who are interested upon request. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7101. Emily J. Rowe, Hannah Fein, Preety Gadhoke, and Martin T. Gahart made major contributions to this report. Appendix I: Code Set Standards for Health-Related Services Adopted under HIPAA The Health Insurance Portability and Accountability Act of 1996 (HIPAA) required the Secretary of the Department of Health and Human Services (HHS) to adopt standard code sets for describing health-related services in connection with transactions such as filing claims for payment. In addition, HIPAA required these standard code sets to be used by all providers and payers. In response, HHS adopted several code sets to standardize the reporting of procedures, diagnoses, and other health-related services, such as medical devices, supplies and equipment, prescription drugs, and dental services (see table 5). Appendix II: The Distinct and Independent Code Sets Known as “ICD-9” There are several distinct code sets similarly referred to as “ICD-9” that are used to code different health-related services (see table 6). 
The World Health Organization’s (WHO) International Classification of Diseases, 9th Revision (ICD-9) was used worldwide to code and classify causes of death from death certificates before WHO adopted the tenth revision. The International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) has three, publicly maintained volumes that have been adopted as standard code sets for assigning codes to diagnoses and inpatient hospital procedures under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Volumes 1 and 2 of ICD-9-CM are based on WHO’s ICD-9 mortality code set and have been named the standard code set under HIPAA to code and classify diagnosis data from inpatient and outpatient records, physician offices, and most National Center for Health Statistics (NCHS) surveys. NCHS is responsible for the use, interpretation, and periodic revision of the diagnosis code set in collaboration with WHO. Volume 3 of ICD-9-CM has been named as the standard code set under HIPAA for coding inpatient hospital procedures. It is maintained in the public domain by the Centers for Medicare and Medicaid Services (CMS) and pertains to the provision of hospital inpatient procedures. Appendix III: Comments from the Centers for Medicare and Medicaid Services
Consistently classifying, defining, and distinguishing among the range of medical services provided today--from diagnoses to treatments--is critical for reimbursing providers and analyzing health care utilization, outcomes, and cost. Codes serve this role by assigning each distinct service a unique identifier. Health care providers, such as hospitals and physicians, report medical conditions and the health-related services they have provided to patients on medical records. In August 2000, the Department of Health and Human Services (HHS) adopted two standard code sets for reporting medical procedures: (1) the International Classification of Diseases, 9th Revision, Clinical Modification, Volume 3 (ICD-9-CM Vol. 3); and (2) the Current Procedural Terminology (CPT). Despite HIPAA's goals for administrative simplification, many representatives of the health care industry have expressed concern that the individual limitations of these code sets result in inefficiencies in record keeping and data reporting. GAO found that, given the 18-month time frame allotted to HHS under HIPAA for adopting standard code sets, ICD-9-CM Vol. 3 and CPT were practical options for HIPAA standard code sets despite some limitations. Both code sets meet almost all of the criteria for standard code sets recommended by HHS's HIPAA implementation teams. For example, they improve the efficiency and meet the needs of the health care industry, have low additional costs and administrative burdens associated with their implementation, and are consistent with other HIPAA standards. In addition, each of these code sets meets a criterion for procedural code sets recommended by the National Committee on Vital and Health Statistics.
Growing Fiscal Imbalance Raises Questions about the Affordability and Sustainability of Current Defense Spending The federal government’s financial condition and long-term fiscal outlook present enormous challenges to the nation’s ability to respond to emerging forces reshaping American society, the place of the United States in the world, and the future role of DOD as well as the rest of the federal government. The near-term deficits are daunting—a $412 billion unified budget deficit in fiscal year 2004 (including a $567 billion on-budget deficit and a $155 billion off-budget surplus) and a $368 billion deficit (not including any supplemental appropriations) forecast for fiscal year 2005 by the CBO. If these near-term deficits represented only a short-term phenomenon—prompted by such factors as economic downturn or national security crises—there would be less cause for concern. However, deficits have grown notwithstanding the economy’s recovery from the recession in 2001, and the incremental costs of responding to homeland security and the nation’s global war against terrorism represent only a relatively small fraction of current and projected deficits. Moreover, based on our long-range fiscal simulations, the current fiscal condition is but a prelude to a much more daunting long-term fiscal outlook. GAO’s long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society and the significance of the related challenges the government will be called upon to address. Absent significant policy changes on the spending or revenue side of the budget, our simulations show that growth in spending on federal retirement and health entitlements will encumber an escalating share of the government’s resources. 
Indeed, when we assume that recent tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay little more than interest on the federal debt. In fact, the cost implications of the baby boom generation’s retirement have already become a factor in CBO’s baseline projections and will only intensify as the baby boomers age. According to CBO, total federal spending for Social Security, Medicare, and Medicaid is projected to grow by about 25 percent over the next 10 years—from 8.4 percent of Gross Domestic Product in 2004 to 10.4 percent in 2015. In addition, CBO reported that excluding supplemental funding appropriated in 2004 and requested in 2005 (mostly for activities in Iraq and Afghanistan), discretionary budget authority for defense programs is estimated to grow from $394 billion in 2004 to $421 billion in 2005, a 6.8 percent increase. The expected growth combined with the fact that DOD accounted for more than half of all discretionary spending in fiscal year 2004 raises concerns about the sustainability and affordability of increased defense spending. Despite the need to make strategic investment decisions to address these fiscal pressures, DOD’s current approach to planning often supports the status quo and results in a mismatch between programs and budgets. As we have reported, DOD has difficulties overcoming cultural resistance to change and the inertia of various organizations, policies, and procedures rooted in the Cold War era. Long-standing organizational and budgetary programs need to be addressed, such as the existence of stovepiped or siloed organizations, the involvement of many layers and players in decision making, and the allocation of budgets on a proportional rather than a strategic basis across the military services. 
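The budget arithmetic behind the figures above can be checked with a minimal, illustrative sketch (the variable names are invented here, and this is not GAO's or CBO's methodology):

```python
# Minimal arithmetic check of the budget figures cited above.
# All dollar amounts are in billions; names are illustrative only.

# FY2004: the unified deficit nets the off-budget surplus against
# the on-budget deficit.
on_budget_deficit = 567
off_budget_surplus = 155
unified_deficit = on_budget_deficit - off_budget_surplus
print(unified_deficit)  # 412

# Entitlement spending as a share of GDP: 8.4 percent in 2004 to a
# projected 10.4 percent in 2015.
share_2004, share_2015 = 8.4, 10.4
growth = (share_2015 - share_2004) / share_2004 * 100
print(round(growth))  # 24, i.e., "about 25 percent"
```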
DOD’s approach to planning does not always provide reasonable visibility to decision makers, including Congress, over the projected cost of defense programs. As we have reported in the past, DOD uses overly optimistic estimates of future program costs that often lead to costs being understated. For example, in January 2003 we reported that the estimated cost of developing eight major weapon systems had increased from about $47 billion in fiscal year 1998 to about $72 billion by fiscal year 2003. As a result of these inaccurate estimates, DOD has more programs than it can support with its available dollars, which often leads to program instability, costly program stretch-outs, and program termination. Increasingly limited fiscal resources across the federal government, coupled with emerging requirements from the changing security environment, emphasize the need for DOD to address its current inefficient approach to planning and develop a risk-based strategic investment framework for establishing goals, evaluating and setting priorities, and making difficult resource decisions. In its strategic plan, the September 2001 Quadrennial Defense Review, DOD outlined a new risk management framework consisting of four dimensions of risk—force management, operational, future challenges, and institutional—to use in considering trade-offs among defense objectives and resource constraints. We recognize what a large undertaking developing a departmentwide risk management framework will be and understand that DOD is still in the process of implementing this approach. However, it remains unclear how DOD will use the risk management framework to measure progress in achieving business and force transformation. It also remains unclear how the framework will be used to correct limitations we have previously identified in DOD’s strategic planning and budgeting. We are currently monitoring DOD’s efforts to implement the risk management framework. 
Pervasive Business Management Weaknesses Place DOD’s Overall Business Transformation at Risk Numerous management problems, inefficiencies, and wasted resources continue to trouble DOD’s business operations, resulting in billions of dollars of wasted resources annually at a time when our nation is facing an increasing fiscal imbalance. Specific business management challenges that DOD needs to address to successfully transform its business operations include DOD’s approach to business transformation, strategic human capital management, its personnel security clearance program, support infrastructure management, business systems modernization, financial management, weapon systems acquisition, contract management, and supply chain management. These management challenges are on our 2005 high-risk list of programs and activities that need urgent and fundamental transformation if the federal government is to function in the most economical, efficient, and effective manner possible. The 8 DOD-specific high-risk areas, along with 6 governmentwide areas that apply to DOD, mean that the department is responsible for 14 of 25 high-risk areas. As shown in table 1, we added DOD’s approach to business management transformation to this list in 2005 because it represents an overarching high-risk area that encompasses the other individual, DOD-specific high-risk areas, but many of these other management challenges have been on the list for a decade or more. DOD’s Approach to Business Transformation DOD’s approach to business management transformation represents an overarching high-risk area, encompassing several other key business management challenges. Over the years, DOD has embarked on a series of efforts to reform its business management operations, including modernizing underlying information technology (business) systems. However, serious inefficiencies remain. 
As a result, the areas of support infrastructure management, business systems modernization, financial management, weapon systems acquisition, contract management, and supply chain management remain high-risk DOD business operations. We now consider DOD’s overall approach to business transformation to be a high-risk area because (1) DOD’s business improvement initiatives and control over resources are fragmented; (2) DOD lacks a clear strategic and integrated business transformation plan and an investment strategy, including a well-defined enterprise architecture, to guide and constrain implementation of such a plan; and (3) DOD has not designated a senior management official responsible and accountable for overall business transformation reform and related resources. Unless DOD makes progress in overall business transformation, we believe it will continue to have difficulties in confronting other problems in its business operations. DOD spends billions of dollars to sustain key business operations intended to support the warfighter. We have previously testified on inefficiencies in DOD’s business operations, such as the lack of sustained leadership, the lack of a strategic and integrated business transformation plan, and inadequate incentives. Moreover, the lack of adequate transparency and accountability across DOD’s major business areas results in billions of dollars of wasted resources annually at a time of increasing military operations and growing fiscal constraints. Business transformation requires long-term cultural change, business process reengineering, and a commitment from both the executive and legislative branches of government. Although sound strategic planning is the foundation on which to build, DOD needs clear, capable, sustained, and professional leadership to maintain the continuity necessary for success. 
Such leadership could facilitate the overall business transformation effort within DOD by providing the momentum needed to overcome cultural resistance to change, military service parochialism, and stovepiped operations, all of which have contributed significantly to the failure of previous attempts to implement broad-based management reform at DOD. Without such leadership, it is also likely that DOD will continue to spend billions of dollars on stovepiped, duplicative, and nonintegrated systems that do not optimize mission performance or effectively support the warfighter. Strategic Human Capital Management DOD is attempting to address the critically important business management challenge of strategic human capital management through its proposed human resources management system, the National Security Personnel System (NSPS). Successful implementation of NSPS is essential for DOD as it attempts to transform its military forces and defense business practices in response to 21st century challenges. In addition, this new human resources management system, if properly designed and effectively implemented, could serve as a model for governmentwide human capital transformation. DOD is one of several federal agencies that have been granted the authority by Congress to design a new human capital system as a way to address the governmentwide high-risk area of strategic human capital management. This effort represents a huge undertaking for DOD, given its massive size and geographically and culturally diverse workforce. As I recently testified on DOD’s proposed NSPS regulations, our ongoing work continues to raise questions about DOD’s chances of success in its efforts to effect fundamental business management reform, such as NSPS. I would like to acknowledge, however, that DOD’s NSPS regulations take a valuable step toward a modern performance management system as well as a more market-based and results-oriented compensation system. 
On February 14, 2005, the Secretary of Defense and the Acting Director of Office of Personnel Management (OPM) released the proposed NSPS regulations for public comment. Many of the principles underlying those regulations are generally consistent with proven approaches to strategic human capital management. For instance, the proposed regulations provide for (1) elements of a flexible and contemporary human resources management system, such as pay bands and pay for performance; (2) right-sizing of DOD’s workforce when implementing reduction-in-force orders by giving greater priority to employee performance in its retention decisions; and (3) continuing collaboration with employee representatives. (It should be noted, however, that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirements to include employee representatives in the development of DOD’s new labor relations system authorized as part of NSPS.) Despite this progress, we have three primary areas of concern about the proposed NSPS regulations. DOD’s proposed regulations do not (1) define the details of the implementation of the system, including such issues as adequate safeguards to help ensure fairness and guard against abuse; (2) require, as we believe they should, the use of core competencies to communicate to employees what is expected of them on the job; and (3) identify a process for the continuing involvement of employees in the planning, development, and implementation of NSPS. DOD also faces multiple implementation challenges once it issues its final NSPS regulations. Given the huge undertaking NSPS represents, another challenge is to elevate, integrate, and institutionalize leadership responsibility for this large-scale organizational change initiative to ensure its success. A chief management official or similar position can effectively provide the continuing, focused leadership essential to successfully completing these multiyear transformations. 
Additionally, DOD could benefit if it develops a comprehensive communications strategy that provides for ongoing, meaningful two-way communication to create shared expectations among employees, employee representatives, managers, customers, and stakeholders. Finally, appropriate institutional infrastructure could enable DOD to make effective use of its new authorities. At a minimum, this infrastructure includes a human capital planning process that integrates DOD’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and a set of adequate safeguards—including reasonable transparency and appropriate accountability mechanisms—to help ensure the fair, effective, and credible implementation and application of a new system. We strongly support the need for government transformation and the concept of modernizing federal human capital policies within both DOD and the federal government at large. There is general recognition that the federal government needs a framework to guide human capital reform. Such a framework would consist of a set of values, principles, processes, and safeguards that would provide consistency across the federal government but be adaptable to agencies’ diverse missions, cultures, and workforces. Personnel Security Clearance Program Delays in completing hundreds of thousands of background investigations and adjudications (reviews of investigative information to determine eligibility for a security clearance) have led us to identify as a business management challenge the DOD personnel security clearance program, which we just added to our high-risk list in 2005. Personnel security clearances allow individuals to gain access to classified information. In some cases, unauthorized disclosure of classified information could reasonably be expected to cause exceptionally grave damage to national defense or foreign relations. 
DOD has approximately 2 million active clearances as a result of worldwide deployments, contact with sensitive equipment, and other security requirements. While our work on the clearance process has focused on DOD, clearance delays in other federal agencies suggest that similar impediments and their effects may extend beyond DOD. Since at least the 1990s, we have documented problems with DOD’s personnel security clearance process, particularly problems related to backlogs and the resulting delays in determining clearance eligibility. Since fiscal year 2000, DOD has declared its personnel security clearance investigations program to be a systemic weakness—a weakness that affects more than one DOD component and may jeopardize the department’s operations. An October 2002 House Committee on Government Reform report also recommended including DOD’s adjudicative process as a material weakness. As of September 30, 2003 (the most recent data available), DOD could not estimate the full size of its backlog, but we identified over 350,000 cases exceeding established time frames for determining eligibility. DOD has taken steps to address the backlog—such as hiring more adjudicators and authorizing overtime for adjudicative staff—but a significant shortage of trained federal and private-sector investigative personnel presents a major obstacle to timely completion of cases. Other impediments to eliminating the backlog include the absence of an integrated, comprehensive management plan for addressing a wide variety of problems identified by us and others. 
In addition to matching adjudicative staff to workloads and working with OPM to develop an overall management plan, DOD needs to develop and use new methods for forecasting clearance needs and monitoring backlogs; eliminate unnecessary limitations on reciprocity (the acceptance of a clearance and access granted by another department, agency, or military service); determine the feasibility of implementing initiatives that could decrease the backlog and delays; and provide better oversight for all aspects of its personnel security clearance process. The National Defense Authorization Act for Fiscal Year 2004 authorized the transfer of DOD’s personnel security investigative function and over 1,800 investigative employees to OPM. This transfer took place in February 2005. While the transfer eliminated DOD’s responsibility for conducting the investigations, it did not eliminate the shortage of trained investigative personnel needed to address the backlog. Although DOD retained the responsibility for adjudicating clearances, OPM is now accountable for ensuring that investigations are completed in a timely manner. By the end of fiscal year 2005, OPM projects that it will have 6,500 of the estimated 8,000 full-time equivalent federal and contract investigators it needs to help eliminate the investigations backlog. Support Infrastructure Management DOD has made progress and expects to continue making improvements in its support infrastructure management, but much work remains to be done. DOD’s support infrastructure includes categories such as force installations, central logistics, the defense health program, and central training. DOD’s infrastructure costs continue to consume a larger portion of its budget than DOD believes is desirable, despite reductions in the size of the military force following the end of the Cold War. 
For several years, DOD also has been concerned about its excess facilities infrastructure, which affects its ability to devote more funding to weapon systems modernization and other critical needs. DOD has reported that many of its business processes and much of its infrastructure are outdated and must be modernized. Left alone, the current organizational arrangements, processes, and systems will continue to drain scarce resources. DOD officials recognize that they must achieve greater efficiencies in managing their support operations. DOD has achieved some operating efficiencies and reductions from such efforts as base realignments and closures, consolidations, organizational and business process reengineering, and competitive sourcing. It also has achieved efficiencies by eliminating unneeded facilities through such means as demolishing unneeded buildings and privatizing housing at military facilities. In addition, DOD and the services are currently gathering and analyzing data to support a new round of base realignments and closures in 2005 and facilitating other changes as a result of DOD’s overseas basing study. Despite this progress, much work remains for DOD to transform its support infrastructure to improve operations, achieve efficiencies, and allow it to concentrate resources on the most critical needs. Organizations throughout DOD need to continue reengineering their business processes and striving for greater operational effectiveness and efficiency. DOD needs to develop a plan to better guide and sustain the implementation of its diverse business transformation initiatives in an integrated fashion. DOD also needs to strengthen its recent efforts to develop and refine its comprehensive long-range plan for its facilities infrastructure to ensure adequate funding to support facility sustainment, modernization, recapitalization, and base operating support needs. 
DOD generally concurs with our prior recommendations in this area and indicates it is taking actions to address them. A key to any successful approach to resolving DOD’s support infrastructure management issues will be addressing this area as part of a comprehensive, integrated business transformation effort. Business Systems Modernization We continue to categorize DOD’s business systems modernization program as a management challenge because of a lack of an enterprise architecture to guide and constrain system investments and because of ineffective management oversight, system acquisition, and investment management practices. As a result, DOD’s current operating practices and over 4,000 systems function in a stovepiped, duplicative, and nonintegrated environment that contributes to DOD’s operational problems. For years, DOD has attempted to modernize these systems, and we have provided numerous recommendations to help guide its efforts. For example, in 2001 we provided DOD with a set of recommendations to help it develop and implement an enterprise architecture (or modernization blueprint) and establish effective investment management controls. Such an enterprise architecture is essential for DOD to guide and constrain how it spends billions of dollars annually on information technology systems. We also made numerous project-specific and DOD-wide recommendations aimed at getting DOD to follow proven best practices when it acquired system solutions. While DOD agreed with most of these recommendations, to date the department has made limited progress in addressing them. In May 2004, we reported that after 3 years and over $203 million in obligations, DOD had not yet developed a business enterprise architecture containing sufficient scope and detail to guide and constrain its departmentwide systems modernization and business transformation. 
One reason for this limited progress is DOD’s failure to adopt key architecture management best practices that we recommended, such as developing plans for creating the architecture; assigning accountability and responsibility for directing, overseeing, and approving the architecture; and defining performance metrics for evaluating the architecture. Under a provision in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, DOD must develop an enterprise architecture to cover all defense business systems and related business functions and activities that is sufficiently defined to effectively guide, constrain, and permit implementation of a corporatewide solution and is consistent with the policies and procedures established by OMB. Additionally, the act requires the development of a transition plan that includes an acquisition strategy for new systems and a listing of the termination dates of current legacy systems that will not be part of the corporatewide solution, as well as a listing of legacy systems that will be modified to become part of the corporatewide solution for addressing DOD’s business management deficiencies. In May 2004, we also reported that the department’s approach to investing billions of dollars annually in existing systems had not changed significantly. As a result, DOD lacked an effective investment management process for selecting and controlling ongoing and planned business systems investments. While DOD issued a policy that assigns investment management responsibilities for business systems, in May 2004 we reported that DOD had not yet defined the detailed procedures necessary for implementing the policy, clearly defined the roles and responsibilities of the business domain owners (now referred to as core business mission areas), established common investment criteria, or ensured that its business systems are consistent with the architecture. To address certain provisions and requirements of the Ronald W. 
Reagan National Defense Authorization Act for Fiscal Year 2005, on March 24, 2005, the Deputy Secretary of Defense directed the transfer of program management, oversight, and support responsibilities regarding DOD business transformation efforts from the Office of the Under Secretary of Defense, Comptroller, to the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (OUSD(AT&L)). According to the directive, this transfer of functions and responsibilities will allow the OUSD(AT&L) to establish the level of activity necessary to support and coordinate activities of the newly established Defense Business Systems Management Committee (DBSMC). As required by the act, the DBSMC— with representation including the Deputy Secretary of Defense, the designated approval authorities, and secretaries of the military services and heads of the defense agencies—is the highest ranking governance body responsible for overseeing DOD business systems modernization efforts. While this committee may serve as a useful planning and coordination forum, it is important to remember that committees do not lead, people do. In addition, DOD still needs to designate a person to have overall responsibility and accountability for this effort. This person must have the background and authority needed to successfully achieve the related objectives for business systems modernization efforts. According to DOD’s annual report to congressional defense committees on the status of the department’s business management modernization program, DOD has not yet established investment review boards below the DBSMC for each core business mission. The statutory requirements enacted as part of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 further require that the DBSMC must agree with the designated approval authorities’ certification of funds exceeding $1 million for the modernization of business systems before funds can be obligated. 
More important, the obligation of these funds without the requisite approval by the DBSMC is deemed a violation of the Anti-Deficiency Act. As DOD develops a comprehensive, integrated business transformation plan, such a plan must include an approach to resolve the business systems modernization problems. To this end, it is critical that this plan provide for the implementation of our many recommendations related to business systems modernization. Financial Management DOD continues to face financial management problems that are pervasive, complex, long-standing, and deeply rooted in virtually all of its business operations. DOD’s financial management deficiencies adversely affect the department’s ability to control costs, ensure basic accountability, anticipate future costs and claims on the budget, measure performance, maintain funds control, prevent fraud, and address pressing management issues. As I testified before the House Committee on Government Reform in February 2005, and as discussed in our report on the U.S. government’s consolidated financial statements for fiscal year 2004, DOD’s financial management deficiencies, taken together, represent a major impediment to achieving an unqualified opinion on the U.S. government’s consolidated financial statements. Our recent reports and testimonies on Army Reserve and National Guard pay issues clearly illustrate the impact deficiencies in DOD’s financial management have had on the very men and women our country is depending on to perform our military operations. For example, in February 2005, we reported that the Army’s process for extending active duty orders for injured soldiers lacks an adequate control environment and management controls, including (1) clear and comprehensive guidance, (2) a system to provide visibility over injured soldiers, and (3) adequate training and education programs. 
The Army also has not established user-friendly processes, including clear approval criteria and adequate infrastructure and support services. Poorly defined processes for extending active duty orders for injured and ill reserve component soldiers have caused soldiers to be inappropriately dropped from their active duty orders. For some, this has led to significant gaps in pay and health insurance, which have created financial hardships for these soldiers and their families. Based on our analysis of Army manpower data during the period from February 2004 through April 7, 2004, almost 34 percent of the 867 soldiers who applied for extension of active duty orders because of injuries or illness lost their active duty status before their extension requests were granted. For many soldiers, this resulted in being removed from active duty status in the automated systems that control pay and access to benefits such as medical care and access to a commissary or post exchange that allows soldiers and their families to purchase groceries and other goods at a discount. Many Army locations have used ad hoc procedures to keep soldiers in pay status; however, these procedures often circumvent key internal controls and put the Army at risk of making improper and potentially fraudulent payments. Finally, the Army’s nonintegrated systems, which require extensive error-prone manual data entry, further delay access to pay and benefits. The Army recently implemented the Medical Retention Processing (MRP) program, which takes the place of the previously existing process in most cases. The MRP program, which authorizes an automatic 179 days of pay and benefits, may resolve the timeliness of the front-end approval process. However, the MRP program has some of the same problems as the existing process and may also result in overpayments to soldiers who are released early from their MRP orders. 
DOD’s senior civilian and military leaders have taken positive steps to begin reforming the department’s financial management operations. However, to date, tangible evidence of improvement has been seen in only a few specific areas, such as internal controls related to DOD’s purchase card and individually billed travel card programs. Further, we reported in September 2004 that, while DOD had established a goal of obtaining a clean opinion on its financial statements by 2007, it lacked a written and realistic plan to make that goal a reality. DOD’s continuing, substantial financial management weaknesses adversely affect its ability to produce auditable financial information as well as provide accurate and timely information for management and Congress to use in making informed decisions. Overhauling the financial management and related business operations of one of the largest and most complex organizations in the world represents a daunting challenge. Such an overhaul of DOD’s financial management operations goes far beyond financial accounting to the very fiber of the department’s wide-ranging business operations and its management culture. It will require (1) sustained leadership and resource control, (2) clear lines of responsibility and accountability, (3) plans and related results-oriented performance measures, and (4) appropriate individual and organizational incentives and consequences. DOD is still in the very early stages of a departmentwide overhaul that will take years to accomplish. DOD has not yet established a framework to integrate improvement efforts in this area with related broad-based DOD initiatives, such as human capital reform. However, successful, lasting reform in this area will only be possible if implemented as part of a comprehensive and integrated approach to transforming all of DOD’s business operations. Weapon Systems Acquisition Another business management challenge DOD faces is its weapon systems acquisition program. 
While DOD’s acquisition process has produced the best weapons in the world, it also consistently yields undesirable consequences—such as cost increases, late deliveries to the warfighter, and performance shortfalls. Such problems were highlighted, for example, in our reviews of DOD’s F/A-22 Raptor, Space-Based Infrared System, Airborne Laser, and other programs. Problems occur because DOD’s weapon programs do not capture early on the requisite knowledge that is needed to efficiently and effectively manage program risks. For example, programs move forward with unrealistic program cost and schedule estimates, lack clearly defined and stable requirements, use immature technologies in launching product development, and fail to solidify design and manufacturing processes at appropriate junctures in development. When programs require more resources than planned, the buying power of the defense dollar is reduced and funds are not available for other competing needs. It is not unusual for estimates of time and money to be off by 20 to 50 percent. When costs and schedules increase, quantities are cut and the value for the warfighter—as well as the value of the investment dollar—is reduced. In these times of asymmetric threats and netcentricity, individual weapon system investments are getting larger and more complex. Just 4 years ago, the top five weapon systems cost about $281 billion; today, in the same base year dollars, the five weapon systems cost about $521 billion. If these megasystems are managed with traditional margins of error, the financial consequences—particularly the ripple effects on other programs—can be dire. While weapon systems acquisition continues to remain on our high-risk list, DOD has undertaken a number of acquisition reforms over the past 5 years. Specifically, DOD has restructured its acquisition policy to incorporate attributes of a knowledge-based acquisition model and has reemphasized the discipline of systems engineering. 
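The growth in these megasystem costs can be put in percentage terms with a quick, illustrative calculation (the variable names are invented here, and this is a sketch, not GAO's cost methodology):

```python
# Illustrative check of the cost growth cited above, in constant
# base year dollars (billions).
top_five_cost_then = 281  # top five weapon systems, about 4 years ago
top_five_cost_now = 521   # top five weapon systems, today

growth_pct = (top_five_cost_now - top_five_cost_then) / top_five_cost_then * 100
print(round(growth_pct))  # 85
```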
In addition, DOD recently introduced new policies to strengthen its budgeting and requirements determination processes in order to plan and manage weapon systems based on joint warfighting capabilities. While these policy changes are positive steps, implementation in individual programs will continue to be a challenge because of inherent funding, management, and cultural factors that lead managers to develop business cases for new programs that over-promise on cost, delivery, and performance of weapon systems. It is imperative that needs be distinguished from wants and that DOD’s limited resources be allocated to the most appropriate weapon system investments. Once the best investments that can be afforded are identified, then DOD must follow its own policy to employ the knowledge-based strategies essential for delivering the investments within projected resources. Making practice follow policy is not a simple matter. It is a complex challenge involving many factors. One of the most important factors is putting the right managers in their positions long enough so that they can be both effective and accountable for getting results. Contract Management Another long-standing business management challenge is DOD’s contract management program. As the government’s largest purchaser at over $200 billion in fiscal year 2003, DOD is unable to assure that it is using sound business practices to acquire the goods and services needed to meet the warfighter’s needs. For example, over the past decade DOD has significantly increased its spending on contractor-provided information technology and management support services, but it has yet to fully implement a strategic approach to acquiring these services. In 2002, DOD and the military departments established a structure to review individual service acquisitions valued at $500 million or more, and in 2003 they launched a pilot program to help identify strategic sourcing opportunities. 
To further promote a strategic orientation, however, DOD needs to establish a departmentwide concept of operations; set performance goals, including savings targets; and ensure accountability for achieving them. In March 2004, we reported that if greater management focus were given to opportunities to capture savings through the purchase card program, DOD could potentially save tens of millions of dollars without sacrificing the ability to acquire items quickly or compromising other goals. DOD also needs to have the right skills and capabilities in its acquisition workforce to effectively implement best practices and properly manage the goods and services it buys. However, DOD reduced its civilian workforce by about 38 percent between fiscal years 1989 and 2002 without ensuring that it had the specific skills and competencies needed to accomplish current and future DOD acquisition/contract administration missions, and more than half of its current workforce will be eligible for early or regular retirement in the next 5 years. We found that inadequate staffing and the lack of clearly defined roles and responsibilities contributed to contract administration challenges encountered in Operation Iraqi Freedom (OIF). Further, we have reported that DOD’s extensive use of military logistical support contracts in OIF and elsewhere required strengthened oversight. Just recently, we identified surveillance issues in almost a third of the contracts we reviewed. We also noted that some personnel performing surveillance had not received required training, while others felt that they did not have sufficient time in a normal workday to perform their surveillance duties. DOD has made progress in laying a foundation for reshaping its acquisition workforce by initiating a long-term strategic planning effort, but as of June 2004 it did not yet have the comprehensive strategic workforce plan needed to guide its efforts. 
DOD uses various techniques—such as performance-based service contracting, multiple-award task order contracts, and purchase cards—to acquire the goods and services it needs. We have found, however, that DOD personnel did not always make sound use of these tools. For example, in June 2004, we reported that more than half of the task orders to support Iraq reconstruction efforts we reviewed were, in whole or in part, outside the scope of the underlying contract. In July 2004, we found that DOD personnel waived competition requirements for nearly half of the task orders reviewed. As a result of the frequent use of waivers, DOD had fewer opportunities to obtain the potential benefits of competition— improved levels of service, market-tested prices, and the best overall value. We also found that DOD lacked safeguards to ensure that waivers were granted only under appropriate circumstances. Our work has shown that DOD would benefit by making use of commercial best practices, such as taking a strategic approach to acquiring services; building on initial efforts to develop a strategic human capital plan for its civilian workforce; and improving safeguards, issuing additional guidance, and providing training to its workforce on the appropriate use of contracting techniques and approaches. DOD is undertaking corrective actions, but because most efforts are in their early stages, it is uncertain whether they can be fully and successfully implemented in the near term. A key to resolving DOD’s contract management issues will be addressing them as part of a comprehensive and integrated business transformation plan. Supply Chain Management In 1990, we identified DOD’s inventory management as a management challenge, or a high-risk area, because inventory levels were too high and the supply system was not responsive to the needs of the warfighter. 
We have since expanded the inventory management high-risk area to include DOD’s management of certain key aspects of its supply chain, including distribution, inventory management, and asset visibility, because of significant weaknesses we have uncovered since our 2003 high-risk series was published. For example, during OIF, the supply chain encountered many problems, including backlogs of hundreds of pallets and containers at distribution points, a $1.2 billion discrepancy in the amount of material shipped to—and received by—Army activities, cannibalized equipment because of a lack of spare parts, and millions of dollars spent in late fees to lease or replace storage containers because of distribution backlogs and losses. Moreover, we identified shortages of items such as tires, vehicle track shoes, body armor, and batteries for critical communication and electronic equipment. These problems were the result of systemic deficiencies in DOD’s supply chain, including inaccurate requirements, funding delays, acquisition delays, and ineffective theater distribution. While DOD reports show that the department currently owns about $67 billion worth of inventory, shortages of certain critical spare parts are adversely affecting equipment readiness and contributing to maintenance delays. The Defense Logistics Agency (DLA) and each of the military services have experienced significant shortages of critical spare parts, even though more than half of DOD’s reported inventory—about $35 billion— exceeded current operating requirements. In many cases, these shortages contributed directly to equipment downtime, maintenance problems, and the services’ failure to meet their supply availability goals. DOD, DLA, and the military services each lack strategic approaches and detailed plans that could help mitigate these critical spare parts shortages and guide their many initiatives aimed at improving inventory management. 
DOD’s continued supply chain problems also resulted in shortages of items in Iraq. In an April 8, 2005, report, we noted that demand for items such as vehicle track shoes, batteries, and tires exceeded availability because the department did not have accurate or adequately funded Army war reserve requirements and had inaccurate forecasts of supply demands for the operation. Furthermore, the Army’s funding approval process delayed the flow of funds to buy these items. Meanwhile, rapid acquisition of other items faced obstacles. Body armor production was limited by the availability of Kevlar and other critical materials, whereas the delivery of up-armored High Mobility Multi-Purpose Wheeled Vehicles and armor kits was slowed by DOD’s decisions to pace production. In addition, numerous problems, such as insufficient transportation, personnel, and equipment, as well as inadequate information systems, hindered DOD’s ability to deliver the right items to the right place at the right time for the warfighter. Among the items the department had problems delivering were generators for Assault Amphibian Vehicles, tires, and Meals Ready-to-Eat. In addition to supply shortages, DOD also lacks visibility and control over the supplies and spare parts it owns. Therefore, it cannot monitor the responsiveness and effectiveness of the supply system to identify and eliminate choke points. Currently, DOD does not have the ability to provide timely or accurate information on the location, movement, status, or identity of its supplies. Although total asset visibility has been a departmentwide goal for over 30 years, DOD estimates that it will not achieve this visibility until the year 2010.
DOD may not meet this goal by 2010, however, unless it overcomes three significant impediments by developing a comprehensive plan for achieving visibility, building the necessary integration among its many inventory management information systems, and correcting long-standing data accuracy and reliability problems within existing inventory management systems. DOD, DLA, and the services have undertaken a number of initiatives to improve and transform DOD’s supply chain. Many of these initiatives were developed in response to the logistics problems reported during OIF. While these initiatives represent a step in the right direction, the lack of a comprehensive, departmentwide logistics reengineering strategy to guide their implementation may limit their overall effectiveness. A key to successful implementation of a comprehensive logistics strategy will be addressing these initiatives as part of a comprehensive, integrated business transformation. Key Elements for Successful Business Transformation Although DOD has a number of initiatives to address its high-risk areas, we believe that DOD must fundamentally change its approach to its overall business transformation effort before it is likely to succeed. We believe there are three critical elements of successful transformation: (1) developing and implementing an integrated and strategic business transformation plan, along with an enterprise architecture to guide and constrain implementation of such a plan; (2) establishing central control over systems investment funds; and (3) providing sustained leadership for business reform efforts. To ensure these three elements are incorporated into the department’s overall business management, we believe Congress should legislatively create a full-time, high-level executive with long-term “good government” responsibilities that are professional and nonpartisan in nature.
This executive, the Chief Management Official (CMO), would be a strategic integrator responsible for leading the department’s overall business transformation, including developing and implementing a related strategic plan. The CMO would not assume the responsibilities of the undersecretaries of defense, the services, and other DOD entities for the day-to-day management of business activities. However, the CMO would be accountable for ensuring that all DOD business policies, procedures, and reform initiatives are consistent with an approved strategic plan for business transformation. Reform Efforts Must Include an Integrated, Comprehensive Strategic Plan Our prior work indicates that agencies that are successful in achieving business management transformation undertake strategic planning and strive to establish goals and measures that align at all levels of the agency. The lack of a comprehensive and integrated strategic transformation plan linked with performance goals, objectives, and rewards has been a continuing weakness in DOD’s business transformation. Since 1999, for example, we have recommended that a comprehensive and integrated strategic business transformation plan be developed for reforming DOD’s major business operations and support activities. In 2004, we suggested that DOD clearly establish management accountability for business reform. While DOD has been attempting to develop an enterprise architecture for modernizing its business processes and supporting information technology assets for the last 4 years, it has not developed a strategic and integrated transformation plan for managing its many business improvement initiatives. Nor has DOD assigned overall management responsibility and accountability for such an effort. 
Unless these initiatives are addressed in a unified and timely fashion, DOD will continue to see billions of dollars, which could be directed to other higher priorities, wasted annually to support inefficiencies in its business functions. At a programmatic level, the lack of clear, comprehensive, and integrated performance goals and measures has handicapped DOD’s past reform efforts. For example, we reported in May 2004 that the lack of performance measures for DOD’s business transformation initiative—encompassing defense policies, processes, people, and systems—made it difficult to evaluate and track specific program progress, outcomes, and results. As a result, DOD managers lacked straightforward road maps showing how their work contributed to attaining the department’s strategic goals, and they risked operating autonomously rather than collectively. As of March 2004, DOD had formulated departmentwide performance goals and measures and was continuing to refine and align them with outcomes described in its strategic plan—the September 2001 Quadrennial Defense Review (QDR). As previously discussed, DOD outlined a new risk management framework in the QDR that DOD was to use in considering trade-offs among defense objectives and resource constraints, but as of March 2005 DOD was still in the process of implementing it. Finally, DOD has not established a clear linkage among institutional, unit, and individual results-oriented goals, performance measures, and reward mechanisms for undertaking large-scale organizational change initiatives that are needed for successful business management reform. Traditionally, DOD has justified its need for more funding on the basis of the quantity of programs it has pursued rather than on the outcomes its programs have produced. DOD has historically measured its performance by resource components, such as the amount of money spent, people employed, or number of tasks completed.
Incentives for its decision makers to implement behavioral changes have been minimal or nonexistent. The establishment of a strategic and integrated business transformation plan could help DOD address these systemic management problems. Central Control over Business Systems Investment Funds Is Crucial DOD’s current business systems investment process, in which system funding is controlled by DOD components, has contributed to the evolution of an overly complex and error-prone information technology environment containing duplicative, nonintegrated, and stovepiped systems. We have made numerous recommendations to DOD to improve the management oversight and control of its business systems modernization investments. However, as previously discussed, a provision of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, consistent with the suggestion I have made in prior testimonies, established specific management oversight and accountability with the “owners” of the various core business mission areas. This legislation defined the scope of the various business areas (e.g., acquisition, logistics, finance, and accounting), and established functional approval authority and responsibility for management of the portfolio of business systems with the relevant under secretary of defense for the departmental core business mission areas and the Assistant Secretary of Defense for Networks and Information Integration (information technology infrastructure). For example, the Under Secretary of Defense for Acquisition, Technology, and Logistics is now responsible and accountable for any defense business system intended to support acquisition activities, logistics activities, or installations and environment activities for DOD. 
This legislation also requires that the responsible approval authorities establish a hierarchy of investment review boards, the highest level being the Defense Business Systems Management Committee (DBSMC), with DOD-wide representation, including the military services and defense agencies. The boards are responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for their business-area portfolio, including ensuring that investments are consistent with DOD’s business enterprise architecture. However, as I pointed out earlier, DOD has not yet established the lower level investment review boards as required by the legislation. Although this recently enacted legislation clearly defines the roles and responsibilities of business systems investment approval authorities, control over the budgeting for and execution of funding for systems investment activities remains at the DOD component level. As a result, DOD continues to have little or no assurance that its business systems modernization investment money is being spent in an economical, efficient, and effective manner. Given that DOD spends billions on business systems and related infrastructure each year, we believe it is critical that those responsible for business systems improvements control the allocation and execution of funds for DOD business systems. However, implementation may require review of the various statutory authorities for the military services and other DOD components. Control over business systems investment funds would improve the capacity of DOD’s designated approval authorities to fulfill their responsibilities and gain transparency over DOD investments, and minimize the parochial approach to systems development that exists today. In addition, to improve coordination and integration activities, we suggest that all approval authorities coordinate their business systems modernization efforts with a CMO who would chair the DBSMC. 
Cognizant business area approval authorities would also be required to report to Congress through a CMO and the Secretary of Defense on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. Chief Management Official Is Essential for Sustained Leadership of Business Management Reform As DOD embarks on large-scale business transformation efforts, we believe that the complexity and long-term nature of these efforts requires the development of an executive position capable of providing strong and sustained change management leadership across the department—and over a number of years and various administrations. One way to ensure such leadership would be to create by legislation a full-time executive-level II position for a CMO, who would serve as the Deputy Secretary of Defense for Management. This position would elevate, integrate, and institutionalize the high-level attention essential for ensuring that a strategic business transformation plan—as well as the business policies, procedures, systems, and processes that are necessary for successfully implementing and sustaining overall business transformation efforts within DOD—are implemented and sustained. An executive-level II position for a CMO would provide this individual with the necessary institutional clout to overcome service parochialism and entrenched organizational silos, which in our opinion need to be streamlined below the service secretaries and other levels. The CMO would function as a change agent, while other DOD officials would still be responsible for managing their daily business operations. 
The position would divide and institutionalize the current functions of the Deputy Secretary of Defense into a Deputy Secretary who, as the alter ego of the Secretary, would focus on policy-related issues such as military transformation, and a Deputy Secretary of Defense for Management, the CMO, who would be responsible and accountable for the overall business transformation effort and would serve full-time as the strategic integrator of DOD’s business transformation efforts by, for example, developing and implementing a strategic and integrated plan for business transformation. The CMO would not conduct the day-to-day management functions of the department; therefore, creating this position would not add an additional hierarchical layer to the department. Day-to-day management functions of the department would continue to be the responsibility of the undersecretaries of defense, the service secretaries, and others. Just as the CMO would need to focus full-time on business transformation, these officials must focus on day-to-day management functions that are so demanding that it is difficult for them to also maintain the oversight, focus, and momentum needed to implement and sustain needed reforms of DOD’s overall business operations. This is particularly evident given the demands that the Iraq and Afghanistan postwar reconstruction activities and the continuing war on terrorism have placed on current leaders. Likewise, the breadth and complexity of the problems and their overall level within the department preclude the under secretaries, such as the DOD Comptroller, from asserting the necessary authority over selected players and business areas while continuing to fulfill their other responsibilities. If created, we believe that the new CMO position could be filled by an individual appointed by the President and confirmed by the Senate, for a set term of 7 years with the potential for reappointment.
As prior GAO work examining the experiences of major change management initiatives in large private and public sector organizations has shown, it can often take at least 5 to 7 years until such initiatives are fully implemented and the related cultures are transformed in a sustainable way. Articulating the roles and responsibilities of the position in statute would also help to create unambiguous expectations and underscore Congress’s desire to follow a professional, nonpartisan, sustainable, and institutional approach to the position. In that regard, an individual appointed to the CMO position should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across DOD. Furthermore, to improve coordination and integration activities, we suggest that all business systems modernization approval authorities designated in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 coordinate their efforts with the CMO, who would chair the DBSMC that DOD recently established to comply with the act. We also suggest that cognizant business area approval authorities be required to report to Congress through the CMO and the Secretary of Defense on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. In addition, the CMO would enter into an annual performance agreement with the Secretary that sets forth measurable individual goals linked to overall organizational goals in connection with the department’s business transformation efforts. Measurable progress toward achieving agreed-upon goals should be a basis for determining the level of compensation earned, including any related bonus. Further, the CMO’s achievements and compensation should be reported to Congress each year. Concluding Observations The long-term fiscal pressures we face as a nation are daunting and unprecedented.
The size and trend of our projected longer-term deficits mean that the nation cannot ignore the resulting fiscal pressures—it is not a matter of whether the nation deals with the fiscal gap, but when and how. Unless we take effective and timely action, our near-term and longer-term deficits present the prospect of chronic and seemingly perpetual budget shortfalls and constraints becoming a fact of life for years to come. These pressures will intensify the need for DOD to make disciplined and strategic investment decisions that identify and balance risks across a wide range of programs, operations, and functions. To its credit, DOD is in the process of implementing a risk management framework to use in considering trade-offs among defense objectives and resource constraints and establishing department-level priorities, rather than relying on incremental changes to existing budget levels. We recognize what a large undertaking developing a departmentwide risk management framework will be, and while we are still monitoring DOD’s efforts to implement the framework, we have preliminary concerns based on our work reviewing other DOD reform efforts. Unless DOD is better able to balance its resources, it will continue to have a mismatch between programs and budgets and will be less likely to maximize the value of the defense dollars it spends. DOD continues to face pervasive, decades-old management problems related to its business operations, and these problems affect all of DOD’s major business areas. While DOD has taken steps to address these problems, our previous work has uncovered a persistent pattern among DOD’s reform initiatives that limits their overall impact on the department. These initiatives have not been fully implemented in a timely fashion because of the absence of comprehensive, integrated strategic planning, inadequate transparency and accountability, and the lack of sustained leadership.
As previously mentioned, the Secretary of Defense has estimated that improving business operations could save 5 percent of DOD’s annual budget. This represents a savings of about $22 billion a year, based on the fiscal year 2004 budget. In this time of growing fiscal constraints, every dollar that DOD can save through improved economy and efficiency of its operations is important to the well-being of our nation. Until DOD resolves the numerous problems and inefficiencies in its business operations, billions of dollars will continue to be wasted every year. DOD’s senior leaders have demonstrated a commitment to transforming the department and have taken several positive steps to begin this effort. To overcome the previous cycle of failure at DOD in implementing broad-based management reform, however, we believe that three elements are key to successfully achieving needed reforms. First, DOD needs to implement and sustain a strategic and integrated business transformation plan. Second, we believe that the implementation of two proposed legislative initiatives—establishing central control of business system funds and creating a CMO—is crucial. We believe that central control over business system investment funds would better enable DOD to ensure that its resources are being invested in an economical, efficient, and effective manner. As long as funding is controlled by the components, it is likely that the existing problems with stovepiped, duplicative, and nonintegrated systems will continue. We support the need for legislation to create a CMO, in part, because we doubt that there is a single individual—no matter how talented and experienced—who could effectively address all that needs to be addressed at DOD, including conducting a global war on terrorism, transforming the military, and tackling long-standing, systemic business transformation challenges.
We believe that a CMO, serving a 7-year term with the potential for reappointment, would have the institutional clout and an adequate term in office to work with DOD’s senior leadership across administrations to make business transformation a reality. Since the CMO would not have responsibility for day-to-day management, this position would not superimpose another hierarchical layer over the department to oversee daily business operations. Instead, the CMO would be responsible and accountable for strategic planning, performance and financial management, and business system modernization, while facilitating overall business transformation. Without the strong and sustained leadership provided by a CMO, DOD will likely continue to have difficulties in maintaining the oversight, focus, and momentum needed to implement and sustain the reforms to its business operations. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time.
In addition to external security threats, our nation is threatened from within by growing fiscal imbalances. The combination of additional demands for national and homeland security resources, the long-term rate of growth of entitlement programs, and rising health care costs create the need to make difficult choices about the affordability and sustainability of the recent growth in defense spending. At a time when the Department of Defense (DOD) is challenged to maintain a high level of military operations while competing for resources in an increasingly fiscally constrained environment, DOD's business management weaknesses continue to result in billions in annual waste, as well as reduced efficiencies and effectiveness. Congress asked GAO to provide its views on (1) the fiscal trends that prompt real questions about the affordability and sustainability of the rate of growth of defense spending, (2) business management challenges that DOD needs to address to successfully transform its business operations, and (3) key elements for achievement of reforms. One key element would be to establish a full-time chief management official (CMO) to take the lead in DOD for the overall business transformation effort. In this regard, we support the need for legislation to create a CMO in DOD with "good government" responsibilities that are professional and nonpartisan in nature, coupled with an adequate term in office. Our nation's current fiscal policy is on an imprudent and unsustainable course and the projected fiscal gap is too great to be solved by economic growth alone or by making modest changes to existing spending and tax policies. In fiscal year 2004, DOD's spending represented about 51 percent of discretionary spending, raising concerns about the affordability and sustainability of the current growth in defense spending and requiring tough choices about how to balance defense and domestic needs against available resources and reasonable tax burdens. 
GAO has reported that DOD continues to confront pervasive, decades-old management problems related to business operations that waste billions of dollars annually. These management weaknesses cut across all of DOD's major business areas. These areas, along with six government-wide areas that also apply to the department, mean that DOD is responsible for 14 of 25 high-risk areas. To move forward, in our view, there are three key elements that DOD must incorporate into its business transformation efforts to successfully address its systemic business management challenges. First, these efforts must include an integrated strategic plan, coupled with a well-defined blueprint--referred to as a business enterprise architecture--to guide and constrain implementation of such a plan. Second, central control of system investments is crucial for successful business transformation. Finally, a CMO is essential for providing the sustained leadership needed to achieve lasting transformation. The CMO would not assume the day-to-day management responsibilities of other DOD officials nor represent an additional hierarchical layer of management, but rather would serve as a strategic integrator who would lead DOD's overall business transformation efforts. Additionally, a 7-year term would also enable the CMO to work with DOD leadership across administrations to sustain the overall business transformation effort.
Background

Prepositioning is an important part of DOD’s overall strategic mobility framework. It allows DOD to field combat-ready forces in days rather than the weeks it would take if the forces and all necessary equipment and supplies had to be brought from the United States to the location of the conflict. The U.S. military can deliver equipment and supplies in three ways: by air, by sea, or by prepositioning. While airlift is fast, it is expensive to use and impractical for moving all of the material needed for a large-scale deployment. Although ships can carry large loads, they are slower than airlift. Prepositioning lessens the strain of using expensive airlift and reduces the reliance on slower sealift deliveries. Concerned about the reduction in U.S. forces overseas and their ability to move forces in the time required to resolve potential conflicts quickly, the services have expanded prepositioning programs ashore and on ships in potential areas of conflict. The military services have prepositioning programs to store combat or support equipment and supplies near areas with a high potential for conflict and to speed response times and reduce the strain on other mobility assets. The Defense Logistics Agency prepositions food and bulk fuel to support a range of contingency operations and training exercises. The Special Operations Command relies on the military services to preposition common support items for its forces, such as base support items and vehicles. The Army’s program involves three primary categories of stocks: combat brigade sets, operational projects, and war reserve sustainment stocks stored at land sites and aboard prepositioning ships around the world. The Marine Corps also prepositions equipment and supplies aboard prepositioning ships and at land sites in Norway. The Navy’s prepositioning efforts are comparatively small, used mainly to support the Marine Corps’ prepositioning program and deploying forces. 
The Navy prepositions equipment and supplies at land sites and aboard the maritime prepositioning ships. The Air Force prepositions stocks of war reserve equipment and supplies to meet initial contingency requirements and to sustain early deploying forces. The Air Force’s prepositioned war reserve stocks include bare base sets; vehicles; munitions; and a variety of consumable supplies, such as rations, fuel, support equipment, aircraft accessories, and medical supplies. DOD’s prepositioning programs are briefly described in the table below. The military services preposition these stocks of equipment and supplies at several land sites and aboard prepositioning ships around the world. Most of the military services preposition equipment and supplies in southwest Asia, the Pacific theater, Europe, and aboard prepositioning ships. Figure 1 shows the locations of DOD’s prepositioned stocks.

Inventory Shortfalls and Poor Equipment Condition Leave Many of DOD’s Prepositioning Programs at Risk

Because of inventory shortfalls and the poor maintenance condition of some of its prepositioned stocks, DOD faces some near-term operational risks should another large-scale conflict emerge. For example, the department has drawn heavily on its prepositioned stocks to support ongoing operations in Iraq and relatively little has been reconstituted. In addition, while remaining stocks provide some residual capability, many have significant inventory shortfalls and, in some cases, maintenance problems. Combatant commanders rely on prepositioned stocks being available and in good maintenance condition; otherwise U.S. forces must bring needed stocks with them or spend valuable time repairing equipment. Since these stocks are typically used in the early stages of a conflict, it is important for DOD to determine the operational risk associated with any shortfalls. 
Operation Iraqi Freedom revealed significant issues with the status of prepositioned stocks, such as shortages in spare parts and less-than-modern equipment. The same problems continue to exist today in some programs.

The Army Is Reporting Low Inventory Fill and Poor Maintenance Condition for Some Prepositioned Stocks

The Army is currently reporting low inventory fill for the combat brigade sets, operational project stocks, and sustainment stocks that comprise its prepositioning program, and some stocks not used in recent operations are in poor maintenance condition. For example, the Army used much of the equipment and supplies associated with the combat brigade sets stored at land sites in Kuwait and Qatar and aboard prepositioning ships afloat near Diego Garcia to support operations in Iraq. In addition, the Army used some equipment from its other prepositioned stocks in Europe, South Korea, and from other prepositioning ships located near Guam/Saipan. The Army is also reporting low inventory fill for its operational projects and sustainment stocks. The Army has a total of 14 operational projects that contain equipment and supplies needed for unique mission requirements, such as special operations forces, mortuary operations, and prisoner handling. Sustainment stocks provide replacement equipment and supplies, such as repair parts, petroleum items, and tracked vehicles, until normal resupply channels are established. The Army is reporting inventory fills for operational projects and sustainment stocks—approximately 26 percent and 20 percent, respectively—that are considerably lower than the program requirements. Some of the Army’s shortfalls have been long-standing, however, including shortfalls in critical areas like spare parts, and are not attributable to the war in Iraq. For example, we reported in 2003 that DOD experienced equipment readiness problems because of a lack of key spare parts. 
Table 2 provides an overview of the inventory levels, maintenance condition, and operations and maintenance funding of the Army’s prepositioned stocks. Some stocks that were taken from prepositioned storage locations and used during operations in Iraq are either still in use or have experienced extreme wear and tear. For example, the Army continues to use equipment taken from prepositioned stocks to support its units in Iraq, delaying the reconstitution and redistribution of the equipment. According to Army officials, this equipment may not be returned to the prepositioned stocks because the Army is giving priority to transforming its forces into more deployable and expeditionary brigade-based formations and may use formerly prepositioned equipment to fill additional equipment requirements associated with the new formations. Importantly, this heavy stress on equipment is a problem across much of the Army’s equipment that has been used in Iraq, not just the equipment taken from prepositioned stocks. We also found that some other prepositioned stocks in storage were in poor maintenance condition, even though they had not been used in Operation Iraqi Freedom. For example, during a March 2005 visit to Camp Carroll, South Korea, we found that some of the Army’s prepositioned stocks at this location were in poor maintenance condition and that much of the equipment was overdue for periodic maintenance. Army officials confirmed that required cyclic maintenance had not been performed on the equipment in the brigade set, operational projects, and sustainment stocks for several years. To address this, the Army has stepped up its maintenance efforts by bringing in contractor support and setting up temporary maintenance facilities to assist in repairing the equipment to standard. Moreover, as shown in figure 2, certain stocks had been stored outside for many years, and corrosion was evident on some pieces of equipment. 
Corrosion can significantly affect the readiness of prepositioned equipment: DOD spends an estimated $20 billion each year to repair the damage to military equipment and infrastructure caused by this problem. In this regard, we have called for improvements to DOD’s long-term corrosion strategy, including better planning and establishment of a long-term funding mechanism. Because of continuing concerns over corrosion, we are currently conducting a congressionally directed review of its impact on DOD’s overall prepositioned assets. The Army also maintains European prepositioning storage sites in Germany, Italy, Luxembourg, and the Netherlands. However, few stocks remain there because they have been drawn out to support operations in other locations, including Bosnia and Iraq. The mission in Europe has steadily declined since the European drawdown of the early 1990s, and the remaining sites are the last remnants of the Army’s large-scale prepositioning program developed during the Cold War. Army officials told us that they are currently using the local national workforce at these locations to perform other maintenance workloads, including fixing equipment from Iraq. In past reports, we have recommended that the Army align its workforce and facilities to meet the reduced post-Cold War mission in Europe. Officials told us that they have reduced infrastructure in response to our recommendations, and are contemplating further reductions. In Italy, however, the Army has requested about $55 million to construct new storage and maintenance facilities that it has said will become the centerpiece of its land prepositioning in the region. This region still receives considerable funding, as shown in table 2. The Army spent nearly $290 million during fiscal years 2000-2004, even though the sites have had an uncertain mission and reduced stocks for much of that time. 
The Marine Corps Is Reporting Low Inventory Fill for Some Prepositioned Stocks, While the Navy Reports Few Shortfalls

The Marine Corps has offloaded about 75 percent of the major end-items stored on 5 of its 16 prepositioning ships to support combat operations in Iraq. The remaining 11 prepositioning ships are reporting inventory fills of 95 percent or greater and good maintenance condition for major end-items and sustainment stocks. The Marine Corps also used some of its prepositioned major end-items stored at several land sites in Norway to support operations in Iraq and Afghanistan and to fill shortfalls at Marine Corps bases and on some of the prepositioning ships. As a result, these sites are currently reporting an inventory fill of about 71 percent. It is unclear when this equipment will be returned to prepositioned stocks because, according to a Marine Corps official, a large portion of the Marine Corps’ equipment offloaded from prepositioning ships to support the deployment of the I Marine Expeditionary Force to Iraq is currently being kept in Iraq to support the rotation of the II Marine Expeditionary Force. In recent congressional testimony on the status of its military equipment, a Marine Corps official reported that in addition to higher usage rates, equipment is being used under extreme conditions that have increased the maintenance requirements. For example, to date, more than 1,800 equipment items have been destroyed and an additional 2,300 damaged equipment items will require depot maintenance. For the Norway stocks, the Marine Corps is in the process of updating the requirements for its program there so that it will be capable of providing a global response capability to any regional combatant commander. 
During our September 2004 visit to the prepositioning sites in Norway, we discussed this change in the scope of the program and Marine Corps officials confirmed that the facilities in Norway can support any combatant commander and the stocks are globally deployable via air, rail, and sea. This shift in scope is in response to concerns about the continued relevance of land stocks in Norway. According to Marine Corps officials, however, these stocks are important to Norway, cost relatively little to maintain (about $3.9 million in operational costs per year), are stored in excellent facilities, and can be taken out to respond to crises as needed. The Navy is reporting high inventory fill for its prepositioned assets. According to Navy officials, most of its equipment used to offload the maritime prepositioning ships was not used in direct combat and has not required extensive reconstitution, and other equipment was available to backfill the field hospitals and construction forces deployed to support operations in Iraq.

The Air Force Is Reporting Low Inventory Fill and Some Stocks in Poor Condition

The Air Force has used a considerable amount of its prepositioned equipment and supplies to support combat operations in Afghanistan and Iraq and, as a result, the inventory fill of many of these stocks is low. For example, it used approximately 43 percent of the total number of its prepositioned bare base sets to support Operations Enduring Freedom and Iraqi Freedom, and due to the extreme desert conditions, many of these sets will have to be replaced. A U.S. Central Command, Air Forces, official told us that the command is continuing to issue prepositioned base operating support equipment and vehicles to forces that have been deployed to the area of responsibility. 
While the Air Force is working to refill its prepositioned equipment and supplies, it is unclear when that effort will be completed, and these stocks may not be available for use if a conflict arises in the near term. In addition, the Air Force is experiencing shortfalls in its inventory of fuel bladders. These bladders are used to store fuel for Air Force aircraft at austere operating locations. Air Force officials stated that to support combat operations in Iraq, Central Command, Air Forces, has used a considerable number of its prepositioned war reserve fuel bladders. As combat operations continue, the Air Force is depleting its supply of these bladders, and officials have characterized the impact of potential shortfalls in these bladders as its “highest operational risk.” At the same time, the Air Force is undergoing an initiative to modernize its fuel support equipment, including its fuel bladders. As part of this initiative, the Air Force requested that Central Command, Air Forces, officials not purchase the type of fuel bladders that had previously been used. To mitigate the risk, the Air Force has allowed Central Command, Air Forces, to purchase some replacement fuel bladders; however, it is unclear when its modernization initiative will be fully implemented. During our review, we were also told that some bare base sets that the Air Force prepositions at Andersen Air Force Base on Guam are in poor maintenance condition and are unusable. According to a Pacific Air Forces official, the sets stored at Andersen Air Force Base have deteriorated due to a lack of required maintenance. Air Force maintenance personnel are responsible for the war reserve stocks prepositioned at this location as an additional duty to the maintenance of operating stocks also stored at the base. The official told us that the quality of maintenance performed by Air Force personnel on the war reserve bare base sets has been a long-standing problem at this location. 
Air Force officials told us that bare base sets stored in Southwest Asia and South Korea did not have these same maintenance problems because contractors have been hired to maintain these sets. When we discussed these issues with Air Force officials, they told us that they believed they could overcome shortfalls and any maintenance problems in the event of a conflict by using supplemental funding or cross-leveling equipment from other theaters. Additionally, Pacific Air Forces officials told us that they would be able to obtain some vehicles from countries where they will operate by using contracts already in place.

Shortfalls in Inventory Fill Exist for Stocks Prepositioned to Support Special Operations Forces

The Army, Air Force, and Navy preposition common support equipment and supplies for use by their special operations forces. However, the services have traditionally underfunded these stocks and, as a result, inventory shortfalls exist in most of these stocks. Lessons learned from recent military operations in Iraq further highlighted the need for special operations forces to have stocks of prepositioned equipment and supplies to support these forces in multiple austere environments. Special Operations Command officials told us that special operations forces are often among the first units to deploy and, therefore, have a need to draw prepositioned stocks. The department recognized this, and recent guidance issued by DOD directs the military services to fully fund inventory shortfalls in these stocks of common items prepositioned to support special operations forces. The military services have agreed to provide funding for prepositioned stocks for special operations forces beginning in fiscal year 2006.

Shortfalls Create Some Operational Risks

Since prepositioned stocks are integral to the military’s war plans, shortfalls in these programs create risks that combatant commanders would have to mitigate in the event of a new conflict. 
It could cost time or manpower to fill shortages or fix equipment. Since these stocks are typically used in the early stages of a conflict, it is important for DOD to determine the operational risk associated with any shortfalls. The military planners we spoke to told us that they would find a way to work around the shortfalls, but offered little in the way of concrete plans. Operation Iraqi Freedom provided significant lessons for DOD’s prepositioning managers, especially in the Army, by exposing problems such as shortages in spare parts and less-than-modern equipment. Prior to the onset of combat operations in Iraq, the Army had significant shortages in its prepositioned stocks, especially in spare parts. The Army overcame these shortfalls by having the units that were drawing the prepositioned stocks bring their own spare parts, in addition to obtaining spare parts from nondeploying units. However, according to the Army’s after-action assessments of the war, the Army had shortages in these and other items, including food, water, fuel, construction materials, and ammunition. The available stocks of these supplies were insufficient to meet sustainment requirements at the outset of the deployment and it took the supply chain months to respond. At the time of our work, many of the same shortfalls in the Army’s program were still evident and may have been getting worse. For example, as of mid-March 2005, the Army had only 21 percent of its authorized prepositioned repair parts on hand in South Korea. According to Army officials, if a military conflict should arise there, their strategy to mitigate these shortfalls would be to cross-level required parts from available sustainment stocks as needed. Although the precise operational risks created by shortfalls in the Marine Corps’ and Air Force’s prepositioned stocks are difficult to assess, officials from these services told us that these risks can be managed. 
This is because the Marine Corps has kept about two-thirds of its prepositioned combat capability available for potential contingencies and that equipment is reported to be in good condition. Moreover, Air Force officials stated that if a conflict arises, they will be able to fill shortfalls and repair equipment as needed by using supplemental funding and obtaining some vehicles and other stocks in other countries through contracts already in place. Air Force officials stated, however, that this presumes that they will have the time and necessary funding available to address the shortfalls. Combatant commanders rely on prepositioned stocks being available and in good maintenance condition. Prior to Operation Iraqi Freedom, the combatant commander built up the required forces over a period of months, and had time to overcome any inventory shortages in the prepositioned stocks or resolve any maintenance issues with prepositioned equipment. However, should a new conflict arise in the near term—especially one where U.S. forces did not control the timing—the combatant commander would likely face even more difficult operational challenges. During our visit to South Korea, officials told us that their strategy to mitigate maintenance issues with the Army’s prepositioned stocks stored there, should a conflict arise, would be to surge maintenance personnel as needed to fix equipment, use arriving personnel to assist in maintenance execution, and cross-level required parts from available sustainment stocks. Officials acknowledged, however, that it could take longer than planned to get the equipment ready in the event of a conflict. Another factor making it difficult to assess the potential operational risks is the lack of sound information available to assess and manage DOD’s prepositioning programs. 
Such programs need valid inventory requirements that meet the needs of the war fighters, and reliable information about inventory levels and maintenance condition for those requirements. These long-standing management problems are discussed in the next section of this report.

DOD and Some of the Military Services Have Provided Insufficient Oversight Over Their Prepositioning Programs

Oversight of prepositioning programs by DOD and the military services has been insufficient, despite the importance of prepositioning to the military. This inattention has allowed long-standing problems to linger. Management principles, such as those embraced in the Government Performance and Results Act of 1993, provide federal agencies a framework for effectively implementing and managing programs. These principles include having sufficient information to support sound decision making and to enable Congress to provide proper oversight. However, DOD has not adhered to its directive on war reserve materiel policy that could provide oversight over its prepositioning programs. In addition, service oversight has been inadequate, particularly in the Army’s processes for determining requirements and the Army’s and Air Force’s processes for assessing inventory shortfalls and maintenance condition. This limited oversight unnecessarily leaves the programs at risk of being unavailable when required and lacking the right mix of equipment and supplies to support the war fighter.

Oversight Has Been Insufficient

The overarching departmental guidance is contained in DOD directive 3110.6, updated in December 2003, which provides policy guidance on the department’s war reserve materiel program and assigns oversight and accountability responsibilities within the department. 
The secretaries of the military departments, directors of defense agencies, and the combatant commands are responsible for setting program requirements, and the Defense Logistics Agency has responsibility for storage and distribution of the stocks. At the department level, responsibilities are as follows:

The Under Secretary of Defense for Acquisition, Technology, and Logistics is required to assess the adequacy of war reserve stocks annually.

The Under Secretary of Defense for Policy is required to provide planning guidance that includes war reserve requirements.

The Chairman of the Joint Chiefs of Staff is required to validate the operational requirements of the geographic combatant commands.

A provision of the directive related to oversight states that the department is to assess the adequacy of its war reserve stocks. In order to assess adequacy, the directive requires the secretaries of the military departments and the directors of defense agencies to submit annual reports on war reserve materiel levels to the Under Secretary of Defense for Acquisition, Technology, and Logistics within the Office of the Secretary of Defense. Officials within the office of the Deputy Under Secretary of Defense for Supply Chain Integration told us that this oversight responsibility had been delegated to their office. However, the directive has not been implemented and, therefore, the reporting requirement contained in the directive has not been enforced. Neither the services nor the Under Secretary’s office could provide us with copies of these reports. Officials told us that they had suspended this reporting requirement in 2002; however, the directive had been updated in late 2003 and the reporting requirement was maintained. 
Officials also stated that although they had been given responsibility for implementing the oversight provisions of the directive, since their office primarily deals with only sustainment issues, they did not have sufficient authority or personnel to meet the requirements stated by the directive, specifically to assess the adequacy of the services’ prepositioning programs. Officials further told us they did not believe the reporting requirement in the directive was necessary because they were able to provide adequate oversight of the department’s prepositioning programs through other mechanisms, such as reviewing the services’ budget submissions and quarterly readiness assessments. Quarterly readiness reviews and integrated priority list submissions allow the combatant commanders and others to identify issues that have reached critical thresholds that may limit war-fighting capabilities. These assessments, some of which have included issues related to prepositioned stocks, are briefed to DOD’s senior leadership and may be included in a legislatively mandated quarterly readiness report to Congress. However, we have previously reported that these reports provide a vague and broad description of readiness problems and, therefore, are not effective as an oversight tool. Furthermore, officials at one combatant command told us that these assessments do not provide a sufficient mechanism to determine the inventory readiness of stocks prepositioned in their area of responsibility. Also, a DOD official told us that they review the budget submissions from the military services and approve how much the services allocate to their prepositioning programs. In our view, while such mechanisms provide the department with important information on gaps in capabilities and resource allocation, they do not constitute sufficient, sustained program oversight. 
Such oversight problems have existed for years, and several prior reports have cited the lack of centralized oversight and direction in the department’s prepositioning programs, particularly in the Army. For example, the Institute for Defense Analyses concluded in a 1997 report that the military services do not coordinate their war reserve planning among themselves or with the combatant commanders. The report specifically called on the Army to reinvent the entire war reserve process, and work with the unified combatant commands and other Army commands to build credible requirements and better planning factors. The Army Materiel Command Inspector General also reported in 2001 that the Army and the combatant commands had not uncovered, mitigated, or elevated issues about the readiness of the Army’s prepositioning programs to the department level. Further, the report stated that the lack of centralized oversight fostered inefficiencies and impacted the effectiveness of the Army’s prepositioning program.

Lack of Valid Requirements and Insufficient Information Makes Oversight Difficult

The Army does not have sound requirements for some of its prepositioning programs, and neither the Army nor the Air Force has sufficient information about inventory levels and maintenance condition, making oversight difficult. Without valid requirements underpinning the services’ prepositioning programs, it is impossible to reliably assess the impact of reported shortfalls or equipment in poor maintenance condition. As a result, the services cannot assess the overall readiness of their prepositioning programs, which potentially leaves war fighters at risk of not having needed stocks in the future. In addition, assessing the readiness of prepositioned stocks requires reliable information about inventory levels and maintenance condition. 
Inventory levels are measured against requirements set by the services, while maintenance condition describes whether on-hand items work well enough to perform their mission. Because prepositioned stocks are intended to be used in the early stages of a conflict, the stocks need to be completely filled and in working order. Otherwise, the purpose of prepositioning is likely defeated. Such problems with questionable requirements and insufficient information are long-standing, and make it difficult for the services and the department to assess readiness, provide oversight, and support sound decision making about where to make program investments.

Questionable Requirements

During our review, we found that the requirements underpinning some of the Army’s prepositioning programs are questionable, which may make the impact of shortfalls difficult to assess. Specifically, Army officials told us that the war reserve information system used to calculate the requirements for some sustainment stocks had not been successfully updated since 1999, even though the Army is required to compute these requirements on an annual basis. While the Army is planning on recalculating these requirements by the end of 2005, it is currently unclear what the requirements for these stocks should be. As a result, program managers cannot be sure what to buy because they do not know if inventory shortfalls are valid. We reported on the operational impacts of this problem in our March 2004 testimony on prepositioned stocks used during Operation Iraqi Freedom. Additionally, in our April 2005 report, we found that because the process used to determine requirements for Army war reserve spare parts had not been updated, the war reserve inventories for some spare parts were inadequate in Operation Iraqi Freedom and could not meet initial wartime demands. 
In addition, inaccurate and inadequately funded Army war reserve requirements contributed to shortages in other items, such as track shoes for Abrams tanks and Bradley Fighting Vehicles and lithium batteries. Additionally, we identified problems with DOD’s process for establishing requirements for prepositioned munitions. For example, during our visit to U.S. Forces Korea, officials told us that the command is not afforded the opportunity to proactively participate in the determination of either total munitions requirements or, more specifically, prepositioned munitions requirements. In October 2002, we reported that DOD’s munitions requirements determination process did not fully consider the combatant commander’s preferences for munitions and weapon systems that will be used against targets identified in projected scenarios. We recommended that the Secretary of Defense establish a direct link between the munitions needs of the combatant commands and the munitions requirements determinations and purchasing decisions made by the military services. In October 2003, DOD issued instruction 3000.4, which required that the munitions requirements developed by each of the military services address the operational objectives of the combatant commanders against potential threats. In addition, it directed the military services to work directly with the military service component and the combatant commands to develop near- and out-year munitions requirements. Finally, it directed combatant commanders to review the military services’ generated munitions requirements and report any issues needing resolution during the planning and programming process. We found that these requirements are not being met. Only the Air Force visits the command prior to developing total munitions requirements to support purchasing decisions. The other services do not coordinate with the command prior to generating munitions requirements. Further, U.S. 
Forces, Korea, officials told us that they do not have the opportunity to review the service-generated munitions requirements prior to purchasing decisions and have no input into what munitions will be prepositioned or where those munitions will be located. While officials in the Office of the Secretary of Defense and Joint Staff expressed skepticism about the way U.S. Forces, Korea, had developed their munitions requirements, they agreed that the proper coordination was not occurring. As a result, the needed linkage between the combatant command’s needs and the munitions purchases made by the services continues to be inadequate and raises questions as to whether combatant commands will have what they need should a conflict arise.

Unreliable Information

We also found that the Army and Air Force lack reliable information on the inventory fill and maintenance condition of some prepositioned stocks. The lack of reliable inventory information may give program managers an unrealistic view of the preparedness of these programs. Army officials told us that the Army’s information management system does not provide reliable information on the inventory levels and maintenance condition of its operational projects and sustainment stocks. Army managers told us that this lack of inventory visibility has persisted for many years, and sometimes the only way to get reliable information is to contact the storage site directly. As recently as February 2005, the Army reported in the unclassified inventory information that it extracts from its main readiness reporting system that a high percentage of the combat brigade set prepositioned in South Korea was fully mission capable. However, in an October 2004 Army assessment, inspectors had found that a high percentage of the equipment reviewed was not mission capable. Air Force officials also told us that they do not have adequate information available to assess the overall readiness of their prepositioned stocks. 
While this information is decentralized and available in some cases to base and component commanders, information on inventory levels and maintenance condition is not available to Air Force managers overseeing the war reserve materiel program. Air Combat Command and Central Command, Air Forces, officials told us that in order to obtain information on the readiness of most prepositioned stocks, they had to contact the storage locations since this information is not readily available to them. Pacific Air Forces officials told us that they developed their own automated system to track the inventory levels and maintenance condition of the war reserve materiel prepositioned in their area of responsibility because the Air Force lacked a comprehensive system that provides reliable and timely readiness information on its war reserve program.

Problems with Requirements and Reliability of Information Have Been Long-standing, but Remain Unresolved

The problems we found during our review with requirements determination and the reliability of inventory information are not new. Our review of past reports going back to 1995 revealed that similar issues have been reported repeatedly, but have not been resolved. The findings from several past studies are described below, and appendix I provides a more comprehensive summary of the major findings from more than 30 past reports by us and the department's own studies. We have considered inventory management, and more recently supply chain management, high-risk areas since 1990. Specific to the prepositioning programs, we have previously reported numerous times on long-standing management problems. For example, we reported in our last review of prepositioning programs in 1998 that the Army and Air Force had poorly defined, outdated, and otherwise questionable requirements in their programs.
Our 1998 report also noted that it was difficult for DOD to assess the readiness of its prepositioned stocks and the impact of any shortfalls due to the poor information the services used to manage these programs. We also reported in 2001 that, among other things, a potential mismatch existed between the Army’s methodology for determining spare parts requirements and the Army’s anticipated battlefield needs. And more recently, we reported in January 2005 that DOD does not have the ability to provide timely or accurate information on the location, movement, status, or identity of its supplies due to long-standing data accuracy and reliability problems within existing inventory management systems. The department’s own auditors and an Army command have also been sharply critical of program management, especially how program requirements have been determined. For example, the Army Materiel Command reported in 2003 that the requirements computation for war reserve stocks and stockage lists for prepositioned stocks did not accurately portray what was needed for Operation Iraqi Freedom. These stockage lists did not contain the most critical items needed to sustain combat equipment during the operation. In addition, the Army Audit Agency reported in 2004 that Army program managers had not reviewed the requirements for many of the operational projects it examined. As a result, some operational projects contained inaccurate, overstated, or questionable requirements. Of $1.5 billion in requirements examined, about $727 million were valid, $472 million were invalid, and about $280 million were questionable. In addition, the Air Force Audit Agency reported in May 2003 that Air Force personnel did not properly segregate certain war reserve requirements from peacetime operating spare parts requirements, resulting in more than $118.8 million of overstated requirements for peacetime. Past reports have also revealed problems with the reliability of inventory information. 
In 2001, Army auditors reported that the lack of reliable data on operational projects and sustainment stocks impeded the overall readiness capability of the Army's prepositioning program. In addition, the Army reported that there was a general lack of confidence in the information management system used to provide information on inventory levels. More recently, the Army Materiel Command's 2003 report on lessons learned in Iraq also found that different automated systems provided different inventory levels at the same storage location during operations in Iraq. Similarly, a June 2004 CNA Corporation after-action report on the Marine Corps' prepositioning program in Operation Iraqi Freedom found that the Marine Corps did not have reliable information on the status of some prepositioned equipment used to support operations in Iraq. Specifically, due to a lack of automated tracking systems, the Marine Corps had to use manual methods for tracking equipment with hand counts and written reports. As a result, Marine Corps commanders did not have clear and accurate tools for determining where cargo was in the pipeline, and more importantly, forecasting when equipment would arrive and when integration would be complete.

DOD Lacks a Plan to Coordinate Future Prepositioning Programs

DOD has not developed a coordinated departmentwide plan or joint doctrine to guide the future of its prepositioning programs, despite the heavy use of prepositioned stocks in recent conflicts and the department's plans to rely on them in the future. The 2005 National Defense Strategy specifically notes the importance of prepositioning in the future and indicates that prepositioning programs should be more innovative, flexible, and joint in character, but provides few details on how DOD plans to accomplish these goals. In addition, the independent Overseas Basing Commission recently echoed the continued importance of the department's prepositioning programs in the future.
In the absence of a departmentwide plan or joint doctrine to coordinate the reconstitution and future plans for these programs, the military services have been recapitalizing some stocks and developing future plans for their programs without a clear understanding of how they will fit together to meet the evolving defense strategy. Without an overarching framework that establishes priorities for prepositioning among competing initiatives and identifies the resources required to implement the future programs, DOD cannot provide assurances to Congress that the billions of dollars that will be required to recapitalize the stocks and develop future programs will ultimately produce programs that will operate jointly, support the needs of the war fighter, and are affordable.

National Defense Strategy and Overseas Basing Report Indicate a Reliance on Prepositioning in the Future

The most recent National Defense Strategy published in March 2005 states that to strengthen DOD's capability for prompt global action and flexibility to employ military forces where needed, prepositioned stocks "will be better configured and positioned for global employment." This overarching defense strategy establishes key goals for the future of defense capabilities such as the prepositioning of support materiel and combat capabilities in critical regions of the world and along key transportation routes, and a greater reliance on joint prepositioning capabilities that will be in accordance with other aspects of transformation. However, while such goals confirm that prepositioning will continue to play a key role in the evolving military strategy, the National Defense Strategy provides no specific details on how the department and the military services will accomplish them.
In addition, the recently released report of the Overseas Basing Commission states that where DOD puts prepositioned stocks, what they comprise, and how they are maintained is central to the department's operational capability. The report states that prepositioning is "imperative" for quick response of U.S. forces in areas of the world where access may be difficult, and calls for tight integration of service concepts, doctrines, and plans as a first step in ensuring the sustainability of prepositioning. Importantly, the Commission recommends that given the centrality of these stocks to the operational capability of U.S. forces, their high costs, and their anticipated heavy use over time, Congress should periodically review the status of prepositioned stocks.

DOD Has Not Developed a Plan or Joint Doctrine for Its Prepositioning Programs

While it seems certain that DOD will continue to rely on prepositioning in the future, it is unclear how prepositioning will fit into the department's future plans: DOD currently has neither a department-level plan that specifies how the department and the military services will work together to shape the future of their prepositioning programs nor joint doctrine for those programs. DOD officials told us that the future of its prepositioning programs has not yet been determined, in part because the future is dependent on the outcome of several interrelated studies ongoing within the department. For example, DOD is currently reviewing the mobility capabilities required to meet the full range of mobility needs for all aspects of the national defense strategy. According to DOD officials, the recommendations from this study will likely have a significant impact on the services' prepositioning programs since requirements for prepositioning are being factored into the mobility deliberations.
In addition, in March 2003, the Secretary of Defense requested that the department develop a comprehensive and integrated presence and basing strategy for the next 10 years. This strategy will build upon multiple DOD studies and will use information from the combatant commanders to determine the appropriate location of the basing and associated infrastructure necessary to execute the U.S. defense strategy. DOD officials told us that the basing study will also likely have an impact on prepositioning as the services will need to determine where to preposition their stocks to support the new defense strategy. Although some preliminary results have been released, DOD officials stated that once these studies are completed, they will have a better understanding of how prepositioning will be able to support the war fighter. Similarly, DOD has not developed joint doctrine to guide the planning and employment of its prepositioning programs. DOD defines joint doctrine as the fundamental military principles that guide the employment of forces of two or more services in coordinated action toward a common objective. DOD's transformation guidance states that part of the department's transformation efforts is developing concepts to operate in a joint environment, and placing a continuing emphasis on the importance of expeditionary operations. DOD has published joint doctrine in a number of areas, including deployment and redeployment operations, multinational operations, and military operations other than war. However, in the absence of a departmentwide plan and joint doctrine for prepositioning, the military services currently plan and implement separate programs in an independent, service-centric manner. A service-centric approach to prepositioning potentially misses opportunities to achieve greater efficiencies where service programs overlap.
In a 2003 Joint Staff-sponsored study on strategies for prepositioning, the Logistics Management Institute found that the military services continue to program for prepositioning materiel to meet individual service rather than joint requirements. As a result, the services may overstate operational requirements and put unnecessary burdens on limited transportation assets that would be required to move these prepositioned assets from their storage locations to the operational sites. For example, although the Army and Air Force have separate bare base programs, there is a lack of commonality among the design and components of these programs even though basic capabilities are the same. Moreover, this service-centric approach to prepositioning is out of step with DOD's transformation guidance, which states that developing concepts to operate in a joint environment and a continuing emphasis on the importance of expeditionary operations is key to the department's transformation efforts.

Reconstitution Likely to Be Delayed Due to Ongoing Operations, but Delay Offers DOD Opportunities to Set Clear Direction for Programs

Clearly, prepositioning figures prominently in the department's future plans, but the services do not have precise estimates of the costs and time required to reconstitute their prepositioned stocks since the services continue to use these stocks in Afghanistan and Iraq. In a recent report to Congress, DOD estimated that the costs to reconstitute the Army and Marine Corps' prepositioned equipment will be between $4 billion and $5 billion. The report acknowledges, however, that these estimates may change depending on several factors, including the length of time the equipment is in use, the number of combat losses, and any changes in the future plans for its prepositioning programs.
However, most of the costs required to reconstitute and recapitalize the Army and Marine Corps' prepositioned stocks have not been budgeted for in the department's baseline submissions or supplemental funding requests. In the absence of a departmentwide plan that coordinates the reconstitution of these programs with the future plans of the department's prepositioning programs, the services are developing plans to reconstitute and recapitalize their prepositioned stocks without a clear understanding of how the future of these programs will fit together in support of the evolving defense strategy. According to Army officials, plans to reconstitute the equipment and return it to the combat brigade sets are uncertain because, in some cases, Army units are continuing to use prepositioned stocks to support operations in Iraq instead of bringing their own equipment. In addition, the Army is placing a higher priority on using its resources to support ongoing operations and its modular conversion initiative—restructuring its forces to make them more flexible and rapidly deployable. As a result of this initiative, the Army is planning to use combat equipment that was part of the prepositioned brigade sets to meet the increased equipment requirements. For example, over 11,000 pieces of prepositioned combat equipment used in Iraq—such as tanks, Bradley Fighting Vehicles, and armored personnel carriers—are slated to be repaired and turned over to active duty units. Furthermore, Army officials told us that decisions have not been made as to whether the sustainment and operational project stocks will be reconstituted because of the large investments required and the uncertainty of the future plans for the Army's prepositioning program. DOD's recent report to Congress estimates the costs to reset and reconfigure the Army's prepositioned stocks to be more than $4 billion.
According to the report, however, these costs are not currently captured in DOD's baseline submissions or in any of its supplemental funding requests. The Marine Corps' prepositioning programs are expected to have a reduced capability until at least 2008. The department's April 2005 report to Congress estimates the cost to reconstitute the Marine Corps' prepositioned equipment at approximately $490 million. Of this amount, about half ($247 million) was requested in the department's most recent supplemental request. In the past year, the Marine Corps considered its options to reconstitute the equipment stored aboard the prepositioning ships given its continuing commitment to support operations in Iraq. The Marine Corps recently decided to partially refill the five ships offloaded to support operations in Iraq; however, due to the limited availability within the Marine Corps of needed equipment, such as heavy cargo trucks and High Mobility Multi-purpose Wheeled Vehicles, the Marine Corps forecasts that these ships will have major end-item fill rates of less than 50 percent. Additionally, Marine Corps officials stated that the reconstitution of the stocks in Norway is scheduled to be completed by 2008, at which time the fill rate of these stocks is projected to be approximately 88 percent. Air Force officials stated that they do not know when they will be able to reconstitute prepositioned stocks and return them to storage. As part of its reconstitution effort, the Air Force is in the process of replacing or converting all of its existing bare base sets into a smaller and more modular configuration. However, it is uncertain when the new sets will be available. For example, the Air Force had budgeted approximately $320 million in fiscal year 2005 for procurement of the new bare base sets.
However, according to an Air Force official, Congress reduced the Air Force's budget by $53 million because it was concerned about the large increase in the Air Force's procurement budget for that year. As a result, the official stated that the Air Force will not be able to procure all of the required sets. In addition, Air Force officials told us that they do not know when reconstitution for other categories of prepositioned stocks will be completed since much of this equipment is still in use.

Without a Plan and Joint Doctrine, the Military Services and Defense Logistics Agency's Prepositioning Plans Are Uncoordinated

Each of the military services and the Defense Logistics Agency is planning the future of its prepositioning programs without the benefit of an overall plan or joint doctrine to coordinate their efforts. Thus, it is unclear to us how the programs will fit together to meet the evolving defense strategy. DOD officials representing the Joint Staff and the services shared our assessment and concerns. According to these officials, the Joint Staff has formed a working group that is focused on establishing common definitions for prepositioning as a first step in developing joint doctrine and setting a future plan for the department's prepositioning programs. The future of the Army's prepositioning program, the largest of DOD's programs, remains unclear; the Army acknowledges that it faces continuing funding challenges as it attempts to modernize, support ongoing combat operations, and reconstitute its prepositioned equipment. The Army has a major effort ongoing to transform its units into more flexible, rapidly deployable forces at the same time it is supporting ongoing combat operations.
The Army’s future prepositioning strategy was being revised during our review, so we could not assess how this overall transformation—commonly called “modularity” by the Army—will affect the prepositioning program. In addition, the Army’s prepositioned stocks will have to be reconstituted due to their heavy use in Operation Iraqi Freedom. According to Army officials, however, the Army is nearing completion on a new strategy for its prepositioning programs. They told us that prepositioning will continue to be important in the future and that the prepositioned sets would be converted to the “modular” configuration by 2012 or sooner. While the Marine Corps and Navy have identified concepts for future prepositioning programs, they have not developed firm schedules and cost estimates for these programs. For example, the Marine Corps is planning on changing the focus of its prepositioned stocks in Norway from their Cold War configuration to a more global support capability. Additionally, the Marine Corps is considering a fundamental change to the future of its prepositioning program that would replace existing Maritime Prepositioning Force ships with an undetermined number of new ships with a wider range of capabilities. These ships are intended to be an integral part of a future Navy sea base. The seabasing concept provides maritime platforms capable of supporting at-sea arrival of forces, assembly of those forces, rapid movement ashore, and combat sustainment without reliance on shore facilities. While such seabasing is envisioned by DOD to be a joint service capability, it is not clear how this will be accomplished. Furthermore, the affordability of the program is in question—this new concept could cost billions of dollars. The Defense Logistics Agency began developing a global stock positioning strategy in 2004 to support its overseas customers for the items it manages. 
The strategy involves a combination of fixed-forward depots, a floating distribution center, and a deployable distribution depot. Fixed-forward stocking depots have been established at the following locations: Germersheim, Germany; Yokosuka, Japan; Pearl Harbor, Hawaii; Sigonella, Italy; Kuwait; Guam; and South Korea. The floating distribution center involves a mobile floating depot which will be capable of providing immediate distribution within the first 30 days of a contingency and could operate as part of the seabasing concept. The deployable distribution depot will be able to provide a full range of distribution capabilities in a theater of operations early in a contingency in developed or remote operating areas. These last two capabilities are still being developed, and the Defense Logistics Agency does not yet have firm estimates of their costs. The Air Force is also planning changes for its prepositioning programs. It is transforming its bare base sets into a smaller, more modular configuration and is considering new prepositioning sites to support the new defense strategy. However, Air Force officials told us that the Air Force cannot make some decisions related to new storage sites for its prepositioned stocks until DOD's basing study is complete. Without a plan or joint doctrine to guide their efforts, the services are planning for the future of their programs without an overarching framework that establishes priorities for prepositioning among competing initiatives, develops performance goals to measure success, and identifies resources to implement plans. Until the department determines how prepositioning fits into future military plans, it cannot provide assurances to Congress that the substantial investments required to recapitalize the stocks will be affordable.

Conclusions

Prepositioning seems certain to be a key component of U.S.
military strategy for years to come, but the department must make it a priority to overcome past management problems and ensure its future. In the near term, operational risks may exist should other military contingencies arise given the current inventory shortfalls and poor maintenance condition of some prepositioned stocks. However, the department has not developed concrete plans to overcome these challenges, even though inventory shortfalls and maintenance issues exist in the prepositioned stocks in potential trouble spots such as South Korea. Despite the importance of prepositioning to the military, long-standing management problems persist and the programs seem to have received little attention at the department level. Oversight mechanisms are in place, but they have been ineffective or ignored. Leadership and accountability begin at the top. Until DOD fully implements its own directive on war reserve materiel, oversight of its prepositioning programs will likely continue to be inadequate and the department will be unable to assess risks associated with any shortfalls in the programs. Moreover, DOD lacks reliable information regarding its prepositioning programs and will be unable to make reliable assessments of the readiness of these programs. This could result in failure to obtain the right amount and types of equipment for the designated prepositioning locations, which could ultimately jeopardize the ability of U.S. forces to accomplish their war-fighting missions and leave them at risk. Congress is also concerned about these issues and directed the Secretary of Defense to submit a report on its prepositioning plans by October 1, 2005. Looking toward the future, without a coordinated plan and joint doctrine that identifies the role of prepositioning in the transformed military, the department cannot plan the future of its programs in a comprehensive manner.
As a result, DOD cannot provide assurance to Congress that its prepositioning programs will be coordinated, effective, and affordable. Taking all these problems together—and considering them against the backdrop of growing operational and fiscal strains on the military—we believe the future of the prepositioning programs is at risk. Unless the department addresses long-standing management issues and sets a clear plan for the future, the department and Congress cannot make informed decisions about the significant investments needed to reconstitute or recapitalize the stocks.

Recommendations for Executive Action

To address the risks and management challenges facing the department's prepositioning programs and improve oversight, we recommend that the Secretary of Defense take the following five actions:

Direct the Chairman, Joint Chiefs of Staff, to assess the near-term operational risks associated with current inventory shortfalls and equipment in poor condition should a conflict arise.

Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to provide oversight over the department's prepositioning programs by fully implementing the department's directive on war reserve materiel and, if necessary, revise the directive to clarify the lines of accountability for this oversight.

Direct the Secretary of the Army to improve the processes used to determine requirements, and direct the Secretaries of the Army and Air Force to improve the processes used to determine the reliability of inventory data, so that the readiness of their prepositioning programs can be reliably assessed and proper oversight over the programs can be accomplished.

Develop a coordinated departmentwide plan and joint doctrine for the department's prepositioning programs that identifies the role of prepositioning in the transformed military and ensures these programs will operate jointly, support the needs of the war fighter, and are affordable.
Report to Congress, possibly as part of the mandated October 2005 report, how the department plans to manage the near-term operational risks created by the inventory shortfalls and the management and oversight issues described in this report.

Agency Comments and Our Evaluation

DOD provided written comments on a draft of this report. These comments are reprinted in appendix III. DOD partially or fully concurred with our recommendations. However, in its response, DOD disagreed with the implementation of two of our recommendations because it had already taken actions to address them. In subsequent discussions with DOD, officials indicated that this disagreement was not related to the substance of our recommendations. In fact, the department has already initiated several actions to address our recommendations, including conducting an assessment of risk, improving requirements and inventory visibility, and conducting a departmental assessment on future prepositioning. Further, DOD agreed that oversight policy as discussed in its directive does not reflect appropriate oversight roles and responsibilities. To address this issue, DOD plans to clarify policy and roles and responsibilities for oversight. With respect to our recommendation to improve requirements determination and the reliability of inventory data, the initial efforts taken by the Army and Air Force represent progress, but the planned actions should address all categories of the Army and Air Force's prepositioned stocks, as discussed in our report, and not just a portion of these programs. For example, the planned actions should also include the Army's operational project stocks and the Air Force's vehicle stocks, among others. Overall, we acknowledge the actions already taken by the department to address these issues, but DOD will need sustained management focus to resolve these deeply rooted and long-standing problems.
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Secretary of Defense, the Secretary of the Army, the Secretary of the Air Force, the Secretary of the Navy, and the Commandant of the Marine Corps. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staffs have any questions, please contact me at (202) 512-8365. Key contributors to this report are listed in appendix IV.

Appendix I: Past Products Identifying DOD Inventory Management and Prepositioning Challenges

The Department of Defense's (DOD) prepositioning programs have faced long-standing challenges including poor asset visibility; equipment excesses and shortfalls; and invalid, inaccurate, poorly defined, and otherwise questionable requirements. GAO, military service auditors, DOD's Inspector General, and others have called attention to these problems in products issued over the years. In 1990, we identified DOD's inventory management as high risk because inventory levels were too high and the supply system was not responsive to the needs of the war fighters. With the onset of Operation Iraqi Freedom, other supply chain issues related to inventory management have been reported as impediments. In a January 2005 update, we expanded this high-risk area to include DOD's management of its entire supply chain, which includes distribution, inventory management, and asset visibility. Table 3 provides summaries of challenges identified in select GAO reports and testimonies issued between January 1995 and March 2005.
Table 4 provides summaries of issues identified in select products released by other organizations during the same time period.

Appendix II: Scope and Methodology

To assess the near-term operational risk given the continuing use of prepositioned stocks, we obtained reports prepared by the military services on the inventory levels of their prepositioned stocks compared to program requirements. We also reviewed available maintenance reports or other data used by the services to measure the maintenance condition of the prepositioned stocks. In addition, we observed the physical condition of materiel stored by the Marine Corps at its prepositioning locations in Norway and aboard a prepositioning ship at its maintenance facility located at Blount Island, Florida, and observed the maintenance condition of the Army's prepositioned stocks at Camp Carroll, South Korea, and Sagami Army Depot, Japan. We interviewed program managers at each of the military services to determine the impact of reported shortfalls and poor maintenance condition in the prepositioned stocks and discussed the time frames and costs needed to repair or replace prepositioned stocks used in recent military operations. To assess the sufficiency of the Department of Defense's (DOD) and service-level oversight of these prepositioning programs, we discussed the processes used by DOD and the services to oversee their prepositioning programs with officials from the Office of the Secretary of Defense, the Joint Staff, and the military services. We reviewed relevant DOD directives and readiness reports prepared by the services and the Joint Staff to determine the extent to which the information contained in these reports could be used by DOD or the services to provide oversight.
We also reviewed past reports prepared by GAO, the Army Audit Agency, the Air Force Audit Agency, the Army Materiel Command Inspector General, and the CNA Corporation that identified problems with the reliability of data regarding the preparedness of the services' prepositioned stocks and problems with the requirements determination processes for some of these stocks. We discussed issues regarding the sufficiency of data on the preparedness of DOD's prepositioned stocks with program managers from each of the services. To assess whether DOD has developed a coordinated plan for the future of its prepositioning programs that would meet the goals of the recently published defense strategy, we collected and analyzed information from the military services and the Defense Logistics Agency on the future plans for their prepositioning programs. We also reviewed the recently published National Defense Strategy and discussed the future direction of the department's prepositioning programs with officials in the Office of the Secretary of Defense, the Joint Staff, and the military services. We conducted our review from July 2004 through May 2005 in accordance with generally accepted government auditing standards. We reviewed available data for inconsistencies and discussed the data with DOD officials. Our assessments of data reliability revealed significant concerns that are discussed in the report. We interviewed officials and obtained documentation at the following locations:

U.S. Army Headquarters, Washington, D.C.
U.S. Army Materiel Command, Ft. Belvoir, Virginia
U.S. Army Field Support Command, Rock Island, Illinois
U.S. Army Forces Command, Ft. McPherson, Georgia
U.S. Army Special Operations Command, Ft. Bragg, North Carolina
Eighth U.S. Army, Yongsan Garrison, South Korea
Combat Equipment Battalion-Northeast Asia, Camp Carroll, South Korea
Materiel Support Center-Korea, Camp Carroll, South Korea
Sagami Army Depot, Camp Zama, Japan
U.S. Marine Corps Headquarters, Arlington, Virginia
Marine Corps Combat Development Command, Quantico, Virginia
Blount Island Command, Jacksonville, Florida
Frigaard Storage Facility, Norway
Hammerkammen Storage Facility, Norway
Vaernes Aviation Storage Facility, Norway
Marine Corps Logistics Command, Albany, Georgia
Chief of Naval Operations, Washington, D.C.
CNA Corporation, Alexandria, Virginia
Naval Facilities Engineering Command, Washington, D.C.
Naval Special Warfare Command, San Diego, California
Naval Audit Service, Falls Church, Virginia
Naval Medical Logistics Command, Fort Detrick, Maryland
Military Sealift Command, Washington, D.C.
U.S. Air Force Headquarters, Washington, D.C.

Appendix III: Comments from the Department of Defense

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, John Pendleton, Assistant Director, Harold Reich, Assistant Director, Aisha Cabrer, Katherine Chenault, Lee Cooper, Jeff Kans, Renee McElveen, John Nelson, Emmy Rhine, Enemencio Sanchez, Patricia Sari-Spear, Kimberly Seay, Robyn Trotter, Matthew Ullengren, Eddie Uyekawa, Hector Wong, and Ignacio Yanes also made key contributions to this report.
The importance of prepositioned stocks to the U.S. military was highlighted during recent operations in Iraq, as much of the equipment and supplies stored at land sites in the region and aboard prepositioning ships was used to support operations. Long-standing problems in the Department of Defense's (DOD) prepositioning program are symptomatic of the inventory management issues, and more recently supply chain management issues, that GAO has designated as high-risk areas since 1990. GAO was asked to review the risks facing DOD's prepositioning programs, including an assessment of (1) the near-term operational risk given the continuing use of these stocks, (2) the sufficiency of DOD and service-level oversight of these prepositioning programs, and (3) whether DOD has developed a coordinated plan for the future of the department's prepositioning programs that would meet the goals of the recently published defense strategy. DOD faces some near-term operational risk should another large-scale conflict emerge because it has drawn heavily on its prepositioned stocks to support ongoing operations in Iraq. Although remaining stocks provide some residual capability, many of the programs face significant inventory shortfalls and, in some cases, maintenance problems. For example, the Army has drawn equipment from virtually all of its prepositioned stocks to support operations in Iraq. Some of its storage sites have shortfalls of equipment and sustainment items, like spare parts, and some stocks are in poor condition. Additionally, the Marine Corps has used a significant portion of the stocks downloaded from 5 of its 16 prepositioning ships to support operations in Iraq, and it is unclear when this equipment will be refilled. The Air Force is also continuing to use a considerable amount of its prepositioned stocks to support combat operations in Iraq, and it is likewise unclear when these stocks will be refilled.
The precise operational risk created by these shortfalls is difficult to assess. However, should a new conflict arise in the near term, the combatant commander would likely face difficult operational challenges. The department and the military services have provided insufficient oversight of DOD's prepositioning programs. This inattention has allowed long-standing problems with determining program requirements and managing inventory to persist. DOD has not enforced its directive that could provide centralized oversight of its prepositioning programs. Officials told us they did not enforce this directive because they were able to provide adequate oversight through other mechanisms. Even if the department had enforced its directive, however, the requirements underpinning some of DOD's prepositioning programs are questionable, and the services do not have sufficient information on the inventory level and maintenance condition of some prepositioned stocks. Without reliable information on requirements, inventory levels, and maintenance condition, DOD cannot provide sufficient oversight of its programs, which potentially leaves war fighters at risk of not having needed stocks in the future. DOD has not developed a coordinated departmentwide plan or joint doctrine to guide the future of its prepositioning programs, despite the heavy use of prepositioned stocks in recent conflicts and the department's plans to rely on them in the future. DOD's recently published defense strategy indicates that prepositioning programs should be more innovative, flexible, and joint. In the absence of a departmentwide plan or joint doctrine to coordinate the reconstitution and future plans for these programs, the services have been recapitalizing stocks and developing future plans without an understanding of how the programs will fit together to meet the evolving defense strategy.
Without a framework that establishes priorities for prepositioning among competing initiatives, DOD cannot provide assurances to Congress that the billions of dollars that will be required to recapitalize the stocks and develop future programs will produce programs that operate jointly, support the needs of the war fighter, and are affordable.
Background

The Stamp Out Breast Cancer Act (Pub. L. No. 105-41, Aug. 13, 1997) required that the Postal Service issue its first-ever semipostal—the BCRS. The Service issued the BCRS on July 29, 1998. The act required that the BCRS be available for sale for 2 years, but Congress has since extended the sales period through December 31, 2003. Semipostals are stamps sold with a surcharge above the First-Class postage rate, with the net surcharge amount going to a designated cause. The act stipulated that the BCRS surcharge was not to exceed 25 percent of the First-Class postage rate, which, at the time of issuance, was 32 cents. The act further stipulated that after recovering its reasonable costs, the Service was to transfer 70 percent of the remaining surcharge revenue to NIH and 30 percent to DOD for breast cancer research. The Service's presidentially appointed governors initially set the price of the BCRS at 40 cents—32 cents for First-Class postage plus the maximum 25-percent surcharge of 8 cents. Since that time, the price of First-Class postage has increased to 37 cents, and the price of the BCRS is currently 45 cents. On the day the initial sales period for the BCRS was to end, the Semipostal Authorization Act (Pub. L. No. 106-253, July 28, 2000) was enacted, which extended the sales period for the BCRS through July 29, 2002, and granted the Service authority to issue future semipostals of its own choosing. Additionally, the act required that the Service issue regulations governing future semipostals aside from the BCRS. Another act, the Breast Cancer Research Stamp Act of 2001 (Pub. L. 107-67, Nov. 12, 2001), further extended the sales period for the BCRS and established new requirements governing the sales price of the BCRS. That act extended the BCRS' sales period through December 31, 2003, and replaced the maximum 25 percent surcharge with a minimum 15 percent surcharge that, when added to the First-Class postage rate, is evenly divisible by five.
That is, the BCRS must be sold for an amount evenly divisible by five and must cost at least 15 percent more than First-Class postage. Specifically, the BCRS is currently sold for 45 cents, which is evenly divisible by 5; with the 8-cent surcharge, it costs about 22 percent more than the 37-cent First-Class postage rate. Additional legislation is currently pending that would extend the sales period for the BCRS through December 31, 2005. Since the BCRS was issued in 1998, Congress has passed legislation establishing two additional semipostals. One semipostal is to provide assistance to the families of emergency relief personnel killed or permanently disabled in the line of duty in connection with the terrorist attacks against the United States on September 11, 2001—commonly referred to as the Heroes of 2001 semipostal. The Service began selling the Heroes of 2001 semipostal on June 7, 2002, and its sales are scheduled to end no later than December 31, 2004, in accordance with the semipostal’s authorizing legislation. The other semipostal—commonly referred to as the Stop Family Violence semipostal—is to help fund domestic violence programs. Legislation requiring introduction of the Stop Family Violence semipostal specifies that sales are to begin no later than January 1, 2004, and end no later than December 31, 2006. Legislation was also pending in Congress at the end of August 2003 to establish semipostals to help promote childhood literacy and the Peace Corps. As of August 2003, the Service had issued no semipostals that had not been congressionally mandated. Images of the BCRS, Heroes of 2001, and Stop Family Violence semipostals are reproduced as figures 1, 2, and 3, respectively. The Service plans to begin selling the Stop Family Violence semipostal in November 2003. For more details about the BCRS and its background, see our April 2000 BCRS report. That report also includes information on semipostals issued by foreign postal administrations. 
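The pricing rule described above (a price evenly divisible by 5 cents that carries at least a 15 percent surcharge over First-Class postage) can be expressed as a short calculation. This is an illustrative sketch of the statutory rule, not anything the Service itself publishes:

```python
import math

def min_semipostal_price(first_class_cents: int, min_surcharge: float = 0.15) -> int:
    """Smallest price in cents that is evenly divisible by 5 and costs at
    least min_surcharge more than the First-Class postage rate."""
    floor = first_class_cents * (1 + min_surcharge)  # 42.55 for the 37-cent rate
    return 5 * math.ceil(floor / 5)                  # round up to a multiple of 5

print(min_semipostal_price(37))  # 45: the 8-cent surcharge is about 22 percent
```

Applied to the original 32-cent rate, the same rule yields 40 cents, which happens to match the price the governors set under the earlier 25-percent-maximum rule.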
Reported Monetary and Other Resources Devoted to the BCRS Program

The full cost of the BCRS program is not known. The Service reported that the bulk of BCRS costs from inception through May 16, 2003, were about $9.5 million, most of which were recovered through the First-Class postage portion of the BCRS. The Service does not track BCRS costs that it considers to be inconsequential, such as invoices less than $3,000. The Service also does not identify costs that it would have incurred whether or not the BCRS program had been established, such as overhead. Additionally, the Service reported that no staff have been hired because of the BCRS program, nor have any staff been dedicated to work full-time on the program. In response to a recommendation in our April 2000 BCRS report, the Service issued BCRS cost-recovery regulations in July 2000 and reported using these regulations, and amendments, to track and allocate BCRS costs. We are concerned, however, that the regulations can be interpreted as not requiring the Service to provide baseline comparisons for certain BCRS costs, e.g., printing, sales, and distribution, although the Stamp Out Breast Cancer Act specifically states that reasonable costs in these areas attributable to the BCRS should be recouped from the BCRS' surcharge revenue. Additionally, in our April 2000 report, we recommended that the Service make available to Congress the BCRS cost data and analyses necessary to provide assurance that postal ratepayers are not involuntarily contributing funds to breast cancer research. Although the Service committed to Congress to provide it with the data and analyses, Service officials told us that for a number of reasons the Service has not yet done so. In August 2003, Service officials said that they plan to reexamine their BCRS regulations and, as soon as practicable, provide Congress with current BCRS data and analyses.
Full BCRS Program Costs Unknown

Although the full cost of the BCRS program is not known, the Service reported that the bulk of the program's costs, from inception through May 16, 2003, were about $9.5 million. These costs do not include (1) direct costs for items the Service considers to be inconsequential, such as the cost of items that do not exceed $3,000 per invoice, and (2) indirect costs that the Service would have incurred whether or not the BCRS program had been established, such as overhead. Additionally, the $9.5 million does not include any staffing-related costs because, according to postal officials, no staff were hired for the BCRS program nor were any staff dedicated full-time to work on the program. These officials told us that all work associated with the BCRS was absorbed by existing staff and staff budget—i.e., the Service incurred no additional staffing-related expenses because of the BCRS. They also told us that the Service, with the exception of the law department, has not tracked staff hours devoted to the BCRS because it was not cost-effective to quantify and recoup inconsequential costs associated with the BCRS. Because all costs associated with the BCRS were not identified and tracked, the full cost of operating and administering the BCRS program is not known. The reported costs of the BCRS through May 16, 2003, are shown in table 1, broken down by type of cost. In addition to these costs, the Service could incur additional costs associated with the BCRS before its sales period ends, which is currently scheduled for December 31, 2003.

Allocation of BCRS Program Costs between the Postage Portion and Surcharge Revenue

Under the cost-recovery regulations the Service applies to the BCRS, the Service determined that $8.7 million, about 91 percent, of the $9.5 million in BCRS costs was recovered through the First-Class postage rate.
The Service also determined that the remaining $853,000 in costs were not those normally incurred with a comparable commemorative stamp and therefore were recovered through the BCRS' surcharge revenue. That is, about 9 percent of BCRS program costs were recovered through the surcharge revenue. Table 2 identifies, by cost item, the Service's reported cost of operating and administering the BCRS program, from inception through May 16, 2003, and the allocation of those costs between those covered by the First-Class postage rate and those recouped from the BCRS' surcharge revenue.

Service's Approach to Cost Recovery Has Evolved

In response to a recommendation in our April 2000 BCRS report, the Service issued BCRS cost-recovery regulations in July 2000, which it subsequently amended in 2001. At the time of our April 2000 report, the Service was using informal, evolving criteria to make decisions about which costs would be recouped from the BCRS' surcharge revenue and had not issued regulations in this area. In July 2000, the Service issued a revision to its Administrative Support Manual (ASM) that specified a "Cost Recovery Policy for the Breast Cancer Research Semipostal Stamp." The ASM provisions, which are viewed by the Service as part of its regulations, specified that the Service was to recover from its surcharge revenue those BCRS costs determined to be incremental. The regulations described some types of costs that the Service had determined to be incremental to the BCRS. Examples of such costs included (1) design and production costs in excess of the cost to produce equivalent stamps; (2) packaging costs in excess of the cost to package equivalent stamps; and (3) printing costs for items other than stamps that are specific to the BCRS, such as flyers and special receipts. In June 2001, the Service published in the Federal Register its regulations covering semipostals issued under the Semipostal Authorization Act.
Among those regulations was 39 C.F.R. 551.8, which established procedures for determining costs to be offset from semipostal differential revenue. On December 27, 2001, the Postal Service published a similar version of this regulation in section 645 of the ASM. The ASM regulations were made applicable to semipostals issued under the Semipostal Authorization Act, as well as the BCRS. The December 2001 revision to the ASM (hereafter referred to as the regulations) no longer refers to "incremental costs," as the July 2000 version did. The December 2001 regulations state that the Service is to recover BCRS costs that are determined to be in excess of the costs normally incurred for commemorative stamps having similar sales; physical characteristics; and marketing, promotional, and public relations activities. These regulations prescribe that, on the basis of judgment and available information, the Service is to identify stamps comparable with the BCRS and create a profile of selected cost characteristics, thereby establishing a baseline for cost comparison purposes. According to the regulations, BCRS costs that exceed the baseline costs for comparable commemorative stamps are to be recovered from the BCRS' surcharge revenue. In May 2003, we asked the Service to provide us the baseline costs for the comparable stamps being used to determine what costs are to be recovered from the BCRS' surcharge revenue. In July 2003, the Service provided us with what it referred to as costs above comparable stamp costs that were recouped from the BCRS' surcharge revenue and updated that information in August 2003. However, the Service did not provide us with the actual baselines used in making the determinations about which costs were to be recouped from the BCRS' surcharge revenue. The Service's December 2001 regulations provide guidance regarding its BCRS cost-recovery criteria.
The regulations state that cost items recoverable from the BCRS' surcharge revenue may include, but are not limited to, the following: packaging costs in excess of the cost to package comparable stamps, printing costs of flyers and special receipts, costs of changes to equipment, costs of developing and executing marketing and promotional plans in excess of the cost for comparable stamps, and other costs specific to the BCRS that would not normally have been incurred for comparable stamps. In addition, the Service's regulations state that BCRS costs that meet the following criteria will not be tracked: costs that the Service determines to be inconsequentially small, which include those cost items not exceeding $3,000 per invoice; costs for which the cost of tracking would be burdensome (e.g., costs for which the cost of tracking exceeds the cost to be tracked); costs attributable to mail to which the BCRS is affixed (i.e., costs that are attributable to the appropriate class and/or subclass of mail); and administrative and support costs that the Service would have incurred whether or not the BCRS program had been established. The regulations further identify the following BCRS costs—those the Service would normally incur for comparable stamps—as recovered through the First-Class postage portion of the BCRS stamp price. Therefore, baselines have not been established for these costs, which are as follows: stamp design (including market research); stamp production and printing; stamp shipping and distribution; estimated training for field staff, except for special training associated with stamp sales (including employee salaries and benefits); withdrawal of the stamp issue from sale; destruction of unsold stamps; and incorporation of semipostal images into advertising for the Postal Service as an entity.
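The baseline-comparison mechanism the regulations prescribe (recouping from surcharge revenue only those costs above what a comparable commemorative stamp would have cost) can be sketched in a few lines. The category names and dollar figures below are illustrative assumptions, not Service data:

```python
def surcharge_recoverable(actual_costs: dict, baseline_costs: dict) -> dict:
    """For each cost category, the amount above the comparable-stamp
    baseline is what would be recouped from semipostal surcharge revenue."""
    return {
        category: max(0, cost - baseline_costs.get(category, 0))
        for category, cost in actual_costs.items()
    }

# Hypothetical category figures, in dollars
actual = {"packaging": 50_000, "flyers": 12_000, "printing": 30_000}
baseline = {"packaging": 40_000, "flyers": 0, "printing": 35_000}
print(surcharge_recoverable(actual, baseline))
# {'packaging': 10000, 'flyers': 12000, 'printing': 0}
```

The point of the comparison is visible in the last category: where actual costs fall below the baseline, nothing is charged to the surcharge, which is why the absence of established baselines matters for the concerns discussed below.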
BCRS Cost-Recovery Regulations May Not Allow the Service to Identify and Recoup All Costs Attributable to the BCRS

The Stamp Out Breast Cancer Act specifically recognizes that printing, sales, and distribution costs attributable to the BCRS are among the types of reasonable costs the Service should recover from the BCRS' surcharge revenue. Section 414 (c) (2) of the act states that the Service must recover from the BCRS' surcharge revenue "an amount sufficient to cover reasonable costs . . . in carrying out this section, including those attributable to the printing, sale, and distribution of stamps under this section." The Service has determined, and we have no basis to challenge its discretion in this regard, that "reasonable costs" are costs in excess of those normally incurred for a comparable stamp. However, we are concerned that the regulations the Service issued to implement this requirement can be interpreted as not requiring the Service to provide baseline comparisons for certain BCRS costs, e.g., printing, sales, and distribution, although the Stamp Out Breast Cancer Act specifically states that reasonable costs in these areas attributable to the BCRS should be recouped from the BCRS' surcharge revenue. Our concerns with the regulations include the following: BCRS printing costs: The Service's December 2001 regulations can be interpreted as not requiring baseline comparisons for BCRS printing costs. The regulations could be interpreted to mean that all BCRS printing costs are covered by the First-Class postage portion and comparisons with baseline costs are not necessary. This interpretation is supported by the fact that, as of August 2003, the Service had not established a baseline cost for comparable stamps against which to compare BCRS printing costs. The Service did, however, provide information showing that the BCRS' printing costs between 1998 and 2003 ranged from $3.35 per thousand stamps to $7.39 per thousand.
The Service also provided information on printing costs for the three stamps that it considers comparable with the BCRS. The printing costs for these three stamps ranged from $11.52 per thousand in 1999 to $14.34 per thousand in 1997. Additionally, the Service provided printing costs for various commemorative stamps in 1998 through 2002. That information would tend to support the view that printing costs for the BCRS have not exceeded the printing costs for other commemoratives. Nevertheless, the Service did not establish a baseline for making BCRS printing cost comparisons. Therefore, the Service has not demonstrated that its regulations establish an adequate process for ensuring that excess semipostal costs are identified and recouped from surcharge revenues. Following its regulations, the Service reported that it did not recoup from the BCRS’ surcharge revenue any of the $3,597,000 it incurred in BCRS printing costs. Without a comparison between actual BCRS printing costs and the baseline printing costs for comparable stamps, the Service lacks assurance that it is identifying and recouping excess costs from BCRS surcharge revenue. BCRS sales costs: The Service’s December 2001 regulations can be interpreted as not requiring baseline comparisons for BCRS sales costs. The regulations can be interpreted to mean that all BCRS sales costs are covered by the First-Class postage portion and comparisons with baseline costs are not necessary. As of August 2003, the Service had not established a baseline cost for comparable stamps against which to compare BCRS sales costs. Unlike BCRS printing costs, the Service reported that it did not track BCRS sales costs because they were “minimal,” but it was unable to provide documentation supporting this position. The Service has reported that the BCRS was available for sale at over 27,000 post offices across the country, where salaries and benefits for its clerks average about $30 per hour. 
Service officials told us that no staff were hired for the BCRS program nor were any staff dedicated full-time to work on the program. However, the Service commented in July 2003 that each semipostal generates sales costs that it would not incur for commemorative stamps, such as time spent responding to customer questions about the fund-raising involved. In addition, the Service has reported that stamp sales costs are 24 cents per dollar for stamps sold at the window, compared with 14 cents for stamps sold at vending machines. However, the Service has more recently taken the position that stamp sales costs are substantially less than previously calculated. In September 2003, the Service was in the process of reviewing its stamp sales costs, but revised stamp sales figures were not yet available. Therefore, it is unclear whether the Service has incurred sales costs for the BCRS that are greater than those incurred for comparable commemorative stamps. Without a comparison between actual or estimated BCRS sales costs and the baseline sales costs for comparable stamps, the Service lacks assurance that it is identifying and recouping excess costs from surcharge revenue. In addition to these examples, we have similar concerns regarding other BCRS costs that are being handled in a manner similar to that described for BCRS printing, as well as sales. These other costs include stamp design, shipping, and distribution; estimated training for field staff, except for special training associated with the BCRS; withdrawal of the stamp issue from sale; destruction of unsold stamps; and incorporation of BCRS images into advertising for the Postal Service as an entity. 
We discussed our concerns about the Service’s cost-recovery regulations and their impact with Service officials, especially in light of statements made by Service officials in June 2001 that the issuance of multiple semipostals at the same time could significantly increase the administrative burden on the Service and ultimately burden existing staff and limited resources. Service officials said that their overriding concern in developing the cost-recovery regulations was to avoid having to establish cost-tracking systems that would cost more to develop and implement than the surcharge revenue to be collected from semipostals, including the BCRS. We pointed out that the Service already performs a number of cost-related studies that could possibly be used or modified to capture or estimate incremental semipostal costs, or that new approaches to capture or estimate such information might be possible and not be cost prohibitive. Service officials also said that in developing the regulations, they had not intended to preclude the Service from recovering excess costs in the printing, sales, and distribution categories, and they believe they can do so under the existing regulations. However, we remain concerned that the regulatory provisions do not require the Service to do so. In fact, the Service has not established baseline costs that would allow it to identify and recoup excess costs for printing, sales, and distribution. Therefore, we continue to believe that a reassessment of the regulatory provisions would be warranted. In view of our concerns, Service officials told us, in August 2003, they were planning such a reassessment. 
The Service Has Not Yet Met Its Commitment to Congress to Provide It with BCRS Cost Data and Analyses

In our April 2000 BCRS report, we recommended that the Service make available the data and analysis showing which BCRS costs have been recovered through the First-Class postage rate to provide assurance that postal ratepayers are not involuntarily contributing funds to breast cancer research. In a letter addressed to Chairman John M. McHugh of the former Subcommittee on the Postal Service, House Committee on Government Reform, the Service committed to provide, within 60 days of the conclusion of the BCRS' initial 2-year sales period (i.e., September 28, 2000), an analysis of the BCRS costs that the Service recovered through the base First-Class Mail, single-piece, first-ounce postage rate. The letter further stated that the analysis would demonstrate that the BCRS' incremental costs have been recovered solely from the surcharge revenue, and that its nonincremental costs have been recovered through the base postage rate. As of August 12, 2003, the Service had not yet provided the recommended BCRS cost data and analysis to Congress. Service officials explained that an administrative oversight, as well as subsequent events, led to the Service's not making this information available to Congress. The officials acknowledged that a consultant had drafted an internal paper that presented and analyzed fiscal year 1999 cost data on the BCRS. However, the officials noted that this paper had not been reviewed by postal management and was drafted more than 2 years ago, before the Service issued its current regulations on BCRS cost recovery. As we previously recommended, we continue to believe that the Service should prepare and make available the data and analyses of BCRS costs in order to provide ratepayers assurance that they are not involuntarily contributing funds to breast cancer research.
Further, we believe that making available current data and analyses is even more important now than before, given that additional semipostals have been authorized and more are likely in the future. More specifically, Congress has authorized two additional semipostals, and in August 2003 it was considering legislation authorizing two more semipostals and extending the sales period for the BCRS. Congress has also given the Postal Service specific authority to issue semipostals of its own choosing. Service officials told us in August 2003 they were planning a reassessment of the earlier BCRS internal paper and would provide Congress and us with the results of that reassessment as soon as practicable.

Effectiveness of the BCRS as a Fund-Raiser

The BCRS has continued to be an effective means of raising funds for breast cancer research. Although neither the Stamp Out Breast Cancer Act nor amendments to the act provide quantitative measures for evaluating the effectiveness of the BCRS as a fund-raiser, the act did provide that the BCRS was to give the public a voluntary and convenient way of raising funds for breast cancer research. We reported in April 2000 that the BCRS had been successful to those ends. Since then, the BCRS has continued to be a voluntary and convenient way for the public to contribute millions of dollars for breast cancer research. BCRS sales have fluctuated over time; however, the BCRS has raised over $30 million for breast cancer research since it was issued in July 1998. Additionally, most key stakeholders told us that, for the most part, they viewed the BCRS as an effective fund-raiser, and the public's view of the BCRS was generally positive, as reflected in the results from our survey. As of September 2003, the Service had transferred to NIH and DOD about $30.8 million from funds raised by the BCRS for breast cancer research.
These federal organizations reported to us that they have established programs to fund innovative breast cancer research conducted by various research institutions. NIH and DOD are not required to issue reports to Congress detailing how BCRS-generated funds were used or the accomplishments that resulted from the BCRS-funded research.

The BCRS Remains Voluntary and Convenient and Has Raised Millions of Dollars for Research

The BCRS has remained voluntary and convenient, as provided for by the act, and has raised over $30 million for breast cancer research since it was issued in July 1998. Postal patrons have the choice of purchasing regular First-Class postage stamps at 37 cents each or contributing to breast cancer research by purchasing the BCRS at 45 cents each. The BCRS remains convenient in that it is available for purchase from a variety of postal sources, including post offices, although two stakeholders reported instances when some post offices in their areas did not have the BCRS when they visited. Figure 4 shows the various sources from which the BCRS can be purchased. Our public opinion surveys—including our current 2003 survey and our earlier 1999 survey, both conducted by the same firm—indicate that about 70 percent of the public views semipostals as a convenient way to contribute to designated causes. These and other estimates from our 2003 survey are subject to sampling errors of less than +/- 6 percentage points (95 percent confidence level), as well as to additional errors of unknown magnitude due to the 89 percent nonresponse rate for the survey, as discussed in appendix I. As envisioned by the act, the BCRS has raised a substantial amount of money for breast cancer research. Postal officials report that since the BCRS was issued on July 29, 1998, the Service has sold over 450 million of these semipostals, generating over $30 million, net of costs, for breast cancer research.
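The sampling-error figure cited for the survey above follows from the standard margin-of-error formula for an estimated proportion. The sketch below assumes a simple random sample and an illustrative completed-sample size, which the report does not state:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95 percent margin of error for an estimated proportion p
    from a simple random sample of n completed responses."""
    return z * math.sqrt(p * (1 - p) / n)

# An assumed 250 completed responses (illustrative only; not from the report)
# keeps the margin under the +/- 6 percentage points cited above.
print(round(100 * margin_of_error(0.70, 250), 1))  # 5.7
```

Note that this formula captures only sampling error; it says nothing about the additional, unquantified error from the survey's 89 percent nonresponse rate.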
If BCRS sales continue at the fiscal year 2003 rate, about 486 million will have been sold by the time BCRS sales are scheduled to end on December 31, 2003, generating approximately $35 million in surcharge revenue.

BCRS Sales Have Fluctuated Over Time

Quarterly BCRS sales fluctuated considerably between 1998 and 2003 but have generally trended lower after reaching a high point of almost 40 million sales in quarter 3, 2000. During the early years that the BCRS was for sale, quarter 4, 1998, through quarter 4, 2000, quarterly sales varied from a low of 18.3 million to a high of 39.8 million, with average quarterly sales of 26.4 million. During the latter years, from quarter 1, 2001, through quarter 3, 2003, sales ranged from 14.9 million to 27.8 million, with average quarterly sales of 19.5 million. To help shed additional light on the continued effectiveness of the BCRS as a means of fund-raising, we also looked at quarterly sales data for the Heroes of 2001 semipostal to see if there was a discernible decline in BCRS sales during the quarters when both semipostals were being sold simultaneously. Although sales of the BCRS trended somewhat lower during the 4 quarters the Heroes semipostal was for sale, postal officials and other stakeholders did not believe there was a strong correlation. Postal officials pointed out that although BCRS sales declined during the period from quarter 4, 2002, through quarter 2, 2003, they did not drop nearly as precipitously as sales of the Heroes semipostal, which fell from 45.4 million in quarter 4, 2002, to 11.0 million in quarter 3, 2003. Also, some postal officials and other stakeholders believed that over the long term, postal patrons who repeatedly purchase semipostals tend to support causes that have organized, nationwide support bases. For example, some postal officials and other stakeholders believe many people who purchase BCRSs know someone who is fighting breast cancer or fought it in the past.
Likewise, postal patrons who repeatedly purchase BCRSs are likely to be aware that the BCRS is supported by many of the national breast cancer organizations or their affiliates. In contrast, some postal officials and other stakeholders speculated that the Heroes of 2001 semipostal may have initially been purchased by a large, diverse population eager to provide assistance to the families of emergency relief personnel killed or permanently disabled in connection with the terrorist attacks of September 11, 2001. These postal officials and other stakeholders suspected, however, that the large initial sales figures for the Heroes semipostal were not sustainable because that semipostal did not benefit from the support of a long-established, well-organized, nationwide network of organizations to keep it in the public eye. Figure 5 shows the number of BCRSs sold since date of issuance through quarter 3, 2003, as well as the number of Heroes of 2001 semipostals sold from date of issuance through quarter 3, 2003. Key Stakeholders Believe the BCRS Has Been an Effective Fund-Raiser The key stakeholders we spoke with that expressed a view about the effectiveness of the BCRS believed it had been effective in raising funds for breast cancer research. Some of the stakeholders who did not express a view on the effectiveness of the BCRS provided other comments about semipostals. Opinions of Key Stakeholders Who Expressed View That the BCRS Has Been an Effective Fund-Raiser Key stakeholders who believed the BCRS has been an effective fund-raiser included the Postal Service; Dr. B.I. Bodai (the individual credited with conceiving the idea for the BCRS and who, along with Ms. Betsy Mullen, lobbied Congress for the BCRS); Ms. Betsy Mullen (the Women’s Information Network Against Breast Cancer); the Susan G. Komen Breast Cancer Foundation; the American Cancer Society; and the American Philatelic Society.
According to postal officials, the effectiveness of the BCRS as a means of fund-raising is self-evident for two reasons. First, the BCRS has raised over $30 million for breast cancer research since it was issued in July 1998. Second, more than 450 million BCRSs had been sold through quarter 3, 2003, making the BCRS very popular when compared with the Service’s best-selling commemorative stamps. Postal officials note that although BCRS sales have periodically waxed and waned, yearly sales totals have remained strong since the BCRS was issued. Dr. B.I. Bodai believed the BCRS has been a more effective, consistent fund-raiser than expected. He said no one anticipated that the pennies generated from the sale of each BCRS across the country would, over time, total well over $30 million. Dr. Bodai said the BCRS was popular with families affected by breast cancer, but he believed sales could have been significantly higher if the Service and the various breast cancer organizations had even more vigorously and consistently promoted the BCRS over the past 5 years. Ms. Betsy Mullen of the Women’s Information Network Against Breast Cancer stated she believed the BCRS had been a very effective fund-raiser. Further, she noted that the BCRS’ effectiveness was not limited to raising funds; the stamp was also extremely effective at raising awareness of breast cancer and the fight to eradicate it. Ms. Mullen also stated that the Women’s Information Network Against Breast Cancer had worked very closely with Congress to ensure that money raised by the BCRS not supplant congressional appropriations for breast cancer research, and she believed money raised by the BCRS had not been used to supplant congressional appropriations to NIH and DOD for breast cancer research. She stated that from an educational perspective, the BCRS has been “priceless” in its role of promoting breast cancer awareness as a women’s health issue.
She said she believed that because of the BCRS, many more women have gotten mammograms than otherwise would have, and many lives therefore have been saved. The Susan G. Komen Breast Cancer Foundation stated that the BCRS has consistently been an effective means of raising funds since it was issued in 1998. The foundation expressed the belief that over the years, the BCRS has proven to be even more successful than anyone had initially anticipated. The foundation reiterated its earlier position that the BCRS has been a unique and innovative fund-raising tool and has raised breast cancer awareness on a global scale. Further, the foundation stated that if anything, it has become an even stronger supporter of the BCRS over the years. The foundation and its 118 affiliates across the country have found the BCRS to be not only a great means for raising awareness, but also an excellent promotional tool that has helped stimulate breast cancer organizations’ fund-raising activities—particularly at the local level. The American Cancer Society believed that time has proven the BCRS to be an effective means of raising funds for breast cancer research. As we reported in 2000, the American Cancer Society’s position had been that it was too early to label the BCRS as either effective or ineffective. However, the society stated that the BCRS has since shown that it has effectively raised money for breast cancer research. Society officials recalled that they had previously been concerned that the BCRS might take momentum away from federal funding for breast cancer research or adversely affect fund-raising organizations’ ability to raise research funds. They stated, however, that they had seen no evidence, over the past 5 years, to indicate that the BCRS had taken momentum away from federal funding for breast cancer research or adversely affected the American Cancer Society’s ability to raise research funds.
The society said that it still believes vigilance is in order to ensure that the BCRS does not affect research funding or fund-raising, but otherwise it has no concerns about the BCRS. Society officials said that the BCRS fits well with the society’s goals—one of which is to increase awareness of breast cancer. The society stated that it supports the BCRS. American Philatelic Society officials stated that they had been surprised at stamp collectors’ acceptance of the BCRS in particular, and semipostals in general. As we reported in 2000, the society was opposed to semipostals and believed they were a tax on the hobby of stamp collecting. Over time, however, the society has come to believe that the BCRS’ strong sales indicate that semipostals are now widely accepted, making them effective fund-raisers. Nevertheless, the officials cautioned that although stamp collectors are now accepting of semipostals, they do not want to see more than one or two new semipostal issues per year. Otherwise, stamp collectors would be forced to buy too many of the higher-priced semipostal issues each year in order to maintain complete stamp collections. Comments Made by Other Key Stakeholders The National Breast Cancer Coalition (NBCC) stated that its position on the BCRS had not changed since our April 2000 BCRS report. Officials stated that NBCC still believes there are more effective ways of raising money for research than using semipostals. NBCC stated that a better gauge of the BCRS’ effectiveness would be how well the surcharge revenue was spent on research rather than simply how much money the BCRS raised. NBCC continues to believe that effectively lobbying Congress holds the most promise for raising significant amounts of money for breast cancer research. The Chairperson of the Citizen’s Stamp Advisory Committee stated that it was outside the scope of the committee’s role to evaluate or take a position on the effectiveness of the BCRS.
The Citizen’s Stamp Advisory Committee is a 15-member group of citizens appointed by and serving at the pleasure of the Postmaster General for the primary purpose of providing the Postal Service with a “breadth of judgment and depth of experience in various areas that influence subject matter, character and beauty of postage stamps.” Under Postal Service regulations implementing the Semipostal Authorization Act, the committee is also responsible for reviewing eligible semipostal proposals and making recommendations to the Postmaster General on worthy cause(s) and executive agency(ies) eligible to receive funds raised by semipostals. The Chairperson emphasized that Postal Service management decides policy, administrative, and operational matters related to semipostals—not the Citizen’s Stamp Advisory Committee. She stated that the committee’s primary function is to review proposals for stamps and select subjects for recommendation to the Postmaster General that are both interesting and educational. Survey Respondents View Semipostals in a Positive Light To determine the public’s awareness of the BCRS and its view of semipostals in general, we included pertinent questions in our survey of the public. We asked the same question about awareness of the BCRS that we asked in our August 1999 survey to look for evidence about whether the public had become more aware of the BCRS over time. The survey results suggest that about 29 percent of adults were aware of the BCRS at the time of our recent inquiry—which occurred almost 5 years after the BCRS was issued. About 37 percent of women and about 19 percent of men were aware of the BCRS. The survey results from our August 1999 survey, which was conducted about 1 year after the BCRS went on sale, indicated that about 24 percent of adults were aware of the BCRS at that time. About 29 percent of women and about 18 percent of men were aware of the BCRS in 1999. 
We are unable to determine whether the changes in our awareness estimates are due to genuine changes in awareness or to sampling errors and other nonsampling errors related to the 89 percent nonresponse rate, as discussed in appendix I. To help gauge the public’s experience with the BCRS, we also asked the survey participants whether they had ever purchased a BCRS. About 12 percent reported they had purchased the BCRS. We did not ask a similar question in our 1999 public opinion survey. Transfers of Surcharge Revenue to NIH and DOD for Breast Cancer Research As of September 2003, the Service had transferred to NIH and DOD about $30.8 million from funds raised by the BCRS for breast cancer research. NIH and DOD reported to us that they have established programs to award funds for innovative breast cancer research conducted by various research institutions. As noted in our April 2000 BCRS report, the act specifies that after deducting its reasonable costs, the Service is to transfer 70 and 30 percent of the remaining surcharge revenue generated by the BCRS to NIH and DOD, respectively. The act also specifies that such transfers be made at least twice yearly under arrangements as agreed to between the Service and those agencies. Further, the act specifies that NIH and DOD are to use transferred BCRS surcharge revenues for breast cancer research. Unlike agencies that are to receive funds generated from semipostals issued under the Semipostal Authorization Act, NIH and DOD are not subject to annual reporting requirements. Agencies that receive funds from semipostals issued under the Semipostal Authorization Act are required to submit annual reports to Congress that include (1) the total amount of funds received during the year; (2) an accounting of how the funds were allocated or otherwise used; and (3) a description of any significant advances or accomplishments made during the year that were funded, in whole or in part, out of amounts received.
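The act's allocation rule lends itself to a one-line check. The sketch below (the function name is ours, for illustration) applies the statutory 70/30 split to the approximately $30.8 million in net surcharge revenue that the report says had been transferred as of September 2003.

```python
# Statutory split under the Stamp Out Breast Cancer Act: after the Postal
# Service deducts its reasonable costs, 70 percent of the remaining BCRS
# surcharge revenue goes to NIH and 30 percent to DOD. The $30.8 million
# net figure is from the report; the function itself is our illustration.

def split_surcharge(net_revenue: int) -> dict:
    """Allocate net BCRS surcharge revenue between NIH (70%) and DOD (30%)."""
    return {"NIH": round(net_revenue * 0.70), "DOD": round(net_revenue * 0.30)}

print(split_surcharge(30_800_000))  # {'NIH': 21560000, 'DOD': 9240000}
```

On the reported $30.8 million, the split works out to roughly $21.6 million for NIH and $9.2 million for DOD, consistent with the transfer totals shown in table 3 of the report.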
Information currently reported to Congress on NIH’s and DOD’s use of research funds generated by the BCRS does not adequately support congressional oversight. As mandated, our periodic reports to Congress focus primarily on the BCRS’ costs, effectiveness, and appropriateness, not on how NIH and DOD use BCRS surcharge revenues for breast cancer research and the accomplishments resulting from such research. To help manage their respective BCRS-funded research programs, NIH and DOD require award recipients to provide periodic reports on the progress being made and breakthroughs achieved. This is the same information that Congress requires of agencies receiving surcharge revenues generated by semipostals issued under the Semipostal Authorization Act, and NIH and DOD could, if required, readily submit this information to Congress on an annual basis. To date, the Service has complied with the requirements in the Stamp Out Breast Cancer Act regarding the transfers of BCRS surcharge revenue to NIH and DOD. NIH and DOD are using BCRS surcharge revenue transferred to them to fund breast cancer research. Table 3 shows the transfers, by fiscal year, that have been made since the BCRS was issued in July 1998. NIH and DOD officials said that, as required by the Stamp Out Breast Cancer Act, they have been using transferred BCRS surcharge revenue to fund breast cancer research. NIH officials said that BCRS surcharge revenue has been used to fund breast cancer research under the National Cancer Institute’s (NCI) “Insight Awards to Stamp Out Breast Cancer” initiative. The officials said that this program was designed to fund high-risk exploration by scientists who are employed outside the federal government and conduct breast cancer research at their institutions. They reported that 86 awards had been made as of April 2003, and most of the awards were for 2-year periods, with several projects still ongoing.
Excluding a single one-time supplement of $4,300, individual awards ranged from $47,250 to $142,500 and averaged about $111,000. The officials stated that these insight awards were innovative and high-risk projects, and many have been successful in leading to new insights and approaches in the biology, diagnosis, and treatment of breast cancer. The officials stated that NCI is currently considering additional research projects to be funded using BCRS surcharge revenue not yet committed. Detailed information provided by NIH/NCI on breast cancer research awards funded with proceeds from BCRS surcharge revenue is reprinted in appendix II. DOD officials told us that BCRS surcharge revenue had been used to fund “DOD Breast Cancer Research Program Idea Awards,” which are administered by the U.S. Army Medical Research and Materiel Command. Idea Awards are intended to encourage innovative approaches to breast cancer research. DOD officials told us that 19 awards had been made as of April 2003. Individual awards ranged from $5,000 to $578,000 and averaged about $356,500. These awards have focused on research into such areas as the biology of cancer cell growth and tumor formation, immunotherapy, and new areas of breast cancer detection. The officials stated that DOD plans to continue investing money received from BCRS surcharge revenue in programs that will encourage innovative approaches to breast cancer research. The officials also stated that about $256,000 of the transferred funds had been used for management expenses. Detailed information provided by DOD on breast cancer research awards funded with proceeds from BCRS surcharge revenue is reprinted in appendix III. Appropriateness of Using Semipostals as a Means of Fund-Raising Most of the key stakeholders we spoke with and the public believe it is appropriate for the Postal Service to sell the BCRS, as well as other semipostals, to raise funds for worthwhile causes.
When we issued our April 2000 report, the BCRS was the only semipostal available from the Postal Service. However, since that time, Congress has passed legislation mandating two additional semipostals and is currently considering legislation requiring two more semipostals and extending the sales period for the BCRS. Opinions of the Postal Service, Key Stakeholders, and Others Regarding Appropriateness The Service, NBCC, and the Citizen’s Stamp Advisory Committee generally viewed using semipostals to raise funds for designated causes as inappropriate; Dr. B.I. Bodai, Ms. Betsy Mullen, the Susan G. Komen Breast Cancer Foundation, the American Cancer Society, and the American Philatelic Society viewed using semipostals to raise funds as appropriate. The public also believes that it is appropriate to use semipostals as fund-raisers. Views of the Postal Service and Other Key Stakeholders The Postal Service has historically been opposed to semipostals. The Service believes that fund-raising through the sale of semipostals is an activity outside the scope of the Service’s mission as defined by the Postal Reorganization Act. The Service also remains concerned that the popularity of the BCRS does not necessarily portend the success of future semipostals, whether mandated by Congress or initiated by the Postal Service, and that future semipostals might generate only modest amounts of revenue while still requiring substantial postal expenditures. Postal officials are further concerned about having too many semipostals on the market at the same time. The BCRS, initially slated for a 2-year sales period, has been twice extended by Congress and has been on sale for over 5 years. Postal officials worry that if semipostals are mandated but not retired, the market for semipostals might become oversaturated to the detriment of individual semipostals as well as the semipostal program in general. The Susan G.
Komen Breast Cancer Foundation stated that the BCRS was appropriate when issued and remains appropriate today. The foundation continues to support the BCRS wholeheartedly. Further, the foundation believed that the BCRS provides an easy and convenient way for the public to support and contribute to breast cancer research. The foundation stated that during the 5 years the BCRS has been for sale, it has become “a unifying symbol of the fight to find a cure for breast cancer which has become woven into the fabric of America.” When feasible, the foundation uses the BCRS on both mass mailings and individual pieces of correspondence. The American Cancer Society continues to believe that it is appropriate to use the BCRS as a means of fund-raising. The society has held this opinion since the BCRS was first issued. The American Philatelic Society stated that its position on the appropriateness of the BCRS has moderated over time. The society no longer believes it is inappropriate for the Service to issue semipostals, changing its view because of the wide public acceptance of the BCRS. Society officials also told us that although BCRS costs are not identified and tracked with precision, they are in the ballpark given the regulations that the Service has issued for tracking and allocating costs. NBCC stated that its opinion regarding the appropriateness of using the BCRS as a means of fund-raising had not changed since our April 2000 BCRS report. NBCC still had reservations about the appropriateness of the BCRS, and officials stated that they were still concerned that the BCRS might be more of a symbolic gesture, on Congress’ part, than an all-out commitment to fund whatever research is needed to eradicate breast cancer in the shortest possible time. The Chairperson of the Citizen’s Stamp Advisory Committee stated that the committee’s position has always been that semipostals are inappropriate because fundraising is outside the scope of the Postal Service’s mission. 
The Chairperson noted that the committee had been against the Semipostal Authorization Act. The act mandated that the Service establish a semipostal program, and select causes to be represented by semipostals and agencies to receive funds raised through the sale of semipostals. The committee found it interesting that after giving the Service responsibility for selecting semipostals, Congress has continued to mandate additional semipostals. The committee is concerned that if Congress continues to mandate new semipostals without retiring old ones, a situation could eventually develop where semipostals, which are essentially commemorative stamps with a surcharge, might begin to “crowd out” the Service’s regular commemorative stamp program. This could present a nationwide problem in post offices because there is limited space in window clerks’ stamp drawers for different stamp issues. Because the Service requires that semipostals be available in all post offices at all times, the number of regular commemorative stamp issues might have to be limited to accommodate semipostals unless the number of semipostals for sale at any one time is limited. Dr. B.I. Bodai reiterated his belief that using the Postal Service to issue semipostals for worthy, nonpostal causes is very appropriate and is an example of what good government is all about. Dr. Bodai stated that the BCRS has not only been appropriate from the standpoint of raising money for breast cancer research but has also been extremely valuable as a tool for raising breast cancer awareness on a nationwide basis. He noted that the BCRS is so popular that some states, such as Georgia, have incorporated its image into specialty automobile license plates. Ms. Betsy Mullen of the Women’s Information Network Against Breast Cancer believes that the BCRS is very appropriate, as would be other semipostals that raise funds for worthwhile causes. Ms. 
Mullen believes that the Service can successfully sell two or more semipostals at the same time. She said that the Service has a long and successful history of concurrently selling multiple commemorative stamps, and the American public has demonstrated over the years its philanthropic support for multiple worthwhile causes. She also said that concurrently selling two or more semipostals is not a detriment to the semipostal program, but rather an enhancement because multiple semipostals cross-promote each other’s sales. She noted that the Service is cross-promoting the sale of the BCRS and Heroes semipostals through its advertisements of these semipostals at post offices. Finally, she stated that the Women’s Information Network Against Breast Cancer uses the BCRS on all of its correspondence, and, because of the BCRS, research is now being done that otherwise would not have been done. The Public’s View The public continues to believe that it is appropriate to use semipostals to raise funds for nonpostal purposes. Our public opinion survey conducted by International Communications Research (ICR) indicated that about 71 percent believe it is very or somewhat appropriate to use semipostals issued by the Postal Service, such as the BCRS, to raise funds for nonpostal purposes and about 23 percent believe it is somewhat or very inappropriate. Six percent had no opinion, said they did not know, or volunteered the answer that it would depend on the cause for which the semipostal was being used to raise money. Statistically, the differences between these opinions and the findings of our 1999 survey are not large enough to be significant. Statutory Authorities and Constraints On the legislative front, several laws have been enacted since our April 2000 BCRS report that affect the BCRS specifically or semipostals in general.
These laws have (1) twice extended the sales period for the BCRS, (2) authorized two additional semipostals, and (3) authorized the Service to issue future semipostals. Also, as of August 2003, Congress was considering legislation establishing two more semipostals and extending the sales period for the BCRS until December 31, 2005. As of August 2003, the Service had not issued any semipostals of its own choosing and had no plans to do so until the sales periods for congressionally mandated semipostals have ended. We believe this position is consistent with the discretion afforded the Service under the Semipostal Authorization Act. Conclusions We are concerned that the Service’s BCRS regulations can be interpreted as not requiring the Service to provide baseline comparisons for certain BCRS costs, e.g., printing, sales, and distribution, although the Stamp Out Breast Cancer Act specifically states that reasonable costs in these areas attributable to the BCRS should be recouped from its surcharge revenue. Although the Service has provided printing costs for various commemorative stamps, it has not established baseline costs for certain BCRS costs. Without these baselines, the Service lacks assurance that it is identifying and recouping excess costs from the BCRS’ surcharge revenue. If the Service does not recoup costs for items that exceed those of comparable stamps, the Service could be subsidizing BCRS costs. Furthermore, without baseline cost information for comparable stamps in the cost categories that the Service does track for the BCRS, it is impossible to determine whether the Service has recouped all reasonable BCRS costs that exceed those for comparable stamps in those categories. Further, the Service has not met its commitment to Congress to provide it with BCRS cost data and analyses, as we had previously recommended, to assure postal ratepayers that they are not involuntarily contributing to breast cancer research.
Without current BCRS cost data and analyses, Congress and the public continue to lack assurance that postal ratepayers are not involuntarily contributing funds to breast cancer research. Nearly all of the stakeholders that we spoke with consider the BCRS to be a success, particularly given its sales performance to date. According to NIH and DOD, millions of dollars in BCRS surcharge revenue have contributed to important new insights and approaches in the biology, diagnosis, and treatment of breast cancer, as well as in other areas of research. NIH and DOD provided us information regarding their use of BCRS surcharge revenue as well as advances or accomplishments they achieved. However, NIH and DOD are not required to submit annual reports to Congress like agencies that are to receive funds from semipostals issued under the Semipostal Authorization Act. Congress has twice extended the sales period for the BCRS and is currently considering a third extension. Therefore, should Congress decide to further extend the BCRS sales period, establishing annual reporting requirements for NIH and DOD, similar to the statutory reporting requirements established for agencies that would receive funds from semipostals issued under the Semipostal Authorization Act, would provide valuable information on the amount of funds received, how the funds were used, and any accomplishments resulting from the use of those funds. Matter for Congressional Consideration If Congress decides to extend the sales period for the BCRS past its scheduled end date of December 31, 2003, it may wish to consider establishing a requirement that NIH and DOD annually report to Congress, similar to the requirement for agencies that are to receive surcharge revenues generated from semipostals issued under the Semipostal Authorization Act.
Recommendations for Executive Action We are reaffirming our recommendation made in April 2000 that the Postmaster General direct postal management to make available the cost data and analyses showing which BCRS costs have been recovered through the First-Class postage rate to provide assurance that postal ratepayers are not involuntarily contributing funds to breast cancer research. We also recommend that the Postmaster General reexamine and, as necessary, revise the Service’s December 2001 cost-recovery regulations to ensure that the Service establishes baseline costs for comparable commemorative stamps and uses these baselines to identify and recoup excess costs from the BCRS’ surcharge revenue. As part of that process, the Postmaster General should publish the baseline costs the Service is using. This would help provide assurance that the Service is recouping all reasonable costs of the BCRS from the surcharge revenue. Agency Comments and Our Evaluation The Postal Service provided comments on a draft of this report in a letter from the Senior Vice President, Government Relations, dated September 10, 2003. These comments are summarized below and are reprinted as appendix IV. Postal officials also provided technical and clarifying comments, which we have incorporated into the report where appropriate. The Senior Vice President indicated that the Service plans to take appropriate actions to address our specific recommendations. He stated that the Service never intended that its BCRS cost-recovery regulations be interpreted as not requiring establishment of adequate baselines for comparing certain categories of costs. However, he acknowledged that the regulations might need to be revised to make the Service’s intent clearer. Regarding the establishment of baselines, he noted that comparisons between the BCRS and comparable commemoratives could involve different facets in various areas.
For example, he noted that printing cost comparisons could be difficult because they may involve differing time periods, different graphic designs, and different print runs. Nonetheless, he said that the Service would reexamine its semipostal regulations with a view toward proposing revisions about what costs are to be identified and recouped from surcharge revenues. In commenting on our reaffirmed recommendation that the Service make available BCRS cost data and analyses, the Senior Vice President stated the Service plans to reassess the earlier analysis it had commissioned on recovery of BCRS costs through the First-Class Mail postage rate in light of the cost-recovery issues raised in our report. He stated that the Service would provide Congress and us with the results of that reassessment upon completion. We are sending copies of this report to the Chairman and Ranking Minority Member, Subcommittee on Health, House Committee on Energy and Commerce; and to the Chairman and Ranking Minority Member, Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform because of their involvement in passage of the Stamp Out Breast Cancer Act. We are also sending copies of this report to Senator Dianne Feinstein and Representative Joe Baca because of their expressed interest in the BCRS; the Postmaster General and Chief Executive Officer, United States Postal Service; the Chairman of the Postal Rate Commission; and other interested parties. Copies will also be made available to others upon request. In addition, this report will be available at our Web site at http://www.gao.gov. Key contributors to this report are listed in appendix V. If you or your staffs have any questions about this letter or the appendixes, please contact me at (202) 512-2834 or E-mail at ungarb@gao.gov. 
Objectives, Scope, and Methodology Our objectives for this report were to fulfill our legislative mandate to update Congress on (1) the monetary and other resources the Postal Service has expended in operating and administering the Breast Cancer Research Semipostal (BCRS) program, (2) the effectiveness of using the BCRS as a means of fund-raising, and (3) the appropriateness of using the BCRS as a means of fund-raising. We also provide information on the status of recommendations made to the Postmaster General in our April 2000 BCRS report. In essence, we recommended that the Service formalize its criteria for making BCRS cost recovery decisions and make BCRS cost data and analyses available to assure postal ratepayers that they were not involuntarily subsidizing BCRS costs. To describe the monetary and other resources the Service has expended in operating and administering the BCRS program, we updated pertinent information presented in our April 2000 report to reflect current conditions. To do this, we interviewed officials in the Service’s Offices of Stamp Services and Finance responsible for administering the BCRS program and tracking its costs. We gathered and analyzed data on the surcharge revenue raised by the BCRS as well as data on the costs and resources the Service used in operating and administering the BCRS program. We also identified and reviewed the Service’s criteria for determining which costs are to be recouped from the BCRS’ surcharge revenue and, as necessary, discussed with finance officials the application of the Service’s criteria for certain cost items. To determine if the BCRS has been an effective means of fund-raising, we obtained and analyzed BCRS sales data and discussed with finance and stamp services officials how certain events may have affected sales. We obtained similar information for the Heroes of 2001 semipostal and compared sales for the two semipostals. 
We also obtained information on how much BCRS generated funds had been transferred to the National Institutes of Health (NIH) and Department of Defense (DOD) for breast cancer research, and obtained information on how the money was being used to further breast cancer research. We did not evaluate or assess NIH’s and DOD’s process for determining who would be awarded BCRS research funds, nor did we evaluate any of the individual awards. Additionally, we did not independently verify any of the financial data provided by NIH and DOD. Further, we interviewed all but one of the key stakeholders that we had interviewed for our April 2000 report to determine if their views on the BCRS’ effectiveness as a fund-raiser have changed since our last report. The key stakeholders interviewed included representatives of (1) the American Cancer Society, (2) the National Breast Cancer Coalition (NBCC), (3) the Susan G. Komen Breast Cancer Foundation, (4) Dr. B. I. Bodai, and (5) the American Philatelic Society. We did not interview the current Curator of the Smithsonian Institution’s National Philatelic Collection for this report. We had interviewed the former Curator for our April 2000 report, but the current Curator said that it was not within his personal expertise to evaluate the effectiveness or appropriateness of the BCRS, or semipostals in general, and it would not be proper for him to comment in his role as an official of the Postal Museum. For this report, we also interviewed Betsy Mullen, who is the founder of the Women’s Information Network Against Breast Cancer, and who, along with Dr. B.I. Bodai, lobbied Congress to pass legislation creating the BCRS. Further, we interviewed the Chairperson of the Citizens Stamp Advisory Committee because, since our last BCRS report, the committee has been given the responsibility for reviewing semipostal candidates and making recommendations to the Postmaster General. 
We did not update the information included in our April 2000 report on foreign postal administrations’ semipostal activities because of the time and resources that such work would have required and the limited new information that it likely would have yielded. To determine if the BCRS has been an appropriate means of fund-raising, we interviewed the same key stakeholders identified above to solicit their current views on the appropriateness of using the BCRS to raise funds. We also researched and analyzed applicable sections of the U.S. Code and Postal Service regulations to identify changes that have occurred since our April 2000 report that affected either the BCRS directly or the semipostal program in general. Additionally, we identified and analyzed pending legislation that would affect the Service’s semipostal program. We conducted our review at Postal Service Headquarters in Washington, D.C., from February through August 2003 in accordance with generally accepted government auditing standards. Public Opinion Survey To obtain the public’s opinion of the BCRS in 2003, we contracted with International Communications Research (ICR) of Media, Pa. ICR included five questions about the BCRS and semipostals in its national omnibus telephone survey, conducted over 5 days, from June 27 through July 1, 2003 (Friday through Tuesday). Omnibus surveys of this type also collect demographic information and include questions for other clients on other topics. For our previous survey in 1999, ICR followed the same survey procedures when it asked four of the five questions that we used in 2003. In 2003, interviews were completed with respondents at 1,038 of the estimated 9,046 eligible sampled households, for a response rate of about 11 percent.
These survey procedures yield a nonprobability sample of members of the population of the contiguous United States (48 states and the District of Columbia) who are 18 years or older, speak English, and reside in a household with a residential, land-based telephone. The 89 percent nonresponse rate means that estimates in the report are subject to nonsampling errors of unknown magnitude. Selection of Households and Respondents Random digit dial (RDD) equal probability selection methods were followed to identify telephone numbers using the GENESYS Sampling System. The GENESYS system draws numbers from those active banks of telephone exchanges that have at least two household numbers listed and are accessed through land lines. Exchanges assigned to cellular telephones are not included. The interviewers selected a member from each household, using a mixture of random and systematic procedures. Because adult males are more difficult to contact and interview in telephone surveys, ICR took the following measures to meet the specification of at least 500 completed male interviews, or approximately half of the sample. An interviewer first attempted to interview the adult male (aged 18 or older) with the most recent birthday. If that male was not present in the household at the time of the telephone call, then any other male present in the household at that time was selected; if no male was present, then an adult female was selected, with first preference being for the female present with the most recent birthday. Because the specifications were still not met, only males were interviewed during the closing phase of the survey. Although routine procedures specify five attempts to locate a respondent in each household, many households did not receive five calls and had not been contacted by the end of the interview period after one or more calls ended in a busy signal, no answer, or inability to complete a callback attempt. 
The respondent selection procedures eliminated interviewer judgment from the selection process, but did not yield a random, probability sample of the U.S. population. For example, these procedures exclude females who are present in households at the time when a willing male is present. The procedures also exclude any household members who are not at home at the time the interviewer contacts the household. Survey respondents are weighted in our analyses so that age, sex, education, and regional estimates from our survey will match U.S. data from the March 2002 Current Population Survey (CPS) on these demographic characteristics for the adult population (18 years of age and older) of the 48 contiguous states and the District of Columbia. The number of telephone numbers in the household and number of household members were also considered in the weighting process. Sampling Errors As with all sample surveys, this survey is subject to both sampling and nonsampling errors. The effects of sampling errors, due to the selection of a sample from a larger population, can be expressed as confidence intervals based on statistical theory. The effects of nonsampling errors, such as nonresponse and errors in measurement, may be of greater or lesser importance, but cannot be quantified on the basis of the available data. Sampling errors occur because we use a sample to draw conclusions about a much larger population. The survey’s sample of telephone numbers is based on a probability selection procedure. As a result, the sample was only one of a large number of samples that might have been drawn from the total telephone exchanges throughout the country. If a different sample had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. 
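The demographic weighting step described above can be sketched as simple post-stratification: each respondent's weight is the ratio of the population share to the sample share for his or her demographic cell. The cell shares below are hypothetical illustrations; the actual survey weighted on age, sex, education, and region against the March 2002 CPS.

```python
# Hypothetical post-stratification sketch: each respondent is weighted by
# (population share) / (sample share) for his or her demographic cell, so
# that weighted sample totals match an external benchmark such as the CPS.
# The shares below are illustrative, not the survey's actual figures.
population_share = {"male": 0.48, "female": 0.52}  # assumed benchmark shares
sample_share = {"male": 0.55, "female": 0.45}      # assumed respondent shares

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

# Overrepresented groups are weighted down (weight < 1) and
# underrepresented groups are weighted up (weight > 1).
for cell, w in weights.items():
    print(f"{cell}: weight = {w:.2f}")
```

Applying these weights to each respondent's answers makes the weighted sample mirror the benchmark population on the chosen characteristics, though it cannot correct for differences on characteristics not used in the weighting.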
For all the percentages presented in this report, we are 95-percent confident that when only sampling errors are considered, the results we obtained are within +/- 6 percentage points or less of what we would have obtained if we had surveyed the entire study population. For example, our survey estimates that 70 percent of the population feels that it is very or somewhat convenient to use special stamps to raise funds. The 95 percent confidence interval due solely to sampling errors for this estimate is between approximately 66 percent and 73 percent. Nonsampling Errors In addition to the reported sampling errors, the practical difficulties of conducting any survey introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, some types of people may be more likely to be excluded from the study, errors could be made in recording the questionnaire responses into the computer-assisted telephone interview software, and the respondents’ opinions may differ from those of people in the sampled households we did not successfully interview. For this survey, the 11 percent response rate is a potential source of nonsampling error; we do not know if the respondents’ answers are different from those of the 89 percent who did not respond. With the available information we cannot estimate the impact of the nonresponse on our results. Our results will be biased to the extent that the people at the 89 percent of the telephone numbers that did not yield an interview have different opinions about or experiences with the BCRS than did the 11 percent of our sample who responded. Once a respondent agreed to participate, the nonresponse for any particular item was low. Unless otherwise noted, less than 4 percent of the weighted answers to each question are in the category of not knowing an answer or refusing to answer the particular question.
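The sampling-error example above (a 70 percent estimate with a confidence interval of roughly 66 to 73 percent) can be approximated with a simple Wald interval. The sketch below assumes the full n of 1,038 completed interviews and ignores the survey's weighting and design effects, which widen the published interval slightly.

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Simple two-sided Wald confidence interval for a proportion.

    Ignores survey weights and design effects, so it is slightly
    narrower than the interval published in the report.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 70 percent of ~1,038 respondents found semipostals a convenient way to give.
low, high = wald_ci(0.70, 1038)
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 67% to 73%
```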
BCRS Questionnaire The section of the questionnaire that obtained information on BCRS issues, including the introduction and the five survey questions, follows: Since 1998, at the direction of Congress, the U.S. Postal Service has been selling a Breast Cancer Research stamp at a price above the First-Class postage rate. The stamp currently sells for 45 cents, with 37 cents covering the First-Class postage rate and most of the remaining 8 cents going to breast cancer research. This stamp is available at post offices, postal stores, special breast cancer fund-raising events, and from rural carriers and some postal vending machines. In order to provide the Congress with the public’s views on this topic, we would like to ask you some questions. BC-1. Prior to hearing what I just told you about the 45-cent Breast Cancer Research stamp, were you aware that the Postal Service was selling such a stamp? 1 Yes 2 No D (DO NOT READ) Don’t Know R (DO NOT READ) Refused BC-2. In your opinion, are special stamps with an added cost—such as the 45-cent Breast Cancer Research stamp—a convenient way for you to contribute to a special purpose? (READ LIST. ENTER ONE ONLY) 4 Definitely yes 3 Probably yes 2 Probably no 1 Definitely no D (DO NOT READ) Don’t know/No opinion R (DO NOT READ) Refused BC-3. In your opinion, how appropriate or inappropriate is it to use special stamps issued by the Postal Service to raise funds for nonpostal purposes? (READ LIST.
ENTER ONE ONLY) 4 Very appropriate 3 Somewhat appropriate 2 Somewhat inappropriate 1 Very inappropriate 5 (DO NOT READ) Would depend on cause/purpose D (DO NOT READ) Don’t know/No opinion R (DO NOT READ) Refused (IF Q 3 = SOMEWHAT INAPPROPRIATE OR VERY INAPPROPRIATE, CONTINUE; ELSE SKIP TO Q 5) National Institutes of Health Breast Cancer Research Awards Funded with Proceeds from the BCRS’ Surcharge Revenue As of April 2003, the National Cancer Institute (NCI) reported that it had funded 86 breast cancer research awards using money transferred to NIH by the Postal Service from the BCRS’ surcharge revenue. The awards totaled about $9.5 million and covered research areas that included prevention, nutrition, biology, diagnosis, treatment, prognosis, metastasis, tumorigenesis, and mutagenesis. Discounting a single, one-time supplement for $4,300, individual awards ranged from $47,250 to $142,500 and averaged $111,395. Thirty-two of the 86 awards were noncompetitive continuations of previous BCRS funded awards. According to NIH officials, they were in the process of awarding the remaining funds that had been transferred to NIH for breast cancer research. Table 4 identifies pertinent information about each award, including the amount of the award, research area, principal investigator, sponsoring institution, and the fiscal year of the award. Department of Defense Breast Cancer Research Awards Funded with Proceeds from the BCRS’ Surcharge Revenue As of April 2003, the U.S. Army Medical Research and Materiel Command reported that it had funded 19 breast cancer research awards using money transferred to DOD by the Postal Service from the BCRS’ surcharge revenue. The awards totaled about $6.8 million and covered research areas that included genetics, imaging, biology, epidemiology, immunology, and therapy. Individual awards ranged from $5,000 to $578,183 and averaged $356,478. 
According to DOD officials, about $256,000 of the transferred funds had been used for management expenses, and DOD was in the process of awarding the remaining funds. Table 5 identifies pertinent information about each award, including the amount of the award, research area, principal investigator, sponsoring institution, and the fiscal year of the award. Contact and Staff Acknowledgments Alan N. Belkin, Kathleen A. Gilhooly, Kenneth E. John, Stuart M. Kaufman, Roger L. Lively, Jill P. Sayre, and Charles F. Wicker made key contributions to this report.
In America, breast cancer is reported as the second leading cause of cancer deaths among women. Given this statistic, the importance of finding a cure cannot be overemphasized. To supplement the billions of federal dollars being spent on breast cancer research, Congress passed legislation creating the Breast Cancer Research Semipostal (BCRS) to increase public awareness of the disease and allow the public to participate directly in raising funds for such research. Since the BCRS was the first semipostal issued by the Postal Service, Congress mandated, and GAO issued, a report in April 2000 on the BCRS' cost, effectiveness, and appropriateness as a fund-raiser. After the report, Congress extended the BCRS sales period through 2003. As mandated, this report updates GAO's prior work as Congress considers another extension to the BCRS sales period. Although the U.S. Postal Service (the Service) has not tracked or estimated all costs associated with the BCRS program, it reported that the bulk of BCRS costs, from inception through May 16, 2003, were about $9.5 million. In April 2000, GAO recommended that the Service issue BCRS cost-recovery regulations and make available cost data and analyses to provide postal ratepayers assurance they were not involuntarily subsidizing BCRS costs. The Service issued regulations in July 2000, but it has not yet submitted the recommended data and analyses to Congress. Service officials attributed the failure to provide Congress with this information to administrative oversight and other factors, but said they would provide Congress with this information as soon as practicable. In 2001, the Service amended its BCRS regulations to state that cost-recovery determinations would be made using baseline costs for comparable commemorative stamps.
GAO, however, is concerned that the regulations can be interpreted as not requiring the Service to provide for baseline comparisons for certain BCRS costs, e.g., printing, sales, and distribution, although the Stamp Out Breast Cancer Act states that reasonable costs attributable to the BCRS in these areas should be recouped. The Service has not established baseline costs for these categories. Without these baselines, the Service lacks assurance that it is identifying and recouping excess costs from BCRS surcharge revenue. The BCRS continues to be an effective means of raising funds for breast cancer research. Sales have fluctuated, but the BCRS has raised over $30 million for research since it was issued in July 1998. NIH and DOD--recipients of research funds generated by the BCRS--are not subject to the same statutory reporting requirements as agencies that are to receive funds generated by semipostals issued under the Semipostal Authorization Act. Such agencies are required to submit an annual report to Congress on the amount of funds received, how the funds were used, and accomplishments. The public and key stakeholders GAO spoke with believe it is appropriate for the Service to issue semipostals.
Background The Medical Device Amendments of 1976 established three classes of medical devices. Under current law, these three device classes are defined as follows: Class I devices are those for which compliance with general controls, such as good manufacturing practices specified in FDA’s quality system regulation, are sufficient to provide reasonable assurance of their safety and effectiveness. Class II devices are subject to general controls and may also be subject to special controls, such as postmarket surveillance, patient registries, or specific FDA guidelines, if general controls alone are insufficient to provide reasonable assurance of the device’s safety and effectiveness. Class III devices are subject to general controls, but are distinguished from class I and II devices because class III devices are those (1) for which insufficient information exists to determine whether general and special controls are sufficient to provide a reasonable assurance of the safety and effectiveness of the device and (2) that support or sustain human life or are of substantial importance in preventing impairment of human health, or that present a potential unreasonable risk of illness or injury. Devices Exempt from FDA Premarket Review Under federal regulations, many types of devices are exempt from FDA premarket review. Although FDA does not track the number of devices that are actually sold or marketed in the United States, manufacturers are required to register with FDA and provide a list of devices intended for commercial distribution, including device types that are exempt from premarket review. As shown in figure 1, about 67 percent of the more than 50,000 separate devices that manufacturers listed with FDA during fiscal years 2003 through 2007 were exempt from premarket review. Of the exempt devices that manufacturers listed with FDA, about 95 percent were class I devices, for example reading glasses and forceps. 
About 5 percent were class II devices, for example wheeled stretchers and mercury thermometers. Premarket Review Process for Class III Devices With the enactment of the Medical Device Amendments of 1976, Congress imposed requirements under which all class III devices would be approved through the PMA process before being marketed in the United States. However, when it passed the 1976 amendments, Congress distinguished between those devices in commercial distribution before the date of enactment and those entering the market on or after enactment. Preamendment devices. Class III devices that were in commercial distribution prior to May 28, 1976 (referred to as preamendment devices) were allowed to be reviewed and cleared for the U.S. market without PMA approval until FDA published final regulations requiring each device type to obtain approval for the U.S. market through the PMA process. Postamendment devices. Devices that were not in commercial distribution prior to May 28, 1976 (referred to as postamendment devices) were classified automatically into class III and required to go through the PMA process unless FDA either (1) determined they were substantially equivalent to a preamendment device type for which premarket approval is not required or (2) reclassified the device type into class I or class II. Within this framework, Congress thus envisioned that class III devices would be approved through the more stringent PMA process and that the premarket review of class I and class II devices would entail a lesser degree of scrutiny. By the late 1980s, FDA had not acted to require PMAs for many preamendment class III device types. In 1990, the SMDA required FDA to 1. before December 1, 1995, order industry submission of safety and effectiveness information for preamendment class III device types that were not yet required to go through the PMA process; 2. 
after ordering industry submission of safety and effectiveness information but before December 1, 1995, publish regulations for each such device either revising its classification into class I or class II or requiring it to remain in class III; and 3. as promptly as is reasonably achievable, but not later than 12 months after the effective date of a regulation requiring a device to remain in class III, establish a schedule for the promulgation of regulations requiring the submission of PMAs for the preamendment class III device types required to remain in class III. The House of Representatives report accompanying the SMDA stated that “In formulating these schedules, the FDA should take into account its priorities and limited resources, together with the Committee’s intention that the evaluation process be expeditious.” In May 1994, FDA published a notice in the Federal Register announcing a strategy for implementation of the SMDA. According to the FDA memorandum outlining this strategy, the agency planned the following: To publish proposed regulations by 1996 requiring PMAs for 15 device types that FDA had determined to present an unreasonably high risk to public health because significant issues of safety or effectiveness or both were not being resolved or, to the best of FDA’s knowledge, had little probability of being resolved. According to FDA, the timetable for publication of each final regulation would be based on specific data needs, comments received (in response to the proposed rule), and the existence, if any, of petitions received to reclassify the devices. To order manufacturers to submit information on safety and effectiveness by 1998 for 58 device types. FDA identified 27 of these device types as not presenting as great a risk to the public health in light of FDA’s knowledge and experience with the devices. FDA identified the other 31 device types as strong candidates for reclassification. 
FDA’s strategy stated that after receipt of the safety and effectiveness information, the agency would proceed with rule making to either reclassify the devices or retain them in class III. To issue one proposed regulation in 1994 requiring PMAs for 44 device types in limited use. The agency’s strategy established a plan to start addressing the class III device types that were allowed to go through the 510(k) process, but it did not establish completion dates for doing so. See appendix III for additional information on the FDA strategy. FDA’s 510(k) Review Process As a general rule, devices are subject to 510(k) premarket review unless exempt or required to go through the PMA process. Specifically, the 510(k) process, established in 1976, requires a device manufacturer to notify FDA 90 days before it intends to market a device and to establish that the device is substantially equivalent to a legally marketed device that does not require a PMA. The legally marketed device is referred to as a predicate device. Under federal regulations, a predicate device can be a device that was legally marketed prior to May 28, 1976, for which a PMA is not required; was marketed on or after May 28, 1976, and was found to be substantially equivalent to a legally marketed device through the 510(k) process; or was reclassified by FDA from class III to class II or I. FDA reviews each 510(k) submission to determine whether the device in question is SE or NSE to a predicate device. To be SE, a device must (1) have the same intended use as the predicate device and (2) have the same technological characteristics as the predicate device or have different technological characteristics and submitted information demonstrates that the device is as safe and effective as the marketed device and does not raise different questions of safety or effectiveness.
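The substantial-equivalence test just described reduces to a short decision rule. The function below is an illustrative sketch of that logic; the parameter names are ours, not FDA terminology.

```python
def is_substantially_equivalent(same_intended_use: bool,
                                same_tech_characteristics: bool,
                                shown_as_safe_and_effective: bool = False,
                                raises_new_questions: bool = True) -> bool:
    """Illustrative sketch of the 510(k) SE decision rule described above."""
    if not same_intended_use:
        return False  # a new intended use leads to an NSE determination
    if same_tech_characteristics:
        return True   # same intended use and same technology: SE
    # Different technological characteristics: SE only if submitted data
    # show the device is as safe and effective as the predicate and it
    # raises no different questions of safety or effectiveness.
    return shown_as_safe_and_effective and not raises_new_questions
```

This mirrors the two-part test in the text: intended use is checked first, and differing technology shifts the burden to the submitted performance data.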
Because the predicate device may be a device that was marketed on or after May 28, 1976, that was found SE when compared to another legally marketed device through the 510(k) process, there could be multiple iterations of a given device type cleared through the 510(k) process. As a result, a 510(k) submission for a new device in 2008 could be compared to the 20th iteration of a device type that was on the market before 1976. Figure 2 shows FDA’s 510(k) decision-making process. Relative to the PMA process, the 510(k) premarket review process is generally: Less stringent. For most 510(k) submissions, clinical data are not required and substantial equivalence will normally be determined based on comparative device descriptions, including performance data. In contrast, in order to meet the PMA approval requirement of providing reasonable assurance that a new device is safe and effective, most original PMAs and some PMA supplements require clinical data. In addition, other aspects of FDA’s premarket review are less stringent for 510(k) submissions than for PMA submissions. For example, FDA generally does not inspect manufacturing establishments as part of the 510(k) premarket review process—the 510(k) review process focuses primarily on the end product of the manufacturing process rather than the manufacturing process itself. In contrast, the agency does inspect manufacturing establishments as part of its review of original PMA submissions. Manufacturing establishments that produce devices cleared through the 510(k) process, as well as those that produce devices approved through the PMA process, are subject to periodic inspections under FDA’s normal inspection program. Faster. FDA generally makes decisions on 510(k) submissions faster than it makes decisions on PMA submissions. FDA’s fiscal year 2009 goal is to review and decide on 90 percent of 510(k) submissions within 90 days and 98 percent of them within 150 days. 
The comparable goal for PMAs is to review and decide upon 60 percent of original PMA submissions in 180 days and 90 percent of them within 295 days. Less expensive. The estimated cost to FDA for reviewing submissions is substantially lower for 510(k) submissions than for PMA submissions. For fiscal year 2005, for example, according to FDA the estimated average cost for the agency to review a 510(k) submission was about $18,200, while the estimate for a PMA submission was about $870,000. For the applicant, the standard fee provided to FDA at the time of submission is also significantly lower for a 510(k) submission than for a PMA submission. In fiscal year 2009, for example, the standard fee for 510(k) submissions is $3,693, while the standard fee for original PMA submissions is $200,725. Consumer advocates have raised questions regarding the number of devices, particularly class III devices, that are cleared through the 510(k) process and regarding the use of the 510(k) process to clear devices that may utilize new technologies that are different than those in the marketed devices to which they are compared. Officials of associations representing medical device manufacturers, however, have asserted that the 510(k) premarket review is an important tool for reviewing device submissions, saying that it is a rigorous process that gives FDA the flexibility to identify and request the information it needs to assess the safety and effectiveness of medical devices. FDA Used the 510(k) Process to Review Class I and II Device Submissions, and Used Both the 510(k) and PMA Processes to Review Class III Device Submissions In fiscal years 2003 through 2007, FDA reviewed all submissions for class I and II devices through the 510(k) process, and reviewed submissions for some types of class III devices through the 510(k) process and others through the PMA process. 
Specifically, FDA reviewed all 13,199 submissions for class I and class II devices through the 510(k) process, clearing 11,935 (90 percent) of these submissions. FDA also reviewed 342 submissions for class III devices through the 510(k) process, clearing 228 (67 percent) of these submissions. In addition, the agency reviewed 217 original PMA submissions and 784 supplemental PMA submissions for class III devices and approved 78 percent and 85 percent, respectively, of these submissions. Although Congress envisioned that class III devices would be approved through the more stringent PMA process, we found that FDA has not published regulations requiring PMA submissions for some types of preamendment class III devices nor has it reclassified them. As a result, some types of class III devices have been cleared for the U.S. market through the 510(k) process. Table 1 summarizes the FDA review decisions, by class of device, in fiscal years 2003 through 2007 for 510(k) and PMA submissions. FDA Reviewed All Submissions for Class I and Class II Devices through the 510(k) Process FDA reviewed all class I and class II device submissions in fiscal years 2003 through 2007 through the 510(k) process. As shown in table 2, FDA cleared approximately 9 out of every 10 of the 510(k) submissions for class I and class II devices for which FDA made review decisions during this time period. Of the 10,670 510(k) submissions for class II devices that FDA cleared in fiscal years 2003 through 2007, FDA’s databases identified one-quarter as being for devices that were implantable; were life sustaining; or presented significant risk to the health, safety, or welfare of a patient (see table 3). Of these characteristics, implantable was the most frequently identified characteristic. 
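As a quick check, the clearance percentages cited above for fiscal years 2003 through 2007 follow directly from the submission counts:

```python
# Submission counts reported for fiscal years 2003 through 2007.
cleared_class_i_ii, reviewed_class_i_ii = 11_935, 13_199  # 510(k), class I/II
cleared_class_iii, reviewed_class_iii = 228, 342          # 510(k), class III

print(f"Class I/II 510(k) clearance rate: {cleared_class_i_ii / reviewed_class_i_ii:.0%}")
print(f"Class III 510(k) clearance rate: {cleared_class_iii / reviewed_class_iii:.0%}")
```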
According to FDA, none of the more than 1,200 510(k) submissions for class I devices that FDA cleared during the same time period were for devices that were implantable; were life sustaining; or presented significant risk to the health, safety, or welfare of a patient. FDA Reviewed Submissions for Some Class III Device Types through the 510(k) Process and Others through the PMA Process In fiscal years 2003 through 2007, FDA reviewed submissions for some types of class III devices through the 510(k) process, and other types of class III devices through the PMA process. Specifically, FDA reviewed 342 submissions for new class III devices through the 510(k) process, determining 228 (67 percent) of these submissions to be SE to a predicate device. During the same time period, FDA reviewed 217 original PMA submissions and 784 supplemental PMA submissions for class III devices and approved 78 percent and 85 percent of them, respectively. (See fig. 3.) FDA Has Not Issued Regulations Requiring PMA Submissions for Some Types of Class III Devices Although Congress envisioned that class III devices would be approved through the more stringent PMA process, and the SMDA required that FDA establish a schedule for doing so, this process remains incomplete. The 228 class III submissions that FDA cleared through the 510(k) process in fiscal years 2003 through 2007 were allowed to undergo premarket review through the 510(k) process because they were for preamendment class III device types, or those substantially equivalent to them, for which FDA had not yet issued regulations either requiring PMA submissions or reclassifying them. These 228 510(k) submissions involved 24 device types (see table 4).
Of these types, 16 were included in one of the priority groups in FDA’s 1994 strategy for reclassifying or requiring PMAs for class III device types, and in particular 4 device types—accounting for 39 of the 228 submissions—were among those that FDA identified as presenting an unreasonably high risk to public health. The class III submissions FDA cleared through the 510(k) process were more likely than other 510(k) submissions to be for device types that were implantable; were life sustaining; or posed a significant risk to the health, safety, or welfare of a patient. Of the 228 510(k) submissions for class III devices that FDA cleared in fiscal years 2003 through 2007, FDA’s databases flagged 66 percent as being for device types that are implantable, life sustaining, or of significant risk (see fig. 4). This compares to no 510(k) submissions for class I devices and 25 percent of 510(k) submissions for class II devices. Four of the 24 class III device types for which FDA cleared 510(k) submissions in fiscal years 2003 through 2007 have since been reclassified by FDA as class II device types. Twenty of the 24 device types, however, may still be cleared through the 510(k) process. Further, there are other preamendment class III device types that did not happen to have any 510(k) submissions cleared in fiscal years 2003 through 2007 that are also still eligible to be cleared through the 510(k) process. FDA officials have acknowledged the importance of publishing regulations requiring PMA submissions or reclassifying preamendment class III device types. When asked for their time frame for doing so, the officials did not provide one. Rather, they responded that the agency is committed to addressing this issue as resources and priorities permit.
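The clearance rates reported for fiscal years 2003 through 2007 follow directly from the submission counts. A minimal check in Python (the function name is ours, for illustration only):

```python
# Re-derive the clearance rates from the counts reported in the text.
def clearance_rate(cleared: int, reviewed: int) -> int:
    """Percentage of reviewed submissions that were cleared, rounded to a whole percent."""
    return round(100 * cleared / reviewed)

# Class I and II devices reviewed through the 510(k) process
assert clearance_rate(11_935, 13_199) == 90
# Class III devices reviewed through the 510(k) process
assert clearance_rate(228, 342) == 67
```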
Relatively Few Class II and Class III 510(k) Submissions Had a New Intended Use or New Technological Characteristics In our review of 510(k) submission files for which FDA reached a determination of SE or NSE in fiscal years 2005 through 2007, we found that FDA determined that relatively few devices had a new intended use or new technological characteristics. Overall, we found that FDA determined about 1 percent of class II and III submissions had a new intended use and about 15 percent had new technological characteristics. For the 510(k) submissions that FDA cleared, FDA found that all of the devices had the same intended use as their predicate devices, and 86 percent also had the same technological characteristics. In contrast, of the 510(k) submissions that FDA determined to be NSE, more than half were for devices that had a new intended use or new technological characteristics. Figure 5 shows the estimated percentage of 510(k) submissions reaching each step in the review process. See appendix V for additional information on FDA’s decision-making process. All 510(k) Submissions That FDA Cleared Had the Same Intended Use and Most Had the Same Technological Characteristics as Predicate Devices All 510(k) submissions for class II and class III devices that FDA cleared in fiscal years 2005 through 2007 had the same intended use and most had the same technological characteristics as predicate devices. In all 4,815 class II and class III submissions cleared through the 510(k) process during this time period, FDA determined that the new devices had the same intended use as their predicate devices. In 86 percent of these submissions, we found that FDA determined that the new devices also had the same technological characteristics as their predicate devices. (See fig. 6.) In 14 percent of the class II and class III submissions cleared through the 510(k) process in fiscal years 2005 through 2007, FDA determined that the new device had new technological characteristics. 
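The sequence of incremental determinations described above, and detailed in appendix V, can be sketched as a simple decision function. This is an illustrative simplification of FDA's decision tree, not the agency's actual logic; the parameter names are ours:

```python
# Simplified sketch of the substantial-equivalence (SE) decision flow:
# a new intended use, or new characteristics raising new types of safety
# or effectiveness questions, yields NSE; otherwise performance data must
# demonstrate equivalence to the predicate device.
def review_510k(new_intended_use: bool,
                new_tech_characteristics: bool,
                raises_new_questions: bool,
                performance_data_show_equivalence: bool) -> str:
    if new_intended_use:
        return "NSE"          # new intended use -> not substantially equivalent
    if new_tech_characteristics and raises_new_questions:
        return "NSE"          # new types of safety or effectiveness questions
    if performance_data_show_equivalence:
        return "SE"           # cleared for marketing
    return "NSE"              # equivalence not demonstrated

# Same intended use, new characteristics that raise no new questions,
# and performance data demonstrating equivalence -> SE
assert review_510k(False, True, False, True) == "SE"
```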
For the cleared submissions with new technological characteristics, FDA determined, among other things, that either 1. the new technological characteristics could not affect safety or effectiveness—for example, FDA determined that software modifications to a defibrillator allowing physicians greater control over the device’s CPR (cardiopulmonary resuscitation) settings could not affect the safety or effectiveness of the defibrillator—or 2. the new characteristics do not raise new types of safety or effectiveness questions—for example, FDA determined that a digital electrocardiograph did not raise new types of effectiveness questions relative to the predicate device, an analog electrocardiograph. Table 5 shows the distribution of cleared submissions by class and characteristics of the determination. More Than Half of the 510(k) Submissions FDA Determined Not Substantially Equivalent Were for Devices That Had a New Intended Use or New Technological Characteristics We found that of the 248 class II and III submissions that FDA determined to be NSE in fiscal years 2005 through 2007, slightly more than half had a new intended use; had a new technological characteristic that raised new types of safety or effectiveness questions; or had a new technological characteristic that could affect safety or effectiveness and lacked performance data demonstrating equivalence to the predicate device. We also found that about one in every three 510(k) submissions FDA determined to be NSE had the same intended use and the same technological characteristics as the predicate device, but FDA determined the submissions NSE because of a lack of performance data. An additional 13 percent of submissions were determined NSE for other reasons, such as not providing adequate data early in the review or not having a predicate device (see table 6). Conclusions The 510(k) process plays a major role in FDA’s oversight of medical devices.
During fiscal years 2003 through 2007, FDA reviewed over 2,400 510(k) submissions annually and cleared about 90 percent of these submissions for the U.S. market. These included 228 cleared submissions for class III devices. In establishing device classes in 1976, Congress envisioned that all class III devices would eventually be required to undergo premarket review through the more stringent PMA process, which requires the manufacturer to provide evidence, which may include clinical data, providing reasonable assurance that the new device is safe and effective. However, certain preamendment class III device types may be reviewed through the 510(k) process until such time as FDA publishes regulations requiring them to go through the PMA process. In 1990 the SMDA directed FDA to take action on the remaining preamendment class III device types by reclassifying them to a lower class or requiring them to remain in class III and go through the PMA process, but we found that more than 14 years after FDA published its strategy and plans for doing so, a significant number of class III devices—including device types that FDA has identified as implantable; life sustaining; or posing a significant risk to the health, safety, or welfare of a patient—still enter the market through the less stringent 510(k) process. FDA has stated that eventually all class III devices will require FDA approval through the PMA process and FDA officials reported that the agency is committed to addressing this issue, but the agency has not specified time frames for doing so. Without FDA action, the remaining preamendment class III device types—including device types that FDA identified in 1994 as presenting an unreasonably high risk to public health—may enter the U.S. market through FDA’s less stringent premarket notification process. 
Recommendation for Executive Action We are recommending that the Secretary of Health and Human Services direct the FDA Commissioner to expeditiously take steps to issue regulations for each class III device type currently allowed to enter the market through the 510(k) process. These steps should include issuing regulations to (1) reclassify each device type into class I or class II, or requiring it to remain in class III, and (2) for those device types remaining in class III, require approval for marketing through the PMA process. Agency Comments We received comments on a draft of this report from HHS. (See app. VI.) The department commented that the draft report fairly and accurately describes FDA’s 510(k) program and the department agreed with our conclusions and recommendation. HHS agreed with our recommendation that FDA expeditiously take steps to reclassify or require PMAs for each class III device type currently allowed to enter the market through the 510(k) process, noting that since 1994 (when FDA announced its strategy to implement provisions of the Safe Medical Devices Act of 1990) FDA has called for PMAs or reclassified the majority of class III devices that did not require PMAs at that time. The department’s comments, however, do not specify time frames in which FDA will address the remaining class III device types allowed to enter the market via the 510(k) process, stating instead that the agency is considering its legal and procedural options for completing this task as expeditiously as possible, consistent with available resources and competing time frames.
Given that more than 3 decades have passed since Congress envisioned that all class III devices would eventually be required to undergo premarket review through the more stringent PMA process, it is imperative that FDA take immediate steps to address the remaining class III device types that may still enter the market through the less stringent 510(k) process by requiring PMAs for or reclassifying them. The department also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Scope and Methodology To review the Food and Drug Administration’s (FDA) use of the 510(k) and premarket approval (PMA) processes to review class I, II, and III device submissions in fiscal years 2003 through 2007, we used FDA’s 510(k) and PMA databases. These databases contain information on device submissions, including the name of the device, the FDA-assigned product code, the status of the submission, and any FDA decisions related to the submission and the dates of those decisions. In both cases, we obtained and analyzed data on submissions for which FDA made a review decision in fiscal years 2003 through 2007. We also used FDA’s Device Nomenclature Management System to determine other attributes of the device types covered by the device submissions. The 510(k) submissions we analyzed included traditional and abbreviated 510(k) submissions. 
We did not include special 510(k) submissions, which are requests for clearance of modifications to devices that have already been cleared through the 510(k) process (see table 7). The PMA submissions we analyzed included original PMA submissions and some supplemental PMA submissions. Specifically, we included supplemental PMA submissions that represented requests for approval for a significant change in a device: panel-track supplements, which are requests for approval for a significant change in design, performance, or use of a device for which clinical data are necessary to provide a reasonable assurance of safety and effectiveness; and 180-day (user-fee) supplements, which are requests for approval for a significant change in components, materials, design, specification, software, color additives, or labeling. We did not include other types of PMA supplements, such as real-time supplements, which are requests for approval for a minor change to a device, such as a minor change in design, sterilization, software, or labeling. To assess the reliability of these data, we interviewed FDA officials knowledgeable about these databases, performed electronic testing for accuracy and completeness, and where applicable compared our results to aggregate information from other sources, such as published FDA reports and the FDA Web site. We determined that the data were sufficiently reliable for the purposes of this report. In order to examine the extent to which FDA has determined that devices reviewed through the 510(k) process had new intended uses or new technological characteristics, we used FDA’s 510(k) database to select and review a stratified random sample of class II and all class III 510(k) submission files from fiscal years 2005 through 2007. See table 7 for the scope of our file review. All 163 class III submissions that met the inclusion criteria were included in the sample.
The 296 class II cases included in the sample constituted a random sample of the 4,900 class II submissions that met the inclusion criteria. The class II submissions included in the sample were stratified by decision, meaning that class II submissions determined not substantially equivalent (NSE) were oversampled so that the results could be generalizable to the universe of all class II submissions, to class II submissions determined NSE, or to class II submissions determined substantially equivalent (SE). The sample contained a total of 459 submissions. See tables 8 and 9 for the number of submissions by fiscal year, class, and decision. We conducted our file review in June 2008. We collected data primarily from the FDA reviewer memo, which contained information concerning the steps FDA took to reach its determination of SE or NSE. This information included the incremental decisions FDA made concerning the use and technological characteristics of the new device, and in sum, defined the path through an FDA decision tree the reviewer took to reach a determination of SE or NSE. See figures 7 and 8 for detailed and simplified versions, respectively, of FDA’s decision tree. We recorded the individual decisions made in each case, and analyzed the results with respect to the path the FDA reviewer took to reach the final determination of SE or NSE. In the 10 cases where we could not determine the steps FDA took to reach its determination during our file review, we requested additional information from FDA officials. Officials from the Office of Device Evaluation in FDA’s Center for Devices and Radiological Health reviewed the files in question and provided us with the information we requested. To assess the reliability of these data, we compared our results with information from FDA’s 510(k) database and Device Nomenclature Management System. 
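The stratified design described above can be sketched in a few lines: take all class III submissions, sample class II submissions within decision strata (oversampling NSE), and weight stratum-level results back to the population when estimating. The stratum population split and sample allocation below are hypothetical, chosen only so the totals match the 4,900 class II submissions and 296 sampled cases stated in the text:

```python
# Hypothetical stratum sizes: the 4,900 class II submissions split by decision,
# and a sample allocation that oversamples the NSE stratum (totals 296).
population   = {"class II SE": 4_700, "class II NSE": 200}
sample_sizes = {"class II SE": 216,   "class II NSE": 80}

def weighted_estimate(sample_hits: dict, sample_sizes: dict, population: dict) -> float:
    """Estimate a population proportion by weighting each stratum's
    sample proportion by that stratum's share of the population."""
    total = sum(population.values())
    return sum((sample_hits[s] / sample_sizes[s]) * population[s] / total
               for s in population)
```

Because the NSE stratum is oversampled, simply pooling the sample would overstate NSE-related characteristics; the stratum weights correct for that.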
In addition, FDA officials stated that the data in the files were accurate and reliable and provided input in the development of our data collection instrument. In addition to our data analysis, we reviewed relevant laws and regulations concerning the premarket review process. We also interviewed FDA officials from the FDA centers and offices that process device submissions (Center for Biologics Evaluation and Research, Office of Device Evaluation, and Office of In Vitro Diagnostic Device Evaluation and Safety). Finally, we interviewed representatives from professional associations representing device manufacturers (the Advanced Medical Technology Association, the ECRI Institute, the Medical Device Manufacturers Association, and the Medical Imaging and Technology Alliance) and consumer advocates (the National Research Center for Women & Families and Public Citizen). We conducted this performance audit from March 2008 to January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Third-Party Review of 510(k) Submissions The FDA Modernization Act of 1997 directed FDA to accredit third parties (called accredited persons) in the private sector to conduct the initial review of 510(k) submissions for low- to moderate-risk devices. Under FDA’s Accredited Persons Program, device manufacturers may contract with accredited organizations (third parties) to review certain 510(k) submissions for a negotiated fee. 
The third party uses the same statutory and regulatory criteria as FDA to determine substantial equivalence, documents its review and recommendation, and forwards the 510(k) submission and documentation to FDA’s Center for Devices and Radiological Health. At the center, a third-party 510(k) submission is assessed by an FDA supervisor, who may accept or change the substantial equivalence recommendation of the third party. After completing the supervisory assessment, FDA issues a letter to the 510(k) applicant via the third-party reviewer with a final determination on the 510(k) submission. During the third-party review, the FDA supervisor can request additional information from the third party and the third party can request additional information from the 510(k) applicant. FDA expanded the program to include more than 670 class I and class II device types to be eligible for 510(k) review by a third party. These include device types for diagnostic ultrasound systems, computed tomography X-ray systems, and surgical lasers. However, not all of the accredited third parties are authorized to review all device types eligible for third-party review. For example, in October 2008 FDA’s Web site listed 7 of 11 accredited third parties as authorized to review 510(k) submissions for hearing aids. Device types that are not eligible for third-party review include all class III devices; class II devices intended to be permanently implantable, life sustaining, or life supporting; and class II devices requiring clinical data to support their 510(k) clearance. During our review of FDA’s 510(k) database, we found three instances of 510(k) submissions in which class II devices that were life sustaining were cleared for market through the third-party review program during fiscal years 2003 through 2007. 
FDA officials explained that approximately five life-sustaining class II device types (hemodialysis devices) had inadvertently been added to the list of devices eligible for third-party review when the list was expanded in 2001, and that in May 2003, FDA removed the life-sustaining class II device types from the list of devices eligible for third-party review on FDA’s Web site. The FDA officials said that while the three 510(k) submissions for life-sustaining class II devices had been submitted through the third-party review program, FDA also conducted its own review of the three 510(k) submissions before they were cleared for marketing. During fiscal years 2003 through 2007, FDA reviewed and made final determinations on 1,082 third-party 510(k) submissions (see table 10). According to FDA, the number of third-party submissions increased as the result of (1) increased familiarity with the third-party review program among potential applicants, (2) the increase in the number of device types eligible for the program, and (3) fewer financial disincentives to use the third-party review program as FDA instituted device user fees. An FDA official familiar with the program stated that the third-party review program may be more attractive to device manufacturers because third-party review 510(k) submissions are processed faster than traditional 510(k) submissions. The official noted, however, that as FDA’s review of traditional 510(k) submissions becomes more efficient, the advantages of the third-party review program in terms of timeliness may diminish, which could lead to fewer third-party review 510(k) submissions. Table 11 shows the third-party review 510(k) submissions by medical specialty.
Appendix III: FDA’s Implementation of Safe Medical Devices Act Provisions The Safe Medical Devices Act of 1990 (SMDA) amended the definition of class II devices and required FDA, for each preamendment class III device type and before December 1, 1995, to (1) order manufacturers to submit information on safety and effectiveness to FDA and (2) publish proposed and final regulations to reclassify each device type into class II or class I or to require it to remain in class III. For those devices for which FDA published a regulation requiring the device to remain in class III, the SMDA further directed FDA to, as promptly as reasonably achievable but not later than 12 months after the effective date of the regulation requiring the device to remain in class III, establish a schedule for the promulgation of regulations requiring the submission of PMAs. In an April 19, 1994, memorandum from the Acting Director of the FDA Center for Devices and Radiological Health’s Office of Device Evaluation, FDA outlined its strategy for implementation of the SMDA. Specifically, FDA grouped 117 preamendment class III device types for which FDA had not yet initiated any action to require the submission of PMAs into three groups and prioritized the devices to facilitate the SMDA activities. (See table 12.) The agency’s proposed strategy established a plan for beginning to address the class III device types that were continuing to be reviewed through the 510(k) process, but did not establish completion dates for doing so. As of October 2008, FDA had reclassified 45 device types and published regulations requiring PMAs for 53 device types. Therefore, of the 117 preamendment class III device types covered by FDA’s strategy, 19 device types remain in class III and may be cleared through the 510(k) process. 
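The status figures above are a simple accounting over the 117 preamendment class III device types covered by FDA's strategy; a minimal bookkeeping check (variable names are ours):

```python
# Of the 117 preamendment class III device types covered by FDA's 1994
# strategy, 45 were reclassified and 53 now require PMAs, leaving the
# remainder still eligible for clearance through the 510(k) process.
total_types  = 117
reclassified = 45
pma_required = 53
remaining = total_types - reclassified - pma_required
assert remaining == 19
```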
Four of those 19 device types are types that FDA had placed in group 3 and designated high priority—that is, they are device types that FDA had determined to present an unreasonably high risk to public health because significant issues of safety or effectiveness were not being resolved or, to the best of FDA’s knowledge, had little probability of being resolved. Appendix IV: Additional Information on 510(k) Submissions for Class III Devices Reviewed by FDA This appendix summarizes the results from GAO analysis of FDA’s data for class III 510(k) submissions with FDA review decisions in fiscal years 2003 through 2007. The following tables show FDA’s final decisions for submissions for class III devices for each fiscal year through the 510(k) process (table 13); the primary medical specialties for submissions for class III devices cleared through the 510(k) process (table 14); and a detailed list of all device types covered by the class III devices cleared through the 510(k) process, including the status of these device types as of October 2008 (table 15). Appendix V: FDA’s 510(k) Decision-Making Process This appendix presents the additional information from GAO analysis of FDA’s 510(k) submission files for which FDA reached a determination of SE or NSE in fiscal years 2005 through 2007. The following figures show FDA’s detailed decision-making process for class II and class III submissions (fig. 9); the decision-making process for class II devices alone (fig. 10); and the decision-making process for class III devices alone (fig. 11). Appendix VI: Comments from the Department of Health and Human Services Appendix VII: GAO Contact and Staff Acknowledgments Acknowledgments In addition to the contact named above, Kim Yamane, Assistant Director; Susannah Bloch; Matt Byer; Sean DeBlieck; Linda Galib; Julian Klazkin; and Dan Ries made key contributions to this report. 
Related GAO Products Medical Devices: FDA Faces Challenges in Conducting Inspections of Foreign Manufacturing Establishments. GAO-08-780T. Washington, D.C.: May 14, 2008. Reprocessed Single-Use Medical Devices: FDA Oversight Has Increased, and Available Information Does Not Indicate That Use Presents an Elevated Health Risk. GAO-08-147. Washington, D.C.: January 31, 2008. Medical Devices: Challenges for FDA in Conducting Manufacturer Inspections. GAO-08-428T. Washington, D.C.: January 29, 2008. Medical Devices: FDA’s Approval of Four Temporomandibular Joint Implants. GAO-07-996. Washington, D.C.: September 17, 2007. Medical Devices: Status of FDA’s Program for Inspections by Accredited Organizations. GAO-07-157. Washington, D.C.: January 5, 2007. Food and Drug Administration: Limited Available Data Indicate That FDA Has Been Meeting Some Goals for Review of Medical Device Applications. GAO-05-1042. Washington, D.C.: September 30, 2005. Food and Drug Administration: Data to Measure the Timeliness of Reviews of Medical Device Applications Are Limited. GAO-04-1022. Washington, D.C.: August 30, 2004.
The Food and Drug Administration (FDA) within the Department of Health and Human Services (HHS) is responsible for oversight of medical devices sold in the United States. Regulations place devices into three classes, with class III including those with the greatest risk to patients. Unless exempt by regulation, new devices must clear FDA premarket review via either the 510(k) premarket notification process, which determines if a new device is substantially equivalent to another legally marketed device, or the more stringent premarket approval (PMA) process, which requires the manufacturer to supply evidence providing reasonable assurance that the device is safe and effective. Class III devices must generally obtain an approved PMA, but until FDA issues regulations requiring submission of PMAs, certain types of class III devices may be cleared via the 510(k) process. The FDA Amendments Act of 2007 mandated that GAO study the 510(k) process. GAO examined which premarket review process--510(k) or PMA--FDA used to review selected types of device submissions in fiscal years 2003 through 2007. GAO reviewed FDA data and regulations, and interviewed FDA officials. In fiscal years 2003 through 2007, as part of its premarket review to determine whether devices should be permitted to be marketed in the United States, FDA: (1) reviewed 13,199 submissions for class I and II devices via the 510(k) process, clearing 11,935 (90 percent) of these submissions; (2) reviewed 342 submissions for class III devices through the 510(k) process, clearing 228 (67 percent) of these submissions; and (3) reviewed 217 original and 784 supplemental PMA submissions for class III devices and approved 78 percent and 85 percent, respectively, of these submissions. 
Although Congress envisioned that class III devices would be approved through the more stringent PMA process, and the Safe Medical Devices Act of 1990 required that FDA either reclassify or establish a schedule for requiring PMAs for class III device types, this process remains incomplete. GAO found that in fiscal years 2003 through 2007 FDA cleared submissions for 24 types of class III devices through the 510(k) process. As of October 2008, 4 of these device types had been reclassified to class II, but 20 device types could still be cleared through the 510(k) process. FDA officials said that the agency is committed to issuing regulations either reclassifying or requiring PMAs for the class III devices currently allowed to receive clearance for marketing via the 510(k) process, but did not provide a time frame for doing so.
Background FAA is responsible for ensuring safe, orderly, and efficient air travel in the national airspace system. NWS supports FAA by providing aviation-related forecasts and warnings at air traffic facilities across the country. Among other support and services, NWS provides four meteorologists at each of FAA’s 21 en route centers to provide on-site aviation weather services. This arrangement is defined and funded under an interagency agreement. FAA’s Mission and Organizational Structure FAA’s primary mission is to ensure safe, orderly, and efficient air travel in the national airspace system. FAA reported that, in 2007, air traffic in the national airspace system exceeded 46 million flights and 776 million passengers. In addition, at any one time, as many as 7,000 aircraft—both civilian and military—could be aloft over the United States. In 2004, FAA’s Air Traffic Organization was formed to, among other responsibilities, improve the provision of air traffic services. More than 33,000 employees within FAA’s Air Traffic Organization support the operations that help move aircraft through the national airspace system. The agency’s ability to fulfill its mission depends on the adequacy and reliability of its air traffic control systems, as well as weather forecasts made available by NWS and automated systems. These resources reside at, or are associated with, several types of facilities: air traffic control towers, terminal radar approach control facilities, air route traffic control centers (en route centers), and the Air Traffic Control System Command Center. The number and functions of these facilities are as follows: 517 air traffic control towers manage and control the airspace within about 5 miles of an airport. They control departures and landings, as well as ground operations on airport taxiways and runways. 
170 terminal radar approach control facilities provide air traffic control services for airspace within approximately 40 miles of an airport and generally up to 10,000 feet above the airport, where en route centers’ control begins. Terminal controllers establish and maintain the sequence and separation of aircraft. 21 en route centers control planes over the United States—in transit and during approaches to some airports. Each center handles a different region of airspace. En route centers operate the computer suite that processes radar surveillance and flight planning data, reformats them for presentation purposes, and sends them to display equipment used by controllers to track aircraft. The centers control the switching of voice communications between aircraft and the center, as well as between the center and other air traffic control facilities. Three of these en route centers also control air traffic over the oceans. The Air Traffic Control System Command Center manages the flow of air traffic within the United States. This facility regulates air traffic when weather, equipment, runway closures, or other conditions place stress on the national airspace system. In these instances, traffic management specialists at the command center take action to modify traffic demands in order to keep traffic within system capacity. See figure 1 for a visual summary of the facilities that control and manage air traffic over the United States. NWS’s Mission and Organizational Structure The mission of NWS—an agency within the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA)—is to provide weather, water, and climate forecasts and warnings for the United States, its territories, and its adjacent waters and oceans to protect life and property and to enhance the national economy. In addition, NWS is the official source of aviation- and marine-related weather forecasts and warnings, as well as warnings about life-threatening weather situations. 
The coordinated activities of weather facilities throughout the United States allow NWS to deliver a broad spectrum of climate, weather, water, and space weather services in support of its mission. These facilities include 122 weather forecast offices located across the country that provide a wide variety of weather, water, and climate services for their local county warning areas, including advisories, warnings, and forecasts; 9 national prediction centers that provide nationwide computer modeling to all NWS field offices; and 21 center weather service units that are located at FAA en route centers across the nation and provide meteorological support to air traffic controllers. NWS Provides Aviation Weather Services to FAA As an official source of aviation weather forecasts and warnings, several NWS facilities provide aviation weather products and services to FAA and the aviation sector. These facilities include the Aviation Weather Center, weather forecast offices located across the country, and 21 center weather service units located at FAA en route centers across the country. Aviation Weather Center The Aviation Weather Center located in Kansas City, Missouri, issues warnings, forecasts, and analyses of hazardous weather for aviation. Staffed by 65 personnel, the center develops warnings of hazardous weather for aircraft in flight and forecasts of weather conditions for the next 2 days that could affect both domestic and international aviation. The center also produces a Collaborative Convective Forecast Product, a graphical representation of convective occurrence at 2, 4, and 6 hours. This is used by FAA to manage aviation traffic flow across the country. The Aviation Weather Center’s key products are described in table 1. 
NWS’s 122 weather forecast offices issue terminal area forecasts for approximately 625 locations every 6 hours or when conditions change. These forecasts describe the expected weather conditions significant to a given airport or terminal area and are used primarily by commercial and general aviation pilots.

Center Weather Service Units

NWS’s center weather service units are located at each of FAA’s 21 en route centers and operate 16 hours a day, 7 days a week (see fig. 2). Each center weather service unit usually consists of three meteorologists and a meteorologist-in-charge who provide strategic advice and aviation weather forecasts to FAA traffic management personnel. Under an interagency agreement, FAA currently reimburses NWS approximately $12 million annually for this support.

Center Weather Service Units: An Overview of Systems and Operations

The meteorologists at the center weather service units use a variety of systems to gather and analyze information compiled from NWS and FAA weather sensors. Key systems used to compile weather information include FAA’s Weather and Radar Processor, FAA’s Integrated Terminal Weather System, FAA’s Corridor Integrated Weather System, and a remote display of NWS’s Advanced Weather Interactive Processing System. Meteorologists at several center weather service units also use NWS’s National Center Advanced Weather Interactive Processing System. Table 2 provides a description of selected systems. NWS meteorologists at the en route centers provide several products and services to FAA staff, including meteorological impact statements, center weather advisories, periodic briefings, and on-demand consultations. These products and services are described in table 3.
In addition, center weather service unit meteorologists receive and disseminate pilot reports, provide input every 2 hours to the Aviation Weather Center’s creation of the Collaborative Convective Forecast Product, train FAA personnel on how to interpret weather information, and provide weather briefings to nearby terminal radar approach control facilities and air traffic control towers.

FAA Seeks to Improve Aviation Weather Services Provided at En Route Centers

In recent years, FAA has undertaken multiple initiatives to assess and improve the performance of the center weather service units. Studies conducted in 2003 and 2006 highlighted concerns with the lack of standardization of products and services at NWS’s center weather service units. To address these concerns, FAA sponsored studies that determined that weather data could be provided remotely using current technologies and that private sector vendors could provide these services. In 2005, the agency requested that NWS restructure its aviation weather services by consolidating its center weather service units to a smaller number of sites, reducing personnel costs, and providing products and services 24 hours a day, 7 days a week. NWS subsequently submitted a proposal for restructuring its services, but FAA declined the proposal, citing the need to refine its requirements. In December 2007, FAA issued revised requirements and asked NWS to respond with proposals defining the technical and cost implications of three operational concepts: (1) on-site services provided within the existing configuration of offices located at the 21 en route centers, (2) remote services provided by a reduced number of regional facilities, and (3) remote services provided by a single centralized facility. NWS responded with three proposals, but FAA rejected them in September 2008, noting that while elements of each proposal had merit, the proposed costs were too high.
FAA requested that NWS revise its proposal to bring costs down, while stating a preference to move toward a single center weather service unit with a back-up site. As a separate initiative, NWS initiated an improvement program for the center weather service units in April 2008. The goal of the program was to improve the consistency of the units’ products and services. This program involved standardizing the technology, collaboration, and training for all 21 center weather service units and conducting site visits to evaluate each unit. NWS reported that it has completed its efforts to standardize the service units and plans to complete its site visits by September 2009. Table 4 provides a chronology of the agencies’ assessment and improvement efforts.

Prior GAO Report Identified Concerns with Center Weather Service Units; Recommended Steps to Improve Quality Assurance

In January 2008, we reported on concerns about inconsistencies in products and quality among center weather service units. We noted that while both NWS and FAA have responsibilities for assuring and controlling the quality of aviation weather observations, neither agency monitored the accuracy and quality of the aviation weather products provided at center weather service units. We recommended that NWS and FAA develop performance measures and metrics for the products and services to be provided by center weather service units, perform annual evaluations of aviation weather services provided at en route centers, and provide feedback to the center weather service units. The Department of Commerce agreed with our recommendations, and the Department of Transportation stated that FAA planned to revise its requirements and that these would establish performance measures and evaluation procedures.

Proposal to Consolidate Center Weather Service Units Is under Consideration

NWS and FAA are considering plans to restructure the way aviation weather services are provided at en route centers.
After a 6-month delay, NWS sent FAA its latest proposal for restructuring the center weather service units in June 2009. NWS’s proposal involves consolidating 20 of the 21 existing center weather service units into two locations, with one at the Aviation Weather Center in Kansas City, Missouri, and the other at a new National Centers for Environmental Prediction office planned for the Washington, D.C., metropolitan area. The Missouri center is expected to handle the southern half of the United States, while the Washington, D.C., center is expected to handle the northern half. NWS plans for the two new units to be staffed 24 hours a day, 7 days a week, and to function as back-up sites for each other. See figure 3 for a visual summary of the proposed consolidated center weather service unit facilities. While these new units would continue to use existing forecasting systems and tools to develop products and services, NWS has also proposed new products, services, and tools. Two new products are the Collaborative Weather Impact Product and the terminal radar approach control forecast. The former is expected to expand the Aviation Weather Center’s existing Collaborative Convective Forecast Product to include convection, turbulence, icing, wind, ceiling/visibility, and precipitation type/intensity. The latter is expected to extract data from the Collaborative Weather Impact Product and include precipitation, winds, and convection for the terminal area; the display will allow the forecaster to layer this information on air traffic management information such as jet routes. In addition, NWS plans to create a Web portal to allow FAA and other users to access its advisories, forecasts, and products as well as national, regional, and local weather briefings.
To support on-demand briefings at the new center weather service units, NWS plans to use collaboration tools, such as instant messaging and online collaboration software. Given the reduced number of locations in the revised organizational structure, NWS also proposed reducing the number of personnel needed to support its operations from 84 to 50 full-time staff—a reduction of 34 positions. Specifically, the agency determined that it will require 20 staff members for each new center weather service unit; 4 staff members at the Alaska unit; 5 additional forecasters at the Aviation Weather Center to help prepare the Collaborative Weather Impact Product; and a quality assurance manager at NWS headquarters. NWS anticipates the staff reductions will be achieved through scheduled retirements, resignations, and reassignments. However, the agency has identified the transition of its existing workforce to the new centers as a high-impact risk because staff may decline to move to the new locations. NWS also proposed tentative time frames for transitioning to the new organizational structure over a 3-year period. During the first year after FAA accepts the proposal, NWS plans to develop a transition plan and conduct a 9-month demonstration of the concept in order to ensure that the new structure will not degrade its services. Agency officials estimated that initial operating capability would be achieved by the end of the second year after FAA approval and full operating capability by the end of the third year. NWS estimated the transition costs for this proposal at approximately $12.8 million, which includes approximately $3.3 million for the demonstration. In addition, NWS estimated that the annual recurring costs will be about 21 percent lower than current annual costs. For example, using 2009 prices, NWS estimated that the new structure would cost $9.7 million—about $2.6 million less than the current $12.3 million cost. 
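The staffing and cost figures in NWS’s proposal can be verified with a few lines of arithmetic. The following sketch uses only the numbers cited in this report; the category labels are paraphrases, not official NWS budget line items:

```python
# Sanity-check of the staffing and cost figures reported in NWS's proposal.

# Staffing: current 84 full-time staff, proposed allocation as reported above.
proposed_staff = {
    "two new center weather service units": 20 * 2,  # 20 staff at each new unit
    "Alaska unit": 4,
    "Aviation Weather Center forecasters": 5,        # for the Collaborative Weather Impact Product
    "NWS headquarters quality assurance manager": 1,
}
total_proposed = sum(proposed_staff.values())
assert total_proposed == 50
assert 84 - total_proposed == 34  # the reported reduction of 34 positions

# Annual recurring costs (2009 prices): proposed $9.7M vs. current $12.3M.
current_cost, proposed_cost = 12.3, 9.7
savings = current_cost - proposed_cost
print(f"savings: ${savings:.1f}M ({savings / current_cost:.0%} lower)")
# → savings: $2.6M (21% lower)
```

The tally confirms the report’s figures: the proposed positions sum to 50, the reduction is 34, and the $2.6 million in annual savings is about 21 percent of the current $12.3 million cost.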
See table 5 for the estimated costs for transitioning the centers. However, it is not clear when and if the agencies will move forward with the proposal. FAA responded to NWS in August 2009 by requesting more information regarding NWS’s proposal. One consideration that may affect the proposal involves the current interagency agreement. The most recent agreement between the two agencies, signed in December 2007, is to expire at the end of September 2009. Before it expires, the two agencies could choose to exercise an option to continue this agreement for another year, terminate the agreement, or sign a new agreement. An FAA official reported that the agency wanted to create a new agreement that includes key dates from the proposal, such as those related to the concept demonstration. This official added that such agreements typically take time to develop and coordinate between the agencies.

NWS and FAA Are Working to Establish a Baseline of Current Performance, but Are Not Assessing Key Measures

According to best practices in leading organizations, performance should be measured in order to evaluate the success or failure of programs. Performance measurement involves identifying performance goals and measures, establishing performance baselines, identifying targets for improving performance, and measuring progress against those targets. Having a clear understanding of an organization’s current performance—a baseline—is essential to determining whether new initiatives (like the proposed restructuring) result in improved or degraded products and services. In January 2008, we reported that NWS and FAA lacked performance measures and a baseline of current performance for the center weather service units and recommended that they develop performance measures. In response to this recommendation, FAA established 5 performance standards for the center weather service units.
FAA also recommended that NWS identify additional performance measures in its proposal for restructuring the center weather service units. While NWS subsequently identified 8 additional performance measures in its proposal, FAA has not yet approved these measures. All 13 performance measures are listed in table 6. NWS officials reported that they have historical data for 1 of the 13 performance measures—participation in the Collaborative Convective Forecast Product—and are working to obtain a baseline for 3 other performance measures. Specifically, in January 2009, NWS and FAA began evaluating how the center weather service units are performing and, as part of this initiative, are collecting data associated with organizational service provision, format consistency, and briefing service provision. As of June 2009, the agencies had completed evaluations of 13 service units and plan to complete evaluations for all 21 service units by September 2009. However, the agencies have not established a baseline of performance for the 9 other performance measures. NWS officials reported that they are not collecting baseline information for a variety of reasons, including that the measures have not yet been approved by FAA and that selected measures involve products that have not yet been developed. A summary of the status of efforts to establish baselines and reasons for not establishing baselines is provided in table 7. While 4 of the potential measures are tied to new products or services under the restructuring, the other 5 could be measured using current products and services. For example, accuracy and customer satisfaction are measures that could be tracked for current operations. NWS continually measures the accuracy of a range of weather products— including hurricane and tornado forecasts. Customer satisfaction measures could be determined by surveying the FAA managers who receive the aviation weather products. 
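The measure counts reported in this section are internally consistent, as a quick tally shows. This is an illustrative sketch; the category labels paraphrase the groupings described above and in tables 6 and 7:

```python
# Tally of the 13 performance measures and their baseline status as reported.
faa_standards = 5    # established by FAA after GAO's January 2008 recommendation
nws_proposed = 8     # additional measures in NWS's restructuring proposal (not yet approved)
total_measures = faa_standards + nws_proposed
assert total_measures == 13

baseline_status = {
    "historical data available": 1,  # participation in the Collaborative Convective Forecast Product
    "baseline being collected": 3,   # organizational service provision, format consistency,
                                     # and briefing service provision (ongoing site evaluations)
    "no baseline": 9,
}
assert sum(baseline_status.values()) == total_measures

# Of the 9 measures without a baseline, 4 depend on products or services that
# do not yet exist; the other 5 (e.g., accuracy, customer satisfaction) could
# be measured under current operations.
no_baseline_breakdown = {"tied to new products or services": 4,
                         "measurable under current operations": 5}
assert sum(no_baseline_breakdown.values()) == baseline_status["no baseline"]
print("all 13 measures accounted for")
# → all 13 measures accounted for
```

The point of the tally is the last split: five measures could be baselined today, which is the gap the report's recommendations target.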
It is important to obtain an understanding of the current level of performance in these measures before beginning any efforts to restructure aviation weather services. Without an understanding of the current level of performance, NWS and FAA will not be able to measure the success or failure of any changes they make to the center weather service unit operations. As a result, any changes to the current structure could degrade aviation operations and safety—and the agencies may not know it.

NWS and FAA Face Challenges in Efforts to Modify the Current Aviation Weather Structure

NWS and FAA face challenges in their efforts to modify the current aviation weather structure. These include challenges associated with (1) interagency collaboration, (2) defining requirements, and (3) aligning any changes with the Next Generation Air Transportation System (NextGen)—a long-term initiative to increase the efficiency of the national airspace system. Specifically, the two agencies have had difficulties in interagency collaboration and requirements development, leading to an inability to reach agreement on a way forward. In addition, the restructuring proposals have not been aligned with the national strategic vision for the future air transportation system. Looking forward, if a proposal is accepted, the agencies could face three additional challenges in implementing it: (1) developing a feasible schedule that includes adequate time for stakeholder involvement, (2) undertaking a comprehensive demonstration to ensure no services are degraded, and (3) effectively reconfiguring the infrastructure and technologies to the new structure. Unless and until these challenges are addressed, the proposed restructuring of aviation weather services at en route centers poses new risks and has little chance of success.

Interagency Collaboration

To date, FAA and NWS have encountered challenges in interagency collaboration.
We have previously reported on key practices that can help enhance and sustain interagency collaboration. The practices generally consist of two or more agencies defining a common outcome, establishing joint strategies to achieve the outcome, agreeing upon agency roles and responsibilities, establishing compatible policies and procedures to operate across agency boundaries, and developing mechanisms to monitor, evaluate, and report the results of collaborative efforts. While NWS and FAA have established policies and procedures for operating across agencies through the interagency agreement and have initiated efforts to establish a baseline of performance for selected measures through their ongoing site evaluations, the agencies have not defined a common outcome, established joint strategies to achieve the outcome, or agreed upon agency responsibilities. Instead, the agencies have demonstrated an inability to work together to resolve issues and to accomplish meaningful change. Specifically, since 2005, FAA has requested that NWS restructure its aviation weather services three times, and then rejected NWS’s proposals twice. Further, after requesting extensions twice, NWS provided its proposal to FAA in June 2009. As a result, it is now almost 4 years since FAA first initiated efforts to improve NWS aviation weather services and the agencies have not yet agreed on what needs to be changed and how it will be changed. Table 8 lists key events. Until the agencies agree on a common outcome, establish joint strategies to achieve the outcome, and agree on respective agency responsibilities, they are unlikely to move forward in efforts to restructure weather services. Without sound interagency collaboration, both FAA and NWS will continue to spend time and resources proposing and rejecting options rather than implementing solutions. 
Defining Requirements

The two agencies’ difficulties in determining how to proceed with their restructuring plans are due in part to a lack of stability in FAA’s requirements for center weather service units. According to best practices of leading organizations, requirements describe the functionality needed to meet user needs and perform as intended in the operational environment. A disciplined process for developing and managing requirements can help reduce the risks associated with developing or acquiring a system or product. FAA released its revised requirements in December 2007, and NWS subsequently provided proposals to meet these requirements. However, FAA rejected all three of NWS’s proposals in September 2008 on the basis that the costs of the proposals were too high, even though cost was not specified in FAA’s requirements. NWS’s latest proposal is based on FAA’s December 2007 requirements as well as detailed discussions held between the two agencies in October 2008. However, FAA has not revised its requirements to reflect the guidance provided to NWS in those discussions, including reported guidance on handling the Alaska center and moving to the two-center approach. Without formal requirements developed prior to the development of the new products and services, FAA runs the risk of procuring products and services that do not fully meet its users’ needs or perform as intended. In addition, NWS risks continued investments in trying to create a product for FAA without clear information on what the agency wants.

Alignment with the Next Generation Air Transportation System

Neither FAA nor NWS has ensured that the restructuring of the center weather service units fits with the national vision for NextGen—a long-term initiative to transition FAA from the current radar-based system to an aircraft-centered, satellite-based system.
Our prior work on enterprise architectures shows that connecting strategic planning with program and system solutions can increase the chances that an organization’s operational and information technology environments will be configured to optimize mission performance. Our experience with federal agencies has shown that investing in information technology without defining these investments in the context of a larger, strategic vision often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. The Joint Planning and Development Office is responsible for planning and coordinating NextGen. As part of this program, the Joint Planning and Development Office envisions restructuring air traffic facilities, including en route centers, across the country as well as transitioning to new technologies. However, NWS and FAA efforts to restructure the center weather service units have not been aligned with the Joint Planning and Development Office’s vision for transforming air traffic control under the NextGen program. Specifically, the chair of NextGen’s weather group stated that Joint Planning and Development Office officials have not evaluated NWS’s and FAA’s plans for restructuring the center weather service units, and have not been asked to do so. Other groups within FAA are responsible for aligning the agency’s enterprise architecture with the NextGen vision through annual roadmaps that largely define near- and mid-term initiatives. However, recent roadmaps for aviation weather do not include any discussion of plans to restructure the center weather service units or the potential impact that such a change could have on aviation weather systems. 
Additionally, in its proposal, NWS stated that it followed FAA’s guidance to avoid tightly linking the transition schedule to NextGen’s expected initial operating capability in 2013, but recommended doing so since the specific role of the center weather service units in NextGen operations is unknown. Until the agencies ensure that changes to the center weather service units fit within the strategic-level and implementation plans for NextGen, any changes to the current structure could result in wasted efforts and resources.

Schedule Development

Looking forward, if a proposal is accepted, both agencies could also face challenges in developing a feasible schedule that includes adequate time for stakeholder involvement. NWS estimated a 3-year transition time frame from current operations to the two-center approach. FAA officials commented that they would like to have the two-center approach in place by 2012. However, NWS may have difficulty in meeting the transition time frames because activities that need to be conducted serially are planned concurrently within the 3-year schedule. For example, NWS may need to negotiate with its union before implementing changes that affect working conditions—such as moving operations from an en route center to a remote location. NWS officials acknowledge the risk that these negotiations can be prolonged and sometimes take years to complete. If the proposal is accepted, it will be important for NWS to identify activities that must be conducted before others in order to build a feasible schedule.

Demonstrating No Degradation of Service

If a proposal is accepted, both agencies could face challenges in demonstrating that existing services will not be degraded during the restructuring. In its proposal, NWS identified preliminary plans to demonstrate the new operational concept before implementing it in order to ensure there is no degradation of service.
Key steps included establishing a detailed demonstration plan, conducting risk mitigation activities, and implementing a demonstration that is to last at least 9 months. NWS also proposed that the demonstration include an independent evaluation by a team of government and industry representatives, both before the demonstration, to determine if it is adequate to validate the new concept of operations, and after, to determine its success. In addition, throughout the 9-month demonstration, NWS plans to have the independent team periodically provide feedback, recommendations, and corrective actions. However, NWS has not yet defined all of the performance measures it will use to determine whether the prototype is successful. In its proposal, NWS stated that the agencies will begin to document performance metrics and develop and refine evaluation criteria during the demonstration. If NWS waits until the demonstration to define evaluation criteria, it may not have the baseline metrics needed for comparison with the demonstration results. Without baseline metrics, NWS may be unable to determine whether the demonstration has degraded service.

Technology Transition

Both agencies could face challenges in effectively transitioning the infrastructure and technologies to the new consolidated structure, if a proposal is accepted. In its proposal, NWS planned to move its operations from 20 en route centers to two sites within 3 years. However, to do so, the agencies will need to modify their aviation weather systems and develop a communications infrastructure. Specifically, NWS and FAA will need to modify or acquire systems to allow both current and new products to provide an expanded view of the country. Additionally, NWS will need to develop continuous two-way communications in lieu of having staff on site at each en route center. NWS has recognized the infrastructure as a challenge and plans to mitigate the risk through continuous dialogue with FAA.
However, if interagency collaboration does not improve, attempting to coordinate the systems and technology of the two agencies may prove difficult and further delay the schedule.

Conclusions

For several years, FAA and NWS have explored ways to improve the operations of the center weather service units by consolidating operations and providing remote services. Meanwhile, the two agencies have to make a decision on the interagency agreement, which will expire at the end of September 2009. If FAA and NWS are to create a new interagency agreement that incorporates key dates within the proposal, decisions on the proposal will have to be made quickly. An important component of any effort to improve operations is a solid understanding of current performance. However, FAA and NWS are not working to identify the current level of performance in five measures that are applicable to current operations. Until the agencies have an understanding of the current level of performance, they will not be able to measure the success or failure of any changes to the center weather service unit operations. As a result, any changes to the current structure could degrade aviation operations and safety—and the agencies may not know it. If the agencies move forward with plans to restructure aviation weather services, they face significant challenges including a poor record of interagency collaboration, undocumented requirements, and a lack of assurance that this plan fits in the broader vision of NextGen. Moreover, efforts to implement the restructuring will require a feasible schedule, a comprehensive demonstration, and a solid plan for technology transition. Until these challenges are addressed, the proposed restructuring of aviation weather services at en route centers has little chance of success.
Recommendations for Executive Action

To improve the aviation weather products and services provided at FAA’s en route centers, we are making six recommendations to the Secretaries of Commerce and Transportation. Specifically, we are recommending that the Secretaries direct the NWS and FAA administrators, respectively, to:

immediately identify the current level of performance for the five potential measures that could be identified under current operations (forecast accuracy, customer satisfaction, service delivery conformity, timeliness of on-demand services, and training completion) so that there will be a baseline from which to measure the impact of any proposed operational changes;

establish and approve a set of performance measures for the center weather service units;

improve interagency collaboration by defining a common outcome, establishing joint strategies to achieve the outcome, and agreeing upon each agency’s responsibilities;

establish and finalize requirements for aviation weather services at en route centers;

ensure that any proposed organizational changes are aligned with NextGen initiatives by seeking a review by the Joint Planning and Development Office, which is responsible for developing the NextGen vision; and

before moving forward with any proposed operational changes, address key implementation challenges by developing a feasible schedule that includes adequate time for stakeholder involvement, undertaking a comprehensive demonstration to ensure no services are degraded, and effectively transitioning the infrastructure and technologies to the new consolidated structure.

Agency Comments and Our Evaluation

The Department of Commerce provided written comments on a draft of this report, signed by the Secretary of Commerce (see app. II). In the department’s letter, NOAA agreed with our recommendations and provided additional information on steps the agency has taken or plans to take to address the recommendations.
For example, the agency reported that it is working with FAA to refine its baseline metrics and plans to have baseline metrics in place by the end of fiscal year 2009. NWS also plans to establish a Quality Assurance Manager to work with FAA to develop additional measures and metrics as appropriate. In addition, NOAA reported that since submitting its proposal in May 2008 and receiving a summary of our findings in June 2009, NWS and FAA have made progress in working together. Specifically, NWS has assigned a liaison to the FAA Air Traffic Control System Command Center, met with officials from the Joint Planning and Development Office to discuss the linkage of plans for the center weather service units and NextGen, and held discussions to strengthen the NWS and FAA partnership. The Department of Transportation’s Deputy Director of Audit Relations provided comments on a draft of this report via e-mail. In those comments, she noted that the department agreed to consider our recommendations. In addition, she noted a slight concern involving our discussion of the alignment of plans to restructure the center weather service units with NextGen. Specifically, the department noted that current NextGen plans (1) include modifications to the weather systems used by center weather service units and (2) ensure the delivery of the functions currently provided by center weather service units. However, these statements do not alter the agencies’ need to align any restructuring plans with the NextGen initiative—and that has not occurred. NextGen roadmaps do not include any discussion of plans to restructure the center weather service units or the potential impact that such a change could have on aviation weather systems. The importance of this alignment is underscored by the NWS Director’s recommendation to FAA to provide linkage between restructuring plans and NextGen plans since the specific role of the center weather service units during NextGen operations is not yet known. 
Both departments also provided technical comments that we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Commerce, the Secretary of Transportation, the Director of the Office of Management and Budget, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

The objectives of our review were to (1) determine the status and plans of efforts to restructure the center weather service units, (2) evaluate efforts to establish a baseline of the current performance provided by center weather service units so that the Federal Aviation Administration (FAA) and National Weather Service (NWS) can ensure that any operational changes do not degrade aviation weather services, and (3) evaluate challenges to restructuring the center weather service units. To determine the status of NWS’s plans for restructuring the center weather service units, we reviewed the current interagency agreement, FAA’s proposed requirements, and NWS’s draft and final proposals for addressing FAA’s requirements. We analyzed NWS’s draft transition schedules, cost proposals, and evaluation plans. We also interviewed NWS and FAA officials to obtain clarifications on these plans.
To evaluate the agencies’ efforts to establish a baseline of the current performance provided by center weather service units, we reviewed documentation including FAA’s performance standards, the current interagency agreement, NWS’s restructuring proposals and Quality Assurance Surveillance Plan, and the agencies’ plans for evaluating the centers. We compared the agencies’ plans for creating a baseline of current performance with best practices for performance management by the Department of the Navy and General Services Administration. We also interviewed NWS and FAA officials involved in establishing a baseline of current performance provided by center weather service units. To evaluate challenges to restructuring the center weather service units, we reviewed agency documentation, including FAA’s requirements document and NWS’s proposals to restructure the center weather service units. We also reviewed planning documents for the Next Generation Air Transportation System. We compared these documents with best practices for system development and requirements management from the Capability Maturity Model® Integration for Development, and with GAO’s best practices in interagency collaboration and architecture planning. In addition, we interviewed NWS, FAA, and Joint Planning and Development Office officials regarding challenges to restructuring the center weather service units. We performed our work at FAA and NWS headquarters offices and at FAA’s Air Traffic Control System Command Center in the Washington, D.C., metropolitan area. We conducted this performance audit from August 2008 to September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Commerce

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact person named above, Colleen Phillips, Assistant Director; Gerard Aflague; Kate Agatone; Neil Doherty; Rebecca Eyler; and Jessica Waselkow made key contributions to this report.
The National Weather Service's (NWS) weather products are a vital component of the Federal Aviation Administration's (FAA) air traffic control system. In addition to providing aviation weather products developed at its own facilities, NWS also provides on-site staff at each of FAA's en route centers--the facilities that control high-altitude flight outside the airport tower and terminal areas. Over the last few years, FAA and NWS have been exploring options for enhancing the efficiency of the aviation weather services provided at en route centers. GAO agreed to (1) determine the status and plans of efforts to restructure the center weather service units, (2) evaluate efforts to establish a baseline of the current performance provided by these units, and (3) evaluate challenges to restructuring them. To do so, GAO evaluated agency plans for the restructuring and for establishing performance measures. GAO also compared agency efforts to leading practices and interviewed agency officials. NWS and FAA are considering plans to restructure the way aviation weather services are provided at en route centers, but it is not yet clear whether and how these changes will be implemented. In 2005, FAA requested that NWS restructure its services by consolidating operations to a smaller number of sites, reducing personnel costs, and providing services 24 hours a day, 7 days a week. NWS developed two successive proposals, both of which were rejected by FAA--most recently because the costs were too high. FAA subsequently requested that NWS develop another proposal by late December 2008. In response, NWS developed a third proposal that involves consolidating 20 of 21 existing center weather service units into two locations. NWS sent this proposal to FAA in early June 2009. FAA responded to NWS in August 2009 by requesting more information regarding NWS's proposal. 
In response to GAO's prior concerns that NWS and FAA lacked performance measures and a baseline of current performance, the agencies have agreed on five measures and NWS has proposed eight others. In addition, the agencies initiated efforts to establish a performance baseline for 4 of 13 potential performance measures. However, the agencies have not established baseline performance for the other 9 measures. NWS officials stated they are not collecting baseline information on the 9 measures for a variety of reasons, including that some of the measures have not yet been approved by FAA, and that selected measures involve products that have not yet been developed. While 4 of the 9 measures are tied to new products or services that are to be developed if NWS's latest restructuring proposal is accepted, the other 5 could be measured in the current operational environment. For example, both accuracy and customer satisfaction measures are applicable to current operations. It is important to obtain an understanding of the current level of performance in these measures before beginning any efforts to restructure aviation weather services. Without an understanding of the current level of performance, NWS and FAA may not be able to measure the success or failure of changes they make to the center weather service unit operations. As a result, changes to the current structure could degrade aviation operations and safety--and the agencies may not know it. NWS and FAA face challenges in their efforts to improve the current aviation weather structure. These include challenges associated with (1) interagency collaboration, (2) defining FAA's requirements, and (3) aligning any changes with the Next Generation Air Transportation System--a long-term initiative to increase the efficiency of the national airspace system. 
If the restructuring proposal is accepted, the agencies face three additional challenges in implementing it: (1) developing a feasible schedule that includes adequate time for stakeholder involvement, (2) undertaking a comprehensive demonstration to ensure no services are degraded, and (3) effectively reconfiguring the infrastructure and technologies. Unless and until these challenges are addressed, the proposed restructuring of aviation weather services at en route centers poses new risks and has little chance of success.
Background

Congress, GAO, the Department of Commerce Inspector General, and even the Bureau itself have all noted that the 2000 Census was marked by poor planning, which unnecessarily added to the cost, risk, and controversy of the national head count. In January 2003, we named the 2010 Census a major performance and accountability challenge because of our growing concern over the numerous obstacles to a cost-effective enumeration as well as its escalating price tag. More recently, we reported that while the Bureau’s preparations for the 2010 Census appeared to be further along than at a similar point during the planning cycle for the 2000 Census, considerable risks and uncertainties remained. Thus, it is imperative that the Bureau adequately test the various components of its design for the 2010 Census. A rigorous testing program provides at least four major benefits. First, testing allows the Bureau to refine procedures aimed at addressing problems encountered in past censuses. During the 2000 Census, for example, group quarters were sometimes counted more than once or counted in the wrong location; the wording of the race and ethnicity question confused some respondents, which in some cases resulted in lower quality data; and following up with nonrespondents proved to be costly and labor-intensive. A second benefit is that sound testing can assess the feasibility of new procedures and technologies, such as HHCs (see fig. 1), that have never before been used in a decennial census. Third, a rigorous testing program helps instill a comfort level among members of Congress and other stakeholders that the Bureau (1) has chosen the optimal design given various trade-offs and constraints and (2) has identified and addressed potential risks and will be able to successfully execute its plan. Such confidence building, developed through regular updates and open lines of communication, is essential for continuing congressional support and funding.
And finally, proper testing early in the decade will help the Bureau to conduct a dress rehearsal in 2008 that fully assesses all aspects of the census design under realistic conditions. Because of various late requirement changes, certain procedures that were added after the 1998 dress rehearsal for the 2000 Census were not properly tested.

Scope and Methodology

As agreed with your offices, our objectives for this report were to (1) assess the soundness of the Bureau’s design for the 2004 census test and whether the Bureau implemented the test consistent with its plans, (2) review the quality of the Bureau’s IT security practices, and (3) identify initial lessons learned from conducting the test and their implications for the 2010 Census. To assess the soundness of the design, we reviewed pertinent documents that described the Bureau's test and evaluation plans. We systematically rated the Bureau’s approach using a checklist of design elements that, based on our review of program evaluation literature, are relevant to a sound study plan. For example, we reviewed the Bureau's approach to determine, among other things, (1) how clearly the Bureau presented research objectives, (2) whether research questions matched the research objectives, and (3) the appropriateness of the data collection strategy for reaching the intended sample population. As part of our assessment of the Bureau’s test design, we also reviewed evaluations of the prior decennial census to determine the degree to which the new operations being tested addressed problematic aspects of the 2000 Census. However, we did not assess the Bureau’s criteria in selecting its objectives for the 2004 census test. To determine if the Bureau implemented the test consistent with its plans, we made multiple site visits to local census offices in Thomasville, Georgia; and Queens Borough, New York.
During these visits, we interviewed local census office managers and staff, observed various data collection activities, and attended weeklong enumerator training. We observed a total of 20 enumerators as they completed their daily nonresponse follow-up assignments—half of these were in southern Georgia, in the counties of Thomas, Colquitt, and Tift, and half were in Queens (see fig. 2 for maps of the test site areas). The results of these observations are not necessarily representative of the larger universe of enumerators. To evaluate the quality of the Bureau’s IT security practices, we assessed risk management documentation associated with IT systems and major applications for the 2004 census test. We based our determination on applicable legal requirements, Bureau policy, and leading practices described in our executive guide for information security management. We also interviewed key Bureau officials associated with computer security. To identify lessons learned from the 2004 census test, we met with officials from the Bureau’s Decennial Management Division regarding overall test plans and with officials from its Technologies Management Office about using HHCs. Bureau officials and census workers from both test locations also provided suggestions on improving census operations. We requested comments on a draft of this report from the Secretary of Commerce. On December 20, 2004, the Under Secretary for Economic Affairs, Department of Commerce, forwarded written comments from the Bureau (see app. I). We address these comments in the “Agency Comments and Our Evaluation” section at the end of this report.

The Census Test Was Generally Sound, but Refinements Could Produce Better Cost and Performance Data

The Bureau designed a sound census test and generally implemented it as planned.
However, in looking ahead, the Bureau’s planning and investment decisions could benefit from analyzing (1) the degree to which HHCs contributed to the Bureau’s cost containment goal and (2) the results of the targeted second mailing, an operation designed to increase participation by sending a follow-up questionnaire to nonresponding households. Future tests could also be more informative if the Bureau developed quantifiable productivity and other performance requirements for the HHCs and then used the 2006 test to determine whether the devices are capable of meeting those requirements. Collectively, these refinements could provide Bureau officials with better information to guide IT and other design decisions, as well as to refine future census tests.

The Bureau Developed a Sound Test Design

The design of the 2004 census test contained many components of a sound study (see table 1). For example, the Bureau identified test objectives, designed related research questions, and described a data collection strategy appropriate for a field test. The Bureau also developed evaluation plans for each of the test’s 11 research questions, and explained how stakeholders were involved with the design, as well as how lessons learned from past studies were incorporated.

Additional Analysis Would Provide Better Data on the Impact of Key Test Components

Although the Bureau plans to evaluate various aspects of the 2004 test, it does not currently plan to assess the impact that the HHCs and targeted second mailing had on cost savings and productivity. According to the Bureau, the census test was focused more on determining the feasibility of using the HHCs and less on the devices’ ability to save money.
Likewise, the Bureau said it is not assessing the impact of the targeted second mailing because the operation is not one of its four test objectives for improving (1) field data collection using the HHC, (2) the coverage of undercounted groups, (3) questions about race and ethnicity, and (4) methods for defining special places and group quarters. These decisions might be shortsighted, however, in that the Bureau included the HHCs and targeted second mailing in the 2010 Census design, in part, to reduce staff, improve productivity, and control costs. For example, Bureau studies have shown that sending out replacement questionnaires could yield a gain in overall response of 7 to 10 percent from households that do not respond to the initial census mailing, and thus generate significant cost savings by eliminating the need for census workers to obtain those responses via personal visits. Thus, information on the degree to which the HHCs and second mailing contribute to these key goals could help inform future budget estimates and investment and design decisions, as well as refine future census tests. Moreover, the feasibility of a targeted second mailing is an open question, as the Bureau has never before included this operation as part of a decennial census. Although a second mailing was part of the original design for the 2000 Census, the Bureau abandoned it because it was found to be logistically unworkable. A Bureau official said that the second mailing was included in the 2004 test only to facilitate the enumeration process, and it would be better tested in a larger-scale operation such as the 2008 dress rehearsal. However, we believe that it would be more prudent to assess the second mailing earlier in the census cycle, such as during the 2006 test, so that its basic feasibility could be assessed, any refinements could be evaluated in subsequent tests, and the impact on savings could be estimated more accurately.
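A rough sense of what that 7 to 10 percent response gain could be worth follows from pairing it with the report's separate estimate, cited in the nonresponse follow-up discussion, that each percentage point of workload represents at least $34 million in direct salary, benefits, and travel costs. The sketch below is purely illustrative arithmetic, not a Bureau projection; the function name is our own.

```python
# Illustrative bound on savings from a targeted second mailing,
# pairing the report's 7-10 percent response-gain estimate with its
# at-least-$34-million-per-percentage-point workload cost estimate.
COST_PER_PERCENTAGE_POINT = 34_000_000  # dollars, "at least"

def followup_savings(response_gain_pct: float) -> float:
    """Dollars saved if `response_gain_pct` points of households respond
    by mail instead of requiring an enumerator visit."""
    return response_gain_pct * COST_PER_PERCENTAGE_POINT

low = followup_savings(7)    # $238 million
high = followup_savings(10)  # $340 million
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```

Even as a lower bound, a range of this size suggests why assessing the operation's feasibility earlier in the cycle could pay off.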
Future Tests Could Be Improved

While the design of the 2004 test was generally sound, refinements could strengthen the next field test in 2006. Opportunities for improvement exist in at least two areas: ensuring that (1) the HHCs can meet the demanding requirements of field data collection and (2) management of the local census offices mirrors an actual enumeration as much as possible. With respect to the HHCs, because they replace the paper version of the nonresponse follow-up questionnaire, the devices must function effectively. Further, this test was the first time the Bureau used the HHCs under census-like conditions, so their functionality in an operational environment was unknown. Bureau officials have acknowledged that for the 2004 test they had no predefined indicators of success or failure, other than that the test would be halted in the event of a complete breakdown. This is a very low standard. Now that the Bureau has demonstrated the basic functionality of the computers, it should next focus on determining the specific performance requirements for the HHCs and assess whether the devices are capable of meeting them. For example, the Bureau needs productivity benchmarks for the number of interviews per hour and per day that are expected per census worker. Durability measures, such as how many devices were repaired or replaced, should be considered as well. Assessing whether the HHCs can meet the requirements of nonresponse follow-up will help inform future design and investment decisions about whether to include the devices in the 2010 design. Ensuring that key positions in the local census offices are filled from the same labor pool as they would be in an actual decennial census could also enhance future census tests.
Such was not the case during the 2004 test when, according to the Bureau, because of difficulties finding qualified applicants, it used an experienced career census employee to manage the overall day-to-day operations of the local census office at the Queens test site. Another career employee occupied the office’s regional technician slot, whose responsibilities included providing technical and administrative guidance to the local census office manager. In the actual census, the Bureau would fill these and other positions with temporary employees recruited from local labor markets. However, because the Bureau staffed these positions with individuals already familiar with census operations and who had ties to personnel at the Bureau’s headquarters, the Queens test may not have been realistic and the test results could be somewhat skewed.

The Bureau Needs to Implement Better IT Security Practices

The Bureau operated a number of IT systems in order to transmit, manage, and process data for the test. The equipment was located at various places, including the Bureau’s headquarters in Suitland, Maryland; its National Processing Center in Jeffersonville, Indiana; a computer facility in Bowie, Maryland; and the New York and Georgia test sites. Under Title 13 of the U.S. Code, the Bureau must protect from disclosure the data it collects about individuals and establishments. Thus, the Bureau’s IT network must support the test’s telecommunications and data processing requirements and safeguard the confidentiality and integrity of respondents’ information. The Federal Information Security Management Act of 2002 (FISMA) requires each agency to develop, document, and implement an agency-wide information security program for the IT systems that support its operations.
Although the Bureau took a number of steps to implement IT security over the systems used for the test, based on available information, the Bureau did not meet several of FISMA’s key requirements. As a result, the Bureau could not ensure that the systems supporting the test were properly protected against intrusion or unauthorized disclosure of sensitive information. For example:

IT inventory was not complete. FISMA requires an inventory of major information systems and interfaces. The Bureau did not have a complete inventory that showed all applications and general support IT systems associated with the test. Without such information, the Bureau could not ensure that security was effectively implemented for all of its systems used in the test, including proper risk assessments, adequate security plans, and effectively designed security controls.

Vulnerability testing was not sufficiently documented. There was not sufficient evidence that the Bureau assessed all of the devices used in the test for vulnerabilities, or that it corrected previously identified problems. FISMA requires that agencies test and evaluate the effectiveness of information security policies, procedures, and practices for each system at least annually and that agencies have a process for remediating any identified security weaknesses. Since the Bureau could not provide us with a complete inventory of all network components used in the test, we could not determine if the Bureau’s tests and evaluations were complete. Moreover, there was not always evidence about whether the Bureau had corrected past problems or documented reasons for not correcting them. As a result, the Bureau did not have adequate assurance that the security of systems used in the 2004 census test was adequately tested and evaluated or that identified weaknesses were corrected on a timely basis.

Assessments were not consistent. FISMA requires agencies to assess the risks that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems. Although the Bureau performed risk assessments for some of the IT components used in the 2004 census test, the documentation was not consistent. For example, documentation of information sensitivity risks (high, medium, and low) for confidentiality, integrity, and availability of information was not consistent and did not always follow Bureau policy. In addition, documents showed different numbers of file servers, firewalls, and even different names of devices. Without complete and consistent risk assessment documentation, the Bureau had limited assurance that it properly understood the security risks associated with the test.

The Bureau did not always follow its own risk policies. FISMA requires the implementation of policies and procedures to prevent and/or mitigate security risks. Although Bureau policies allowed for the waiver of security policies, if appropriate, we noted that such policies were not always followed. For example, a waiver for the test of certain password policies was not properly documented, and other system documents were not properly updated to reflect the waiver. As a result, the risk assessment for the 2004 census test did not properly identify the related risks and did not identify any compensating controls to reduce the risk to an acceptable level.

As the Bureau plans future tests and the census itself, it will be important for it to strengthen its IT security risk management practices, ensuring they fully adhere to FISMA requirements and its own IT security policies.

Test Reveals Technical, Training, and Other Challenges in Need of Prompt Resolution

The 2004 test suggests that while certain census initiatives have potential, formidable challenges remain.
For example, the HHCs show promise in that enumerators were successful in using them to collect data from nonrespondents and remove late mail returns. Still, they were not street ready, as they experienced transmission and memory overload problems. Likewise, automated maps were difficult to use, certain questionnaire items confused respondents, and enumerators did not always follow interview protocols. These problems shed light on issues in need of the Bureau’s attention as it develops solutions and incorporates refinements for additional testing in the years ahead.

HHCs Were Effective for Conducting Interviews and Removing Late Mail Returns

The Bureau purchased 1,212 HHCs for the test at a total cost of about $1.5 million. The devices were sent directly to the two test sites packaged in kits that included a battery, AC adaptor, and modem card for transmitting data via the telephone. The HHCs were also equipped with a Global Positioning System (GPS), a satellite-based navigational system to help enumerators locate street addresses. The Bureau anticipates the HHCs will allow it to eliminate the millions of paper questionnaires and maps that enumerators need when following up with nonrespondents, thereby improving their efficiency and reducing overall costs. Because the Bureau had never used HHCs in the decennial census, an important goal of the test was to see whether enumerators could use them for interviewing nonrespondents (see fig. 3). Most workers we observed had little trouble using the device to complete the interviews. In fact, most said they were pleased with the HHC’s overall functionality, durability, screen clarity, and the ability to toggle between the questionnaire and taking GPS coordinates. Another important function of the HHC was removing late mail returns from each enumerator’s assignment area(s).
Between the Georgia and Queens test sites, over 7,000 late mail returns were removed, reducing the total nonresponse follow-up workload by nearly 6 percent. The ability to remove late mail returns from the Bureau’s nonresponse follow-up workload could help save money in that it could eliminate the need for enumerators to make expensive follow-up visits to households that return their questionnaires after the mail-back deadline. Had the Bureau possessed this capability during the 2000 Census, it could have eliminated the need to visit nearly 773,000 late-responding households and saved an estimated $22 million (based on our estimate that a 1-percentage-point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the price tag of nonresponse follow-up). Because of the Bureau’s experience in 2000, in our 2002 report on best practices for more cost-effective nonresponse follow-up, we recommended, and the Bureau agreed, that it should develop options that could purge late mail returns from its nonresponse follow-up workload.

Technical and Training Difficulties Caused HHC Transmission Problems

Each day, enumerators were to transmit completed nonresponse follow-up cases to headquarters and receive assignments, software uploads, or both via a telephone modem (see fig. 4 for a flowchart describing the file transmission process). However, the majority of workers we interviewed had problems doing so, in large part because of technical reasons or because the Bureau’s training did not adequately prepare them for the complexity of the transmission procedure, which was a multistep process involving the connection of a battery pack, cables, and other components. As reliable transmissions are crucial to the success of nonresponse follow-up, it will be important for the Bureau to resolve these issues so that the HHCs can be reevaluated in 2006.
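The $22 million estimate above is consistent with the $34-million-per-percentage-point figure once a total mail-out universe is assumed. A minimal sketch of that arithmetic follows; the 120 million household total is an illustrative assumption on our part, not a figure from this report.

```python
# Reconstructing the report's ~$22 million savings figure from its
# $34-million-per-percentage-point workload estimate. The mail-out
# universe size is an assumption chosen for illustration.
COST_PER_PERCENTAGE_POINT = 34_000_000   # dollars per 1 point of workload
MAILOUT_HOUSEHOLDS = 120_000_000         # assumed 2000 Census mail-out universe
LATE_RETURNS = 773_000                   # late-responding households, 2000 Census

points_removed = LATE_RETURNS / MAILOUT_HOUSEHOLDS * 100  # fraction of workload
savings = points_removed * COST_PER_PERCENTAGE_POINT
print(f"{points_removed:.2f} points -> ${savings / 1e6:.0f}M")
```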
Difficulties began during training when the first transmission was supposed to occur and continued through the remainder of the test. During that first transmission, the Bureau needed to upload a number of software upgrades along with each census worker’s first assignment. Many of these transmissions failed because of the volume of data involved. Thus, without cases, the trainees could not complete an important section of on-the-job training. The Bureau acknowledged that these initial problems could have been avoided if the final version of software had been installed on the devices prior to their distribution at training. Transmission problems persisted throughout nonresponse follow-up. According to the Bureau, during the first 2 weeks of this operation, successful data transmission occurred 80 percent of the time once a connection was made. However, a number of enumerators never even established a connection because of bad phone lines, incorrect passwords, and improper setup of their modems. Other transmission problems were due to the local telecommunication infrastructure at both test sites. For example, in Georgia, older phone lines could not always handle transmissions, while in Queens, apartment intercoms that used phone lines sometimes interrupted connections. Further, while the transmission rate ultimately increased to 95 percent (roughly the maximum allowed by the technology), that level is still short of the performance level needed for 2010. During the 2000 Census, a 95 percent success rate would have resulted in the failure to transmit around 30,000 completed questionnaires each day. During the test, the Bureau also had to contend with census workers who were “living off the grid”; that is, they only used cellular phones and lacked landlines to transmit and receive data from their homes.
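The 30,000-per-day figure above implies a large daily transmission volume. The sketch below shows the arithmetic; the 600,000 completed questionnaires per day is an assumed daily volume chosen to be consistent with the report's figure, not a number stated in this report.

```python
# What a 95 percent transmission success rate would have meant during
# the 2000 Census. The daily completion volume is an assumption chosen
# to match the report's ~30,000-failures-per-day figure.
DAILY_COMPLETIONS = 600_000  # assumed completed questionnaires per day
SUCCESS_RATE = 0.95          # best rate achieved during the 2004 test

failed_per_day = DAILY_COMPLETIONS * (1 - SUCCESS_RATE)
print(f"{failed_per_day:,.0f} failed transmissions per day")
```

Even a small residual failure rate compounds quickly at census scale, which is why the Bureau views 95 percent as short of the level needed for 2010.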
While individuals could make alternative arrangements, such as using a neighbor’s telephone, an increasing number of people nationwide in the coming years might give up their landline service to rely on cellular phones, which could be problematic for the Bureau. Bureau officials have noted that all these transmission problems need to be addressed before 2010. HHCs experienced memory overloads if too many assignment areas were loaded onto them. An assignment area typically contains 40 housing units or cases that are assigned to an enumerator for nonresponse follow-up. Under the design, an entire assignment area was transmitted to the HHC even when only one case needed follow-up. However, some enumerators’ HHCs became overloaded with too much data, as cases had to be reassigned due to staff turnover, a larger-than-expected number of refusals, and reassignments resulting from language problems. When HHCs became overloaded, they would crash, and enumerators had to reconfigure them at the local census office, which reduced their productivity. To the Bureau’s credit, during the test, it was able to work out a solution to avoid overloads by assigning individual cases instead of the entire assignment area to a census worker’s HHC. Another problem that surfaced during the test was that the HHC’s mapping feature was difficult to use. To contain costs and increase efficiency, the Bureau expects to replace paper maps with the electronic maps loaded on the HHCs for 2010. However, during the test, enumerators reported that they did not always use the mapping function because it ran slowly and did not provide sufficient information. Instead, they relied on local maps or city directories, and one worker explained that she found it easier to use an Internet mapping service on her home computer to prepare for her route. Without the Bureau’s maps, enumerators might not properly determine whether a housing unit was located in the Bureau’s geographic database.
This verification is important for ensuring that housing units and the people who reside in them are in the correct census block, as local and state jurisdictions use census population figures for congressional redistricting and allocating federal funds. Enumerators were also unable to use the HHCs’ “go back” function to edit questionnaires beyond a certain point in the interview. In some cases, this led to the collection of incorrect data. For example, we observed one worker complete half an interview, and then discover that the respondent was providing information on a different residence. After the census worker entered the number of residents and their names, the “go back” function was no longer available and as a result that data could not be deleted or edited. Instead, the worker added information in the “notes section” to explain that the interview had taken place at the wrong household. However, Bureau officials told us that they had not planned to review or evaluate these notes and were not aware that such address mix-ups had been documented in the notes section. To the extent address mix-ups and other inconsistencies occur and are not considered during data processing, accuracy could be compromised. In earlier censuses when the Bureau used paper questionnaires, if workers made mistakes, they could simply erase them or record the information on new forms. As mistakes are inevitable, it will be important for the Bureau to ensure that the HHCs allow enumerators to edit information, while still maintaining the integrity of the data.

Bureau Needs to Review Format of Coverage Improvement and Race/Ethnicity Questions

We found that questions designed to improve coverage and better determine race and ethnicity were awkward for enumerators to ask and confusing for respondents to answer.
Consequently, enumerators sometimes did not read the questions exactly as worded, which could adversely affect the reliability of the data collected for these items, as well as the Bureau’s ability to evaluate the impact of the revised questions. Our observations also highlight the importance of ensuring that workers are trained to follow interview protocols; this issue will be discussed later in this report. Coverage Improvement While the Bureau attempts to count everyone during a census, inevitably some people are missed and others are counted more than once. To help ensure that the Bureau properly counts people where they live, the Bureau revised and assessed its residency rules for the 2004 census test. For example, under the residence rules, college students should be counted at their campus addresses if they live and stay there most of the time. The Bureau also added two new coverage questions aimed at identifying household residents who might have been missed or counted in error (see fig. 5 for coverage questions). Enumerators were to show respondents flashcards with the residence rules to obtain the number of people living or staying in the housing unit and to read the two coverage questions. However, during our field visits we noted that they did not consistently use the flashcards, preferring to summarize them instead. Likewise, enumerators did not always ask the new coverage questions as written, sometimes abbreviating or skipping them altogether. A frequent comment from the workers we spoke with was that the two new coverage questions were awkward because the questions seemed redundant. Indeed, one census worker said that he asked the overcount and undercount questions more times than not, but if people were in a hurry, he did not ask the questions. During one of these hurried interviews, we observed that the census worker did not ask the questions and simply marked “no” for the response. 
Race and Ethnicity Questions Collecting reliable race and ethnicity data is an extremely difficult task. Both characteristics are subjective, which makes accurate measurement problematic. In 2003, the Bureau tested seven different options for formatting the race and ethnicity questions, and selected what it thought was the optimal approach to field test in 2004. The Bureau planned to examine respondent reaction to the new race and Hispanic origin questions by comparing responses collected using the paper questionnaire to responses recorded on the HHCs during nonresponse follow-up. One change the Bureau planned to analyze was the removal of the “some other race” write-in option from the questionnaire. In 2000, the Bureau found that when given this option, respondents would check off “some other race,” but did not always write in what their race was. Thus, in the 2004 test, the Bureau wanted to assess respondents’ reaction to the removal of the “some other race” write-in option. Specifically, the Bureau wanted to see whether respondents would skip the item or select from one of the other options given. However, we found that the Bureau formatted the race question on the paper questionnaire differently from the question on the HHC. As shown in figure 6, the paper version offers no category for a race other than those listed, forcing respondents either to select one of the listed categories or to skip the question entirely. This contrasts with the HHCs where, if respondents do not fit into one of the five race categories, the questionnaire format allows them to provide an “other” response and enumerators can record their answers. In fact, the HHC requires enumerators to record a response to the race question and will not allow the interview to continue until a response is entered. As a result, the two questionnaire formats could yield different data depending on the collection mode, making the responses not directly comparable. 
According to the Bureau, it formatted the paper version of the race question differently from the HHC version because it considered the “other” response option on the HHC a respondent comment and not a write-in response. Nevertheless, if the Bureau’s purpose is to measure respondent reaction to eliminating the write-in option, it is uncertain what conclusions the Bureau will be able to draw given that this option, even though in the form of a comment, is still available to the respondent during the nonresponse follow-up interview. As was the case with the coverage questions, enumerators at both test locations did not always follow proper interview procedures because they felt the questions were awkward to ask and confused respondents. For example, some workers did not use the flashcards designed to guide respondents in selecting categories for their race and ethnicity and to ensure data consistency. One census worker said that rather than use the flashcards or ask the questions, he might “eyeball” the race and ethnicity. Another worker said that most people laughed at the Spanish, Hispanic, or Latino origin question and she had complaints about the wording of this question. A third census worker noted that he was “loose with the questions” because he could pose them better. Like the lapses in the coverage improvement procedures for the 2004 census test, deviating from the interview procedures for the new race and ethnicity questions may affect the reliability of the data and the validity of the Bureau’s conclusions concerning respondent reaction to these questions. Since the 2004 census test, the 2005 Consolidated Appropriations Act required that the Bureau include “some other race” as a category when collecting census data on race identification. Consequently, the Bureau said it will include this category on all future census tests and the 2010 Census itself. 
Thus, while research into eliminating the “some other race” category is now moot, it will still be important for the Bureau to have similar formats for the HHCs and paper questionnaires so that similar data can be captured across modes. Likewise, it will be important for the wording of those questions to be clear and for enumerators to follow proper procedures during interviews. New Procedures Should Help Reduce Duplicate Enumerations of Group Quarter Residents, but Other Challenges Remain As noted previously, under its residence rules, the Bureau enumerates people where they live and stay most of the time. To facilitate the count, the Bureau divides residential dwellings into two types: housing units, such as single-family homes and apartments, and group quarters, which include dormitories, prisons, and nursing homes. The Bureau tested new group quarters procedures in 2004 that were designed to address the difficulties the Bureau had in trying to identify and count this population group during the 2000 Census. For example, communities reported instances where prison inmates were counted in the wrong county and residents of college dormitories were counted twice. One refinement the Bureau made was integrating its housing unit and group quarter address lists in an effort to avoid counting them once as group quarters and again as housing units, a common source of error during the 2000 Census. Census workers were then sent out to verify whether the dwellings were in fact group quarters and, if so, to classify the type of group quarter using a revised “other living quarters facility” questionnaire. A single address list could, in concept, help reduce the duplicate counting that previously occurred when the lists were separate. Likewise, we observed that census workers had no problems using the revised facility questionnaire and accompanying flashcard that allowed the respondent to select the appropriate type of living facility. 
This new procedure addresses some of the definitional problems by shifting the responsibility for defining the group quarter type from the Bureau to the respondent, who is in a better position to know about the dwelling. Another change tested in 2004 was the classification of group homes, which in 2000 was a part of the group quarter inventory. Group homes are sometimes difficult for census workers to spot because they often look the same as conventional housing units (see fig. 7). As a result, they were sometimes counted twice during the 2000 Census—once as a group quarter, and once as a housing unit. For the 2004 test, the Bureau decided to treat group homes as housing units and include them in the housing unit list. Early indications from the Bureau suggest that including group homes as housing units, whereby they receive a short-form questionnaire in the mail, may not work. According to the Bureau, the format of the short form is not well suited to group home residents. For example, the questionnaire asks for the “name of one of the people living or staying here who owns or rents this place.” Since the state or an agency typically owns group homes, these instructions do not apply. The Bureau stated that it plans to reassess how it will identify and count people living in group homes. We identified other problems with the Bureau’s group quarters validation operation during the 2004 census test. For example, we were told that census workers were provided maps of the areas they were assigned but needed maps for adjoining areas so that they could more accurately locate the physical location of the group quarters. In Georgia, where workers used address data from the 2000 Census, the crew leader explained that approximately one-third of all the addresses provided were incorrectly spotted on maps and had to be redone. 
They also lacked up-to-date instructions—for example, they did not know that they were to correct addresses rather than just delete them if the addresses were wrong. Further, census workers said that scenarios in the manual and classroom training were based on perfect situations; thus, they did not provide adequate training for atypical settings or when problems arose. The Bureau Should Rethink Its Approach to Training Enumerators The success of the census is directly linked to the Bureau’s ability to train enumerators to do their jobs effectively. This is a tremendous task given the hundreds of thousands of enumerators the Bureau needs to hire and train in just a few weeks. Further, enumerators are temporary employees, often with little or no prior census experience, and are expected, after just a few days of training, to do their jobs with minimal supervision, under sometimes difficult and dangerous conditions. Moreover, the individuals who train enumerators—crew leaders—are often recent hires themselves, with little, if any, experience as instructors. Overall, few, if any, organizations face the training challenges that confront the Bureau with each decennial population count. To train the 1,100 enumerators who conducted nonresponse follow-up for the 2004 test, the Bureau employed essentially the same approach it has used since the 1970 Census: crew leaders read material word-for-word from a training manual to a class of 15 to 20 students. The notable exception was that in transitioning from a paper questionnaire to the HHCs, the Bureau lengthened the training time from 3 days to 5 days. However, given the demographic and technological changes that have taken place since 1970, the Bureau might want to explore alternatives to this rigid approach. As noted earlier, during nonresponse follow-up, enumerators experienced a variety of problems that could be mitigated through improved training. 
The problems included difficulties setting up equipment to transmit and download data; failure to read the coverage and race/ethnicity questions exactly as worded; and not properly using the flashcards, which were designed to help respondents answer specific questions. Most of the shortcomings related to training that we observed during the test were not new. In fact, the Bureau had identified these and a number of other training weaknesses in its evaluation of the 2000 Census, but it is clear they have not been fully resolved. Thus, as the Bureau plans for the 2010 Census, it will be important for it to resolve long-standing training problems as well as address new training issues, such as how best to teach enumerators to use the HHCs and their associated automated processes. Our observations of the test point to specific options the Bureau might want to explore. They include (1) placing greater emphasis on the importance of following prescribed interview procedures and reading questions exactly as worded; (2) supplementing verbatim, uniform training with modules geared toward addressing the particular enumeration challenges that census workers are likely to encounter at specific locales; and (3) training on how to deal with atypical situations or respondent reluctance. To help evaluate its future training needs, the Bureau hired a contractor to review the training for the 2004 test and recommend actions for improving it. From GAO’s work on assessing agencies’ training and development efforts, we have developed a framework that can also help in this regard. Though too detailed to discuss at length in this report, highlights of the framework, and how they could be applied to census training, include: 1. 
performing proper front-end analysis to help ensure that the Bureau’s enumerator training is aligned with the skills and competencies needed to meet its field data collection requirements and work processes and that the Bureau leverages best practices and lessons learned from training enumerators and from past experience; 2. identifying specific training initiatives that, in conjunction with other strategies, improve enumerators’ performance and help the Bureau meet its goal of collecting high-quality data from nonrespondents; 3. ensuring effective and efficient delivery of training that reinforces new and needed competencies, skills, and behaviors without being wedded to past, and perhaps outmoded, methods; and 4. evaluating the training to ensure it is addressing known skill and competency weaknesses through such measures as assessing participant reactions and changes in enumerators’ skill levels and behaviors. Readiness Will Be Critical for Future Tests Several key features of the 2004 test were not test ready; that is, they were not fully functional or mature when they were employed at the test sites. This is a serious shortcoming because it hampered the Bureau from fully evaluating and refining the various census-taking procedures that will be used in subsequent tests and the actual census in 2010. Further, to the extent these features were integrated with other operations, it impeded the Bureau from fully assessing those associated activities as well. Our work, and that of the Department of Commerce Inspector General, identified the following areas where the Bureau needed to be more prepared going into the test: The HHCs crashed, in part, because earlier testing did not identify software defects that caused the download of more data to the HHCs than their memory cards could hold. Transmission failures occurred during enumerator training, in part, because the HHCs were shipped without the latest version of needed software. 
Although the Bureau ultimately provided the latest software after several weeks, the upgraded version was unavailable for training field operations supervisors and crew leaders and for the initial enumerator training. According to the Department of Commerce Inspector General, the Bureau finalized the requirements for the new group quarter definitions too late for inclusion in group quarters training manuals. Consequently, the training lacked certain key instructions, such as how to categorize group homes. The Bureau experienced other glitches during the test that, with better preliminary testing or on-site dry runs, might have been detected and possibly addressed before the test started. These included the slow start-up of the HHC’s mapping function, and the tendency for apartment house intercoms to interrupt transmissions. An important objective of any type of test is to identify what is working and where improvements are needed. Thus, it should not be surprising, and, in fact, should be expected and commended, that shortcomings were found with some of the various activities and systems assessed during the 2004 test. We believe that the deficiency is not the existence of problems; rather it is the fact that several components were incomplete or still under development going into the test, which made it difficult for the Bureau to gauge their full potential. The Bureau had a similar experience in the dress rehearsal for the 2000 Census, when, because a number of new features were not test ready, the Bureau said it could not fully test them with any degree of assurance as to how they would affect the head count. Because of the tight time frames and deadlines of the census, the Bureau needs to make the most of its limited testing opportunities. 
Thus, as the Bureau plans for the next field test in 2006 and the 2008 dress rehearsal, it will be important for the Bureau to ensure the various census operations are fully functional at the time of the test so they can be properly evaluated. Conclusions The Bureau is well aware that a successful enumeration hinges on early research, development, testing, and evaluation of all aspects of the census design. This is particularly true for the 2010 Census for which, under its current plan, the Bureau will be relying on HHCs and other methods and technologies that (1) have never been used in earlier censuses and (2) are mission critical. Consequently, the 2004 test was an important milestone in the 2010 life cycle because it demonstrated the fundamental feasibility of the Bureau’s basic design and allows the Bureau to advance to the next and more mature phase of planning and development. Nevertheless, while the test revealed no fatal flaws in the Bureau’s approach, the results highlighted serious technical, training, methodological, and procedural difficulties that the Bureau will need to resolve. Since one of the purposes of testing is to determine the operational feasibility of the census design, it is not surprising that problems surfaced. However, looking toward the future, it will be critical for the Bureau to diagnose the source of these challenges, devise cost-effective solutions, and integrate refinements and fixes in time to be assessed during the next field test scheduled for 2006. It will also be important for Congress to monitor the Bureau’s progress as it works to resolve these issues. 
Recommendations for Executive Action To facilitate effective census planning and development, and to help the Bureau achieve its key goals for the census—reduce risks, improve accuracy, and contain costs—we recommend that the Secretary of Commerce direct the Bureau to take the following eight actions: Analyze the impact that HHCs and the targeted second mailing had on cost savings and other Bureau objectives. Ensure the Bureau’s IT security practices are in full compliance with applicable requirements, such as FISMA, as well as its own internal policies. Enhance the reliability and functionality of HHCs by, among other actions, (1) improving the dependability of transmissions, (2) exploring the ability to speed up the mapping feature, (3) eliminating the causes of crashes, and (4) making it easier for enumerators to edit questionnaires. Define specific, measurable performance requirements for the HHCs and other census-taking activities that address such important measures as productivity, cost savings, reliability, and durability, and test their ability to meet those requirements in 2006. Review and test the wording and formatting of the coverage and race/ethnicity questions to make them less confusing to respondents and thus help ensure the collection of better quality data, and ensure they are formatted the same way on both the HHC and paper versions of the census form. Develop a more strategic approach to training by ensuring the curriculum and instructional techniques (1) are aligned with the skills and competencies needed to meet the Bureau’s data collection requirements and methodology and (2) address challenges identified in the 2004 test and previous censuses. Revisit group quarter procedures to ensure they allow the Bureau to best locate and count this population group. 
Ensure that all systems and other census-taking functions are as mature as possible and test ready prior to their deployment for the 2006 test, in part by conducting small-scale, interim tests under the various conditions and environments the Bureau is likely to encounter during the test and actual enumeration. Further, to ensure the transparency of the census-planning process and facilitate Congressional monitoring, we also recommend that the Secretary of Commerce direct the Bureau to regularly update Congress on the progress it is making in addressing these and any other challenges, as well as the extent to which the Bureau is on track for meeting the overall goals of the 2010 Census. Agency Comments and Our Evaluation The Under Secretary for Economic Affairs at the Department of Commerce forwarded us written comments from the Census Bureau on a draft of this report on December 20, 2004, which are reprinted in appendix I. The Bureau noted that the 2004 test was its first opportunity to assess a number of the new methods and technologies under development for 2010, and emphasized the importance of a sustained, multiyear planning, testing, and development program to its census modernization effort. The Bureau generally agreed with seven of our nine recommendations, and described the steps it was taking to address our concerns. The Bureau also provided additional context and clarifying language and we have added this information to the report where appropriate. Specifically, the Bureau generally agreed with our recommendations relating to improving IT security practices, the reliability of the HHCs, training, testing, and enumeration procedures—and reported it was already taking a number of steps to address our concerns. We commend the Bureau for recognizing the risks and challenges that lie ahead and taking action to address them. We will continue to monitor the Bureau’s progress in resolving these issues and update Congress on a regular basis. 
At the same time, the Bureau took exception to our recommendations to (1) analyze the impact that HHCs and targeted second mailings had on cost savings and other Bureau objectives, and (2) define specific, measurable performance requirements for the HHCs and other census-taking activities and test their ability to meet those requirements in 2006. With respect to the first recommendation, the Bureau noted that it did not establish cost-savings and other impacts as test objectives, in part, because the Bureau believes that the national sample mail test that it conducted in 2003 provided a better method for determining the boost in response rates that could accrue from a second mailing. The Bureau maintains that analyzing the impact of the second mailing would provide it with no more information beyond what it has already established from the 2003 test and would be of little value. We believe this recommendation still applies because it will be important for the Bureau to assess the impact of the targeted second mailing on other Bureau objectives. As we noted in the report, the Bureau included the HHCs and targeted second mailing in the 2010 Census design, in part, to reduce staff, improve productivity, and control costs. Further, as we also note in the report, the feasibility of a targeted second mailing is an open question. Thus, information on the degree to which the HHCs and second mailing contribute to these key goals could help inform future budget estimates, investment and design decisions, as well as help refine future census tests. In short, the purpose of the analysis we recommend would not be to see whether these features of the 2010 Census will produce cost-savings, but the extent of those savings and the impact on other Bureau objectives. 
With respect to the second recommendation, the Bureau noted that it had “baseline assumptions” about productivity, cost-savings, and other measures for the 2004 Census test and that a key objective of the test was to gather information to help refine these assumptions. According to the Bureau, this will also be a key objective of the 2006 Census Test, although its performance goal will not be whether it meets specific measures. Instead, the Bureau intends to focus on successfully collecting information to further refine those assumptions. As a result, the Bureau believes the 2006 test will not be a failure if HHC productivity is not achieved, but that it will be a failure if productivity data are not collected. The Bureau’s position is inconsistent with our recommendation, which we believe still applies. As noted in the report, we call on the Bureau to define measurable performance requirements for the HHCs as well as take the next step and assess whether the HHCs can meet those requirements as part of the 2006 test. This information is essential because it will help the Bureau gauge whether HHCs can meet its field data collection needs in 2010. Should the HHCs fail to meet these pre-specified performance requirements during the 2006 test, the Bureau would need to rethink how it employs these devices in 2010. As agreed with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Secretary of Commerce and the Director of the U.S. Census Bureau. Copies will be made available to others on request. This report will also be available at no charge on GAO’s home page at http://gao.gov. Please contact me at (202) 512-6806 or daltonp@gao.gov or Robert Goldenkoff, Assistant Director, at (202) 512-2757 or goldenkoffr@gao.gov if you have any questions. 
Key contributors to this report were Tom Beall, David Bobruff, Betty Clark, Robert Dacey, Richard Donaldson, Elena Lipson, Ronald La Due Lake, Robert Parker, Lisa Pearson, and William Wadsworth.
A rigorous testing and evaluation program is a critical component of the census planning process because it helps the U.S. Census Bureau (Bureau) assess activities that show promise for a more cost-effective head count. The Bureau conducted a field test in 2004, and we were asked to (1) assess the soundness of the test design and the extent to which the Bureau implemented it consistent with its plans, (2) review the quality of the Bureau's information technology (IT) security practices, and (3) identify initial lessons learned from conducting the test and their implications for future tests and the 2010 Census. The Bureau's design for the 2004 census test addressed important components of a sound study, and the Bureau generally implemented the test as planned. For example, the Bureau clearly identified its research objectives, developed research questions that supported those objectives, and developed evaluation plans for each of the test's 11 research questions. The initial results of the test suggest that while certain new procedures show promise for improving the cost-effectiveness of the census, the Bureau will have to first address a number of problems that could jeopardize a successful head count. For example, enumerators had little trouble using hand held computers (HHC) to collect household data and remove late mail returns. The computers could reduce the Bureau's reliance on paper questionnaires and maps and thus save money. The test results also suggest that certain refinements the Bureau made to its procedures for counting dormitories, nursing homes, and other "group quarters" could help prevent the miscounting of this population group. Other aspects of the test did not go as smoothly. 
For example, security practices for the Bureau's IT systems had weaknesses; the HHCs had problems transmitting data; questionnaire items designed to improve coverage and better capture race/ethnicity confused respondents; enumerators sometimes deviated from prescribed enumeration procedures; and certain features of the test were not fully operational at the time of the test, which hampered the Bureau from fully gauging their performance. With few testing opportunities remaining, it will be important for (1) the Bureau to find the source of these problems, devise cost-effective solutions, and integrate refinements before the next field test scheduled for 2006, and (2) Congress to monitor the Bureau's progress in resolving these issues.
Background The interstate commercial motor carrier industry, primarily the trucking industry, is an important part of the nation’s economy. Trucks transport over 11 billion tons of goods, or about 60 percent of the total domestic tonnage shipped. Buses also play an important role, transporting an estimated 860 million passengers in 2005. FMCSA estimates that there are 711,000 interstate commercial motor carriers, about 9 million trucks and buses, and about 10 million drivers. Most motor carriers are small; about 51 percent operate one vehicle, and another 31 percent operate two to four vehicles. Carrier operations vary widely in size, however, and some of the largest motor carriers operate upwards of 58,000 vehicles. Carriers continually enter and exit the industry. Since 1998, the industry has increased in size by an average of about 29,000 interstate carriers per year. In the United States, commercial motor carriers account for fewer than 5 percent of all highway crashes, but these crashes result in about 13 percent of all highway deaths, or about 5,500 of the approximately 43,000 highway fatalities that occur nationwide annually. In addition, on average, about 160,000 of the approximately 3.2 million highway injuries per year involve motor carriers. The fatality rate for trucks has generally decreased over the past 30 years but has been fairly stable since 2002. The fatality rate for buses decreased slightly from 1975 to 2005, but it has more annual variability than the fatality rate for trucks due to a much smaller total number of vehicle miles traveled. (See fig. 1.) In an attempt to reduce the number and severity of crashes involving large trucks, FMCSA was established by the Motor Carrier Safety Improvement Act of 1999. FMCSA assumed almost all of the responsibilities and personnel of the Federal Highway Administration’s Office of Motor Carriers. The agency’s primary mission is to reduce the number and severity of crashes involving large trucks and buses. 
It carries out this mission by (1) issuing, administering, and enforcing federal motor carrier safety regulations and hazardous materials regulations; (2) providing education and outreach for motor carriers and drivers on the safety regulations and hazardous materials regulations; (3) gathering and analyzing data on motor carriers, drivers, and vehicles; (4) developing information systems to improve the transfer of data; and (5) researching new methods and technologies to enhance motor carrier safety. FMCSA relies heavily on the results of compliance reviews to determine whether carriers are operating safely and, if not, to take enforcement action against them. (See fig. 2.) FMCSA conducts these on-site reviews to determine carriers’ compliance with safety regulations that address areas such as testing drivers for alcohol and drugs, insurance coverage, crashes, driver qualifications, driver hours of service, vehicle maintenance and inspections, and transportation of hazardous materials. Due to resource constraints, FMCSA and its state partners are able to conduct compliance reviews on only about 2 percent of the nation’s estimated 711,000 interstate motor carriers each year. It is FMCSA’s policy to target these reviews at carriers that have been assessed by SafeStat as having the highest risk of crashes, have been the subject of a safety-related complaint submitted to FMCSA, have been involved in a fatal accident, have requested an upgraded safety rating based on safety improvements, or have been assigned a safety rating of conditional following a previous compliance review. Based largely on the number and severity of violations that it identifies during compliance reviews, FMCSA assigns carriers safety ratings that determine whether they are allowed to continue operating. 
FMCSA can take a range of enforcement actions against carriers with violations, including issuing notices of violation informing carriers of identified violations and indicating that additional enforcement action may be taken if the violations are not corrected; issuing compliance orders directing carriers to perform certain actions that FMCSA considers necessary to bring the carrier into compliance with regulations; assessing fines, which require carriers to pay a specific dollar amount to FMCSA, for violations of the safety regulations; placing carriers or drivers out of service for unsatisfactory safety performance, failure to pay a fine, or imminently hazardous conditions or operations; revoking the operating authority of carriers for failure to carry the required amount of insurance coverage; pursuing criminal penalties in some instances when knowing and willful violations can be proved; and seeking injunctions from a court for violations of a final order such as an out-of-service order. FMCSA has 52 division offices that partner with the 56 recipients of its Motor Carrier Safety Assistance Program grants. FMCSA also funds and oversees enforcement activities, including compliance reviews, at the state level through this grant program. The program was appropriated $188 million, or about 38 percent, of FMCSA’s $501 million appropriation for fiscal year 2006. In fiscal year 2006, FMCSA conducted 9,719 compliance reviews, and its state partners conducted 5,463 compliance reviews. SafeStat assesses carriers’ risks relative to all other carriers based on safety indicators such as their crash rates and safety violations identified during roadside inspections and during prior compliance reviews. A carrier’s score is calculated on the basis of its performance in the following four safety evaluation areas: The accident area reflects a carrier’s crash history relative to other motor carriers based on data from states and MCMIS. 
The driver area reflects a carrier’s driver-related safety performance and compliance relative to other motor carriers based on driver violations identified during roadside inspections and compliance reviews. The vehicle area reflects a carrier’s vehicle-related safety performance and compliance relative to other motor carriers based on vehicle-related violations identified during roadside inspections and compliance reviews. The safety management area reflects the carrier’s safety management performance relative to other motor carriers based on safety-management-related violations (such as failing to implement a drug or alcohol testing program) and hazardous-materials-related violations identified during compliance reviews and on closed enforcement cases resulting from compliance reviews. A motor carrier’s score is based on the carrier’s relative ranking, indicated as a value, in each of the four safety evaluation areas. This value can range from 0 to 100 in each area, and any value of 75 or greater is considered deficient. Any value of less than 75 is not considered deficient and is not used in calculating a SafeStat score. FMCSA assigns categories to carriers ranging from A to H according to their performance in each of the safety evaluation areas. (See table 1.) Although a carrier may receive a value in any of the four safety evaluation areas, the carrier receives a SafeStat score only if it is deficient in two or more safety evaluation areas. The calculation used to determine a motor carrier’s SafeStat score is:

SafeStat score = 2 x accident value + 1.5 x driver value + vehicle value + safety management value

As shown in the formula, the accident and driver areas have 2.0 and 1.5 times the weight, respectively, of the vehicle and safety management areas. FMCSA assigned more weight to these areas because accidents and driver violations correlate relatively better with future crash risk. 
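The scoring rule just described can be sketched in a few lines of code. This is an illustrative re-implementation based solely on the rule as stated above; the function and variable names are ours, not FMCSA's:

```python
DEFICIENT = 75  # values of 75 or greater are considered deficient

def safestat_score(accident, driver, vehicle, safety_mgmt):
    """Return a carrier's SafeStat score, or None if the carrier is
    deficient in fewer than two of the four safety evaluation areas.
    Each value ranges from 0 to 100; values below 75 are not deficient
    and are not used in calculating the score."""
    values = {"accident": accident, "driver": driver,
              "vehicle": vehicle, "safety_mgmt": safety_mgmt}
    deficient = {area: v for area, v in values.items() if v >= DEFICIENT}
    if len(deficient) < 2:
        return None  # no SafeStat score is assigned
    weights = {"accident": 2.0, "driver": 1.5,
               "vehicle": 1.0, "safety_mgmt": 1.0}
    return sum(weights[area] * v for area, v in deficient.items())
```

For example, a carrier deficient in the accident (80) and driver (90) areas would receive a score of 2 x 80 + 1.5 x 90 = 295, while a carrier deficient in the accident area alone would receive no SafeStat score.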
In consultation with state transportation officials, insurance industry representatives, safety advocates, and the motor carrier industry, FMCSA used its expert judgment and professional knowledge to assign these weights, rather than determining them through a statistical approach, such as regression modeling. Based on the results of a compliance review, FMCSA assigns the carrier a safety rating of satisfactory, conditional, or unsatisfactory. The safety rating, which is distinct from a carrier’s SafeStat category, reflects FMCSA’s determination of a carrier’s fitness to operate safely. FMCSA issues out-of-service orders to carriers rated unsatisfactory, and these carriers are not allowed to resume operating until they make improvements that result in an upgraded safety rating. Carriers rated conditional are allowed to continue operating, but FMCSA aims to conduct follow-up compliance reviews on these carriers. FMCSA assigns safety ratings based on a carrier’s performance in six areas. (See table 2.) One area is the carrier’s accident rate, and the other five areas involve its compliance with regulations. The five regulation-based areas are (1) minimum insurance coverage and procedures for handling and evaluating accidents; (2) drug and alcohol use and testing, commercial driver’s license standards, and driver qualifications; (3) driver hours of service; (4) vehicle parts and accessories necessary for safe operation; inspection, repair, and maintenance of vehicles; and (5) transportation of hazardous materials. Regardless of a carrier’s safety rating, FMCSA can assess a fine against a carrier with violations, and it is more likely to assess higher fines when these violations are serious. FMCSA uses a tool to help it determine the dollar amounts of its fines. 
Federal law requires FMCSA to assess the maximum allowable fine against a carrier for each serious violation of federal motor carrier safety and commercial driver’s license laws if the carrier is found to have a pattern of such violations or a record of previously committing the same or a related serious violation. FMCSA’s Policy for Prioritizing Compliance Reviews Targets Many High-Risk Carriers, but Changes to the Policy Could Target Carriers with Even Higher Risk SafeStat identifies many carriers that pose high crash risks. However, modifying FMCSA’s policy that carriers have to score among the worst 25 percent of carriers in two or more safety evaluation areas to receive high priority for a compliance review, and focusing more on crash risk, could result in the selection of carriers with a higher aggregate crash risk. FMCSA recognizes that SafeStat can be improved, and as part of its Comprehensive Safety Analysis 2010 reform initiative, which is aimed at improving its processes for identifying and dealing with unsafe carriers, the agency is considering replacing SafeStat with a new tool by 2010. FMCSA’s Policy for Prioritizing Compliance Reviews Leads the Agency to Conduct Compliance Reviews on Many High-Risk Carriers but Not on Other Higher-Risk Ones FMCSA’s policy for prioritizing carriers for compliance reviews based on their SafeStat scores results in FMCSA’s conducting compliance reviews on carriers with a higher aggregate crash risk than carriers that are not selected. As a result, FMCSA’s prioritization policy has value as a method for targeting high-risk carriers. But changes to the policy could result in targeting carriers with an even higher aggregate crash risk. 
According to our analysis of SafeStat’s June 2004 categorization of carriers, the 4,989 carriers that received high priority for a compliance review (SafeStat categories A or B) had a higher aggregate crash risk (102 crashes per 1,000 vehicles in the 18 months following the SafeStat categorization) than the remaining 617,034 carriers (27 crashes per 1,000 vehicles). (See table 3.) However, the 2,464 carriers that scored among the worst 25 percent of carriers in the accident evaluation area alone (SafeStat category D) had a slightly higher aggregate crash risk (112 crashes per 1,000 vehicles) than did the carriers in SafeStat categories A or B. Furthermore, the 1,090 carriers that scored among the worst 10 percent and the 492 carriers that scored among the worst 5 percent of carriers in the accident area (and did not score among the worst 25 percent of carriers in any other area) had even higher aggregate rates of 148 and 213 crashes per 1,000 vehicles, respectively. Our analysis suggests that FMCSA’s targeting of high-risk carriers could be enhanced by giving high priority for a compliance review to carriers that score among the worst 25, 10, or 5 percent of carriers in the accident evaluation area alone. We recognize that giving such carriers high priority for a compliance review would increase FMCSA’s and the states’ compliance review workloads unless FMCSA were to make another change to its prioritization policy that resulted in removing the same number of carriers from the high-priority categories A and B. For example, if FMCSA had given high priority to the 492 carriers that scored among the worst 5 percent of carriers in the accident evaluation area in June 2004, it could have removed the 492 carriers in categories A or B with the lowest SafeStat score in order to hold its and the states’ compliance review workloads constant. 
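The workload-neutral substitution just described can be expressed in a short sketch. The carrier records, field names, and function names here are hypothetical; the sketch only illustrates the mechanics of computing an aggregate crash rate per 1,000 vehicles and swapping the lowest-scoring high-priority carriers for an equal number of worst accident-area performers:

```python
def crash_rate_per_1000(carriers):
    """Aggregate crashes per 1,000 vehicles across a group of carriers."""
    crashes = sum(c["crashes"] for c in carriers)
    vehicles = sum(c["vehicles"] for c in carriers)
    return 1000.0 * crashes / vehicles

def swap_lowest_for(high_priority, replacements):
    """Hold the compliance review workload constant: drop as many of the
    lowest-scoring high-priority carriers as there are replacements, and
    substitute the replacements (e.g., worst 5 percent in the accident
    area, which have no overall SafeStat score of their own)."""
    ranked = sorted(high_priority, key=lambda c: c["safestat"], reverse=True)
    return ranked[:len(high_priority) - len(replacements)] + replacements
```

Applied to the June 2004 data, this kind of substitution would have kept the number of compliance reviews constant while raising the aggregate crash risk of the group selected for review.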
The lowest-scoring carriers in categories A and B had an aggregate crash risk of 65 crashes per 1,000 vehicles, less than one-third the crash risk of the carriers that could have replaced them (214 crashes per 1,000 vehicles). We also found that carriers that scored among the worst 25 percent, 10 percent, or 5 percent of carriers in either the driver, vehicle, or safety management areas (and did not score among the worst 25 percent of carriers in any other area) had a lower aggregate crash risk than carriers in SafeStat categories A or B. Of these various groups of carriers with poor performance in a single area, the carriers that scored among the worst 10 percent of carriers in the driver area had the highest aggregate crash risk (70 crashes per 1,000 vehicles). A Regression Model Performs Better Than Current SafeStat Model and the Prioritization Approach We Developed In our June 2007 report, we estimated that FMCSA could improve SafeStat’s performance by about 9 percent by using a statistical regression model approach to weight the accident, driver, vehicle, and safety management evaluation areas instead of its current approach, which is based on expert judgment. Employing this approach would have allowed FMCSA to identify carriers with almost twice as many crashes in the following 18 months as those carriers identified under its current approach. We found that although the driver, vehicle, and safety management evaluation area scores are correlated with the future crash risk of a carrier, the accident evaluation area correlates the most with future crash risk and should be weighted more heavily than the current SafeStat formula weights this area. These results corroborate studies performed by the Volpe National Transportation Systems Center and Oak Ridge National Laboratory, the latter of which also employed statistical approaches. (See app. I for a discussion of these studies.) 
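As a toy illustration of deriving evaluation-area weights from data rather than expert judgment, the sketch below fits a simple one-variable least-squares regression of later crash rates on each area's score and uses the fitted slopes as weights. The data are synthetic and the report's actual model relates crash risk to the evaluation areas jointly; this sketch only conveys the idea that, when crash history is the strongest predictor, a data-driven fit assigns it the largest weight:

```python
def ols_slope(xs, ys):
    """Slope of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical carrier histories: per-area scores (0-100) and the crash
# rate (crashes per 1,000 vehicles) observed over the following 18 months.
# Three of the four areas are shown for brevity.
scores = {
    "accident": [60, 70, 80, 90, 100],
    "driver":   [70, 60, 90, 80, 100],
    "vehicle":  [70, 80, 90, 60, 100],
}
later_crash_rate = [50, 60, 110, 120, 160]

# The fitted slopes serve as data-driven evaluation-area weights.
weights = {area: ols_slope(xs, later_crash_rate)
           for area, xs in scores.items()}
```

In this constructed example the accident-area slope comes out largest, mirroring the finding that the accident area correlates most with future crash risk, and the fit would update automatically as new data arrived.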
We believe that our regression model approach from our June 2007 report is preferable to the prioritization approach we developed in this report because it provides for a systematic assessment of the relative contributions of accidents and driver, vehicle, and safety management violations. That is, by its very nature, the regression model approach looks for the “best fit” in identifying the degree to which prior accidents and driver, vehicle, and safety management violations identify the likelihood of carriers having crashes in the future, compared with the current SafeStat approach and the prioritization approach we developed for this report, both of which use expert judgment to establish the relationship among the four evaluation areas. In addition, because the regression model could be run monthly—as is the current SafeStat model—any change in the degree to which accidents and driver, vehicle, and safety management violations better identify future crashes will be automatically considered as different weights are assigned to the four evaluation areas. This is not the case with the current SafeStat model, in which the evaluation area weights generally remain constant over time. Thus, the systematic assessment and the automatic updating of evaluation area weights using a regression model approach better ensure the targeting of carriers that pose high crash risks—both currently and in the future. We compared the performance of our regression model approach to the current SafeStat model and to two alternative approaches that employ the current SafeStat model approach (with the current weighting of evaluation areas) but give higher priority to some carriers in category D (carriers that scored among the worst 25 percent of carriers in only the accident evaluation area). 
The two alternatives were substituting carriers in the worst 5 percent of the accident evaluation area for carriers in SafeStat categories A and B with (1) the lowest accident area scores and (2) the lowest overall SafeStat numerical scores. The regression model approach performed better than the current SafeStat approach and at least as well as the alternatives discussed in this report, in terms of identifying carriers that experienced a higher aggregate crash rate or a greater number of crashes. (See table 4.) For example, the regression model approach identified carriers with an average of 111 crashes per 1,000 vehicles over an 18-month period compared with the current SafeStat approach that identified carriers for compliance reviews with an average of 102 crashes per 1,000 vehicles. The regression model approach also performed at least as well as the alternatives discussed in this report in terms of identifying carriers with the highest aggregate crash rate and much better than the alternatives in identifying carriers with the greatest number of crashes. Finally, the alternatives discussed in this report were superior to the results of FMCSA’s current prioritization policy in terms of identifying carriers with both a higher aggregate crash rate and a greater number of crashes. FMCSA officials told us that the agency plans to assess whether the approach developed in this report—giving high priority to carriers that perform very poorly in only the accident evaluation area (such as those that scored among the worst 5 percent)—would be an effective use of its resources. However, FMCSA officials expressed concern that adopting our regression model approach would reduce the effectiveness of FMCSA’s compliance review program by targeting many compliance reviews at carriers that, despite high crash rates, have good compliance records. 
FMCSA believes that compliance reviews of such carriers, compared with compliance reviews of carriers in SafeStat categories A or B (carriers that, by definition, have a history of noncompliance), have less potential to reduce accidents. FMCSA said that this is because compliance reviews are designed to reduce crashes by identifying safety violations that some carriers then correct, and compliance reviews of carriers with good compliance records but high crash rates have historically identified fewer serious violations than compliance reviews of carriers in SafeStat categories A and B. FMCSA officials told us that, as part of its Comprehensive Safety Analysis 2010 reform initiative, the agency is evaluating the potential for new ways to address motor carriers that are having crashes, but that it believes are not good candidates for the compliance review tool. (See the discussion on FMCSA’s Comprehensive Safety Analysis 2010 reform initiative in a subsequent section.) We agree with FMCSA that the use of our model could tilt enforcement heavily toward carriers with high crash rates and away from carriers with compliance problems. We believe that use of the model would enhance motor carrier safety, even if it resulted in FMCSA reviewing carriers with good compliance records. FMCSA’s mission—and the ultimate purpose of compliance reviews—is to reduce the number and severity of truck and bus crashes. As previously discussed, we found that while driver, vehicle, and safety management evaluation area scores are correlated with the future crash risk of a carrier, high crash rates are a stronger predictor of future crashes than is poor compliance with safety regulations. These facts suggest that FMCSA would improve motor carrier safety more by targeting carriers with high crash rates, even if they have better compliance records, than by targeting carriers in SafeStat categories A and B with significantly lower crash rates but with worse compliance records. 
The missing piece in the puzzle is that FMCSA does not have a good understanding of why some carriers, despite good compliance records, have high crash rates; how compliance reviews affect their crash rates; and what other approaches may be effective in reducing their crash rates. We believe that developing this understanding would be a natural outgrowth of implementing our regression model approach. FMCSA officials also said that placing more emphasis on the accident evaluation area would increase emphasis on the least reliable type of data used by SafeStat—crash data—and in so doing, it would increase the sensitivity of the results to crash data quality issues. However, our June 2007 report found that FMCSA has made a considerable effort to improve the reliability of crash data. That report also concluded that as FMCSA continues its efforts to have states improve crash data, any sensitivity of results from our regression model approach to crash data quality issues should diminish. FMCSA officials were also concerned that our issuing two reports on SafeStat within several months of each other could be interpreted as an indictment of SafeStat and of FMCSA’s responsiveness to our June 2007 report on this issue. This is not the case. SafeStat does a good job of identifying carriers that pose high crash risks. As we reported in June 2007, we found that SafeStat is nearly twice as effective (83 percent better) as random selection in identifying carriers that pose high crash risks and, therefore, has value for improving safety. Nonetheless, we found that FMCSA’s policy for prioritizing compliance reviews could be improved by applying either our regression model approach or one of the prioritization approaches we developed in this report. 
While we believe that the regression model approach provides somewhat better safety results, we understand, as discussed in our June 2007 report, that it could require FMCSA to re-educate the motor carrier industry and others, such as safety advocates, insurers, and the public, about the new approach. We would prefer that FMCSA implement our recommendation that it use our regression model approach, but adopting either our regression model approach or one of the prioritization approaches we developed in this report would, in our opinion, improve FMCSA’s targeting of high-risk carriers. The recommendation that we make in this report reflects this conclusion. Finally, FMCSA has been very helpful and responsive during both our—largely concurrent—reviews. FMCSA Has Acted to Address Data Quality Problems That Potentially Hinder SafeStat’s Ability to Identify High-Risk Carriers For our June 2007 report, we assessed the quality of the data used by SafeStat and the degree to which the quality of the data affects SafeStat’s identification of high-risk carriers, and we identified actions FMCSA has taken to improve the quality of the data used by SafeStat. We found that crash data reported by the states from December 2001 through June 2004 have problems in terms of timeliness, accuracy, and completeness that potentially hinder FMCSA’s ability to identify high-risk carriers. Regarding timeliness, we found that including late-reported data had a small impact on SafeStat—had all crash data been reported within 90 days of when the crashes occurred, 182 of the carriers identified by SafeStat as highest risk would have been excluded (because other carriers had higher crash risks), and 481 carriers that were not originally designated as posing high crash risks would have scored high enough to be considered high risk, resulting in a net addition of 299 carriers (or 6 percent) to the original 4,989 carriers that the SafeStat model ranked as highest risk in June 2004. 
We were not able to quantify the effect of incomplete or inaccurate data on SafeStat’s ability to identify carriers that pose high crash risks, because doing so would have required us to gather crash records at the state level—an effort that was impractical. FMCSA has acted to improve the quality of SafeStat’s data by completing a comprehensive plan for data quality improvement, implementing an approach to correct inaccurate data, and providing grants to states for improving data quality, among other things. We could not quantify the effects of FMCSA’s efforts to improve the completeness or accuracy of the data for the same reason as just mentioned. (See app. II for a more detailed discussion of the quality of the data used by SafeStat.) FMCSA Is Considering Replacing SafeStat with a New Tool by 2010 As part of its Comprehensive Safety Analysis 2010, a reform initiative aimed at improving its processes for identifying and dealing with unsafe carriers and drivers, FMCSA is considering replacing SafeStat with a new tool by 2010. The new tool could take on greater importance in FMCSA’s safety oversight framework because the agency is considering using the tool’s assessments of carriers’ safety to determine whether carriers are fit to continue operating. In contrast, SafeStat’s primary use now is in prioritizing carriers for compliance reviews, and determinations of operational fitness are made only after the compliance reviews are completed. While the new tool may use some of the same data included in SafeStat, such as carriers’ crash rates and driver and vehicle violations identified during compliance reviews and roadside inspections, it may also consider a broader range of behavioral data related to crashes than does SafeStat. 
For example, the new tool may consider information from crash reports, such as whether driver fatigue, a lack of driver experience, a medical reason, a mechanical failure, shifting loads, or spilled or dropped cargo were cited as causal or contributing factors. An FMCSA official told us that the agency is analyzing the relationship between these factors and crash rates to help it determine how the factors should be assessed and the relative weights to place on the factors. We believe that, compared with the expert-judgment-based approach that FMCSA used to select the weights for SafeStat’s evaluation areas, this analytical approach has the potential to better identify high-risk carriers. FMCSA’s Management of Its Compliance Reviews Promotes Thoroughness and Consistency FMCSA manages its compliance reviews in a fashion that meets our standards for internal control, thereby promoting thoroughness and consistency in the reviews. For example, it records its policies and procedures related to compliance reviews in an operations manual. FMCSA also provides investigators with classroom and on-the-job training on how to plan for and conduct compliance reviews. In addition, it employs an information system that documents the results of compliance reviews and allows FMCSA and state managers to review the compliance reviews for thoroughness, accuracy, and consistency. FMCSA uses several approaches to monitor its compliance review program, including an agencywide review in 2002 that led to several changes in the program. FMCSA Communicates Its Compliance Review Policies and Procedures through an Electronic Manual and Training FMCSA’s communication of its policies and procedures related to conducting compliance reviews meets our standards for internal control. 
These standards state that an organization’s policies and procedures should be recorded and communicated to management and others within the entity who need it and in a form (e.g., clearly written and provided as a paper or electronic manual) and within a time frame that enables them to carry out their responsibilities. FMCSA records and communicates its policies and procedures electronically through its “Field Operations Training Manual” (hereafter called the operations manual), which it provides to all federal and state investigators and their managers. The operations manual includes guidance on how to prepare for a compliance review. For example, it tells investigators that they must download and review a report that includes information on the carrier’s accidents, drivers, and inspections, and it explains how this information can help the investigator focus the compliance review. It also specifies the minimum number of driver and vehicle maintenance records to be examined and the minimum number of vehicle inspections to be conducted during a compliance review. FMCSA aims to update its operations manual twice a year. It posts updates to the operations manual that automatically download to investigators and managers when they connect to the Internet. In between these updates, FMCSA communicates policy changes by e-mail. In addition to the operations manual, FMCSA provides training to investigators on its policies and procedures related to compliance reviews. FMCSA policy requires that investigators successfully complete classroom training and examinations before they conduct a compliance review. The training covers the safety and hazardous materials regulations and software tools used during compliance reviews. According to FMCSA officials, investigators then receive on-the-job training, which allows them to accompany an experienced investigator during compliance reviews. 
This training lasts until managers decide that the trainees are ready to complete a compliance review on their own, typically after 3 to 6 months on the job. Investigators can also take additional classroom training on specialized topics throughout their careers. Furthermore, according to FMCSA officials, FMCSA’s division offices hold periodic and ad hoc meetings to train investigators about policy changes related to compliance reviews. In addition, in commenting on a draft of this report, FMCSA noted that it has an annual safety investigator certification process to ensure that only qualified personnel conduct compliance reviews. FMCSA Investigators Use an Information System to Document the Results of Compliance Reviews FMCSA’s documentation of compliance reviews meets our standards for internal control. These standards state that all transactions and other significant events should be clearly and promptly documented, and the documentation should be readily available for examination. This applies to the entire process or life cycle of a transaction or event from the initiation and authorization through its final classification in summary records. The standards also state that control activities, including reviews of information and system edit checks, should help to ensure that all transactions are completely and accurately recorded. FMCSA and state investigators use an information system to document the results of their compliance reviews, including information on crashes and any violations of the safety regulations that they identify. This documentation is readily available to FMCSA managers, who told us that they review it to help ensure completeness and accuracy. FMCSA officials told us that the information system also helps ensure thoroughness and consistency by prompting investigators to follow FMCSA’s policies and procedures, such as requirements to meet a minimum sample size. 
The information system also includes checks for consistency and reasonableness and prompts investigators when the information they enter appears to be inaccurate. An FMCSA manager told us that managers typically assess an investigator’s thoroughness by comparing the investigator’s rate of violations identified over the course of several compliance reviews with the average rate for investigators in their division office; a rate that is substantially below the average suggests insufficient thoroughness. Generally, FMCSA and state investigators and managers said they found the information system to be useful. FMCSA Monitors the Performance of Its Compliance Reviews and Has Taken Actions to Address Identified Issues FMCSA’s performance measurement and monitoring of compliance review activities meet our standards for internal control. These standards state that managers should compare actual performance to planned or expected results and analyze significant differences. Monitoring of internal controls should include policies and procedures for ensuring that the findings of audits and other reviews are promptly resolved. According to FMCSA and state managers and investigators, the managers review all compliance reviews in each division office and state to ensure thoroughness and consistency across investigators and across compliance reviews. The investigators we spoke with generally found these reviews to be helpful, and several investigators said that the reviews helped them learn policies and procedures and ultimately perform better compliance reviews. FMCSA and state managers told us that they also use monthly reports to track the performance of investigators using measures such as the numbers of reviews completed and the rates of violations found. 
Managers generally found that these reports provide useful information on investigators’ performance, and several managers said that they use the reports to help identify specific areas where an investigator needs additional coaching or training. However, several state managers said that monitoring of their investigators’ performance would be enhanced if they had access to FMCSA’s monthly report on their investigators; currently, states rely on their own custom reports. FMCSA told us that it plans to make its monthly report on state investigators available to state managers by October 2007. In addition to assessing the performance of individual investigators, FMCSA periodically assesses the performance of FMCSA division offices and state agencies, and it conducted an agencywide review of its compliance review program in 2002. According to officials at one of FMCSA’s service centers, the service centers lead triennial reviews of the compliance review and enforcement activities of each division office and its state partner. These reviews assess whether the division offices and state partners are following FMCSA policies and procedures, and they include an assessment of performance data for items such as number of compliance reviews conducted, rate of violations identified, and number of enforcement actions taken. The officials said that some reviews identify instances of deviations by division offices from FMCSA’s compliance review policies, but that only minor adjustments by the division offices are needed. The officials also said that the service centers compile best practices identified during the reviews and share these among the division offices and state partners. To ensure that concerns identified during the reviews are addressed, the officials said that the service centers monitor the quality of individual compliance reviews that lead to enforcement cases and the monthly reports on division office and state activities. 
The officials said that the service centers also check on responses to previously identified concerns during the triennial reviews. FMCSA’s agencywide review indicated that inconsistencies and bottlenecks in the compliance review process were reducing its efficiency and effectiveness, and FMCSA made several changes in 2003 aimed at improving compliance review policies, procedures, training, software, and supporting motor carrier data. Examples of problems identified and actions taken are as follows: FMCSA discouraged repeat visits to high-risk motor carriers that had received unsatisfactory ratings during their last compliance review within the past 12 months because the agency believed that not enough time had elapsed to show whether safety improvements had taken effect. FMCSA discouraged safety investigators from their earlier practice of favoring violations of drug and alcohol regulations over violations of hours-of-service regulations when they choose which violations to document for enforcement because crash data and FMCSA’s survey of its field staff suggest that compliance with hours-of-service regulations is more important for safety. FMCSA revised its operations manual to encourage FMCSA’s division offices to document the maximum number of areas of the regulations where major safety violations are discovered, rather than penalizing motor carriers for a few violations in a particular area at the expense of other areas. FMCSA’s review also concluded that most investigators were not following FMCSA’s policy requiring them to perform vehicle inspections as part of a compliance review if the carrier has not already received the required number of roadside vehicle inspections. FMCSA has since changed its policy so that inspecting a minimum number of vehicles is no longer a strict requirement—if an investigator is unable to inspect the minimum number of vehicles, he or she must explain why in the compliance review report. 
FMCSA told us that, as part of their review of individual compliance reviews, division office managers ensure that when compliance reviews have fewer than the minimum number of vehicle inspections, investigators provide adequate justification in their reports. We did not verify this statement because of time and resource constraints. We did, however, assess the extent to which compliance reviews included the minimum number of vehicle inspections. In fiscal year 2005, FMCSA and its state partners conducted 7,436 compliance reviews on carriers that had not already received the minimum number of vehicle inspections; of these, only 254 compliance reviews (3 percent) included the minimum number of vehicle inspections. FMCSA's review also found that investigators considered inspections to be the one aspect of compliance reviews, other than licensing and insurance verification, that had the smallest effect on carriers' safety performance. FMCSA's review team recommended that FMCSA establish new criteria for conducting vehicle inspections during compliance reviews and suggested that inspections could be made optional. In contrast, in 2002, the National Transportation Safety Board (the Safety Board) recommended that FMCSA require that all compliance reviews include vehicle inspections. The Safety Board based its recommendation on its belief that the vehicles that receive roadside inspections may be less likely to have violations than the vehicles that could be inspected during a compliance review. In July 2006, FMCSA responded that implementing this recommendation would be imprudent because it would divert attention from driver and other safety factors, and FMCSA's recent study of the causes of large truck crashes indicates the importance of driver factors, such as driving too fast for conditions and driver fatigue.
FMCSA has not changed its policy, but an FMCSA official told us that under the operational model that FMCSA has proposed for its Comprehensive Safety Analysis 2010 reform initiative, vehicle inspections during compliance reviews would be optional. FMCSA also told us that it is developing a policy that would allow investigators conducting compliance reviews to inspect vehicles that operate in intrastate commerce. FMCSA believes that this policy will increase the number of compliance reviews with the minimum number of vehicle inspections. Finally, FMCSA's review found that although investigators generally sampled the number of carrier records required by FMCSA's policies, the number of undersized samples of drivers' work hour logs was a cause for concern. The review said that a lack of clarity in FMCSA's requirements for how carriers must document drivers' hours was likely resulting in some carriers having too few records to sample. FMCSA is working to clarify its documentation requirements, but it has not set a date for completing this task.

Each of the Major Applicable Areas of the Safety Regulations Is Covered by Most Compliance Reviews

From fiscal year 2001 through fiscal year 2006, each of the nine major applicable areas of the safety regulations was covered by most of the approximately 76,000 compliance reviews conducted by FMCSA and the states. (See table 5.) An FMCSA official told us that not every compliance review is required to cover all nine areas and cited the following reasons:

Follow-up compliance reviews of carriers rated unsatisfactory or conditional are sometimes streamlined to cover only the area or areas of the regulations in which the carrier had violations.

Commercial driver's license standards and drug and alcohol use and testing regulations apply primarily to those carriers that operate one or more vehicles weighing over 26,000 pounds (gross vehicle weight rating), that haul hazardous material, or that transport more than 15 passengers.
Minimum insurance coverage regulations apply only to for-hire carriers and private carriers of hazardous materials; they do not apply to private passenger and nonhazardous materials carriers.

However, according to an FMCSA official, the area of these regulations that had the lowest rate of coverage—vehicle parts and accessories necessary for safe operation—is required for all compliance reviews except streamlined reviews that exclude this area. Vehicle inspections are supposed to be a key investigative technique for assessing compliance with this area, and the FMCSA official said that the lower rate of coverage for this area likely reflects the small number of vehicle inspections that FMCSA and the states conduct during compliance reviews. In addition to the safety regulations, compliance reviews of hazardous materials carriers, shippers, and cargo tank facilities must cover hazardous materials regulations. In fiscal years 2005 and 2006, FMCSA conducted about 6,000 compliance reviews of hazardous materials operators. Collectively, these compliance reviews covered between 40 percent and 80 percent of the various individual areas of these regulations. However, none of these compliance reviews was required to cover all areas of the hazardous materials regulations; the required areas vary with the type of operator. Because the categories that MCMIS uses to classify hazardous materials operators are different from the categories used to determine which areas of the regulations must be covered, we could not determine, for the different types of operators, the extent to which FMCSA's compliance reviews covered the required areas.

FMCSA Follows Up with Many Carriers with Serious Safety Violations but Does Not Assess Maximum Fines against All of the Serious Violators Required by Law

FMCSA placed many carriers rated unsatisfactory in fiscal year 2005 out of service and followed up with nearly all of the rest to determine whether they had improved.
In addition, FMCSA monitors carriers to identify those that are violating out-of-service orders. However, it does not take additional action against many of the violators of out-of-service orders that it identifies. Furthermore, FMCSA does not assess the maximum fines against all of the serious violators that we believe the law requires, partly because FMCSA does not distinguish between carriers with a pattern of serious safety violations and those that repeat a serious violation.

FMCSA Followed Up with Almost All Carriers That Received a Proposed Safety Rating of Unsatisfactory

FMCSA followed up with 1,193 of 1,196 carriers (99.7 percent) that received a proposed safety rating of unsatisfactory following a compliance review that was completed in fiscal year 2005. FMCSA's follow-up generally ensured that these carriers either made safety improvements that resulted in an upgraded final safety rating or—as required for carriers that also receive a final safety rating of unsatisfactory—were placed out of service. More specifically, FMCSA used the following approaches to follow up with these carriers:

Follow-up compliance review. Based on such reviews, FMCSA upgraded the final safety ratings of 663 carriers (329 to satisfactory, and 334 to conditional).

Assignment of a final rating of unsatisfactory and issuance of an out-of-service order. FMCSA assigned a final rating of unsatisfactory to 312 carriers and issued an out-of-service order to 309 (99 percent) of them. An FMCSA official told us that it did not issue an out-of-service order to 2 carriers because it could not locate them, and it did not issue an out-of-service order to another carrier because the carrier was still subject to an out-of-service order that FMCSA issued several years prior to the 2005 compliance review.

Review of evidence of corrective action. Carriers can request an upgraded safety rating by submitting evidence of corrective action to FMCSA.
Based on reviews of such evidence, FMCSA upgraded the final safety ratings of 217 carriers (23 to satisfactory, and 194 to conditional).

Administrative review. Carriers that believe FMCSA made an error in assigning their proposed safety rating may request the agency to conduct an administrative review. Based on the administrative review, FMCSA upgraded the final safety rating of 1 carrier to conditional.

FMCSA did not assign final safety ratings to the remaining 3 carriers. For 1 of these carriers, MCMIS indicates that the compliance review that resulted in the proposed rating of unsatisfactory did not identify any violations, even though carriers without violations are not supposed to receive a proposed unsatisfactory rating. For another of the carriers, MCMIS shows crashes, inspections, and a compliance review while also indicating that the carrier is inactive. FMCSA has been unable to locate the final carrier, and MCMIS indicates that the carrier is inactive.

Unless FMCSA upgrades a proposed unsatisfactory safety rating or grants a carrier an extension, the agency is required under its policy to assign the carrier a final rating of unsatisfactory and to issue it an out-of-service order on the 46th day after the date of FMCSA's notice of a proposed unsatisfactory rating for carriers of hazardous materials or passengers and on the 61st day for other types of carriers. Of the 309 out-of-service orders that FMCSA issued to carriers rated unsatisfactory following compliance reviews conducted in fiscal year 2005, 276 (89 percent) were issued on time, 28 (9 percent) were issued between 1 and 10 days late, and 5 (2 percent) were issued more than 10 days late. FMCSA also assigned final upgraded safety ratings within these time frames in 837 (95 percent) of the 881 cases in which it upgraded these ratings. FMCSA assigned 20 upgrades (2 percent) between 1 and 10 days late, and it assigned another 20 (2 percent) more than 10 days late.
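The deadline rule and the lateness bands described above are mechanical, so they can be sketched in code. The following Python fragment is purely illustrative (the function names and inputs are ours; it does not describe any FMCSA system): it computes the date an out-of-service order is due under the 46-day/61-day policy and classifies an issued order into the lateness bands used in our analysis.

```python
from datetime import date, timedelta

def out_of_service_due_date(notice_date: date, carrier_type: str) -> date:
    """Return the date by which FMCSA must issue an out-of-service order.

    Per the policy described above, the order is due on the 46th day after
    the notice of a proposed unsatisfactory rating for hazardous materials
    and passenger carriers, and on the 61st day for other types of carriers.
    """
    days = 46 if carrier_type in ("hazmat", "passenger") else 61
    return notice_date + timedelta(days=days)

def lateness_band(due: date, issued: date) -> str:
    """Classify an order as on time, 1-10 days late, or more than 10 days late."""
    days_late = (issued - due).days
    if days_late <= 0:
        return "on time"
    return "1-10 days late" if days_late <= 10 else "more than 10 days late"

# Example: a freight carrier noticed on March 1, 2005, is due an order 61 days later.
due = out_of_service_due_date(date(2005, 3, 1), "freight")
print(due)                                   # 2005-05-01
print(lateness_band(due, date(2005, 5, 6)))  # 1-10 days late
```

The same two thresholds drive both statistics reported above: the share of orders issued on time and the split of late orders into the 1-10 day and more-than-10-day bands.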
MCMIS did not have information on the timing of the other 4 upgrades. An FMCSA official told us that when an out-of-service order was issued more than 1 week late, the primary reason for the delay was that the responsible FMCSA division office had difficulty scheduling a follow-up compliance review and thus waited to issue the orders. The official said that other delays were caused by clerical errors; extended periods during which certain division offices operated without a person serving in the position with primary responsibility for ensuring that out-of-service orders are issued on time; a lack of complete compatibility between MCMIS and FMCSA's enforcement database; and, in one service center whose policy is to personally serve out-of-service orders to carriers, insufficient advance notification by the service center to its division offices that an order was to be served. The official noted that the last two issues have been addressed and said that FMCSA plans to more closely monitor the timeliness of the issuance of out-of-service orders in all of FMCSA's division offices.

FMCSA Monitors Carriers to Identify Those That Are Violating Out-of-Service Orders, but It Does Not Take Additional Action against Many of the Violators It Identifies

FMCSA uses two primary means to try to ensure that carriers that have been placed out of service do not continue to operate. First, FMCSA partners with states to help them suspend, revoke, or deny vehicle registration to carriers that have been placed out of service. FMCSA refers to these partnerships as the Performance and Registration Information Systems Management (PRISM) program. PRISM links FMCSA databases with state motor vehicle registration systems and roadside inspection personnel to help identify vehicles operated by carriers that have been issued out-of-service orders. As of January 2007, 45 states had been awarded PRISM grants, and 27 states were operating with PRISM capabilities.
FMCSA officials told us that some states have not applied for PRISM grants because they do not want to bear the costs that are not covered by the grants or they have not made the legislative changes required to implement PRISM. According to an FMCSA official, FMCSA has also begun working with PRISM states to enable them to receive automated notifications of carriers that have been placed out of service. PRISM can also identify carriers that attempt to register vehicles under a different carrier name, and FMCSA provided us with information on two out-of-service carriers that Connecticut, using PRISM, had caught trying to register vehicles by using a new company name. In addition, in commenting on a draft of this report, FMCSA said that during the first 6 months of fiscal year 2007, states that reported data to FMCSA indicated that at least 104 motor carriers had their state vehicle registrations suspended, revoked, or denied based on an FMCSA order to cease interstate operations. FMCSA and its state partners also monitor carriers for indicators—such as roadside inspections, moving violations, and crashes—that the carriers may be violating an out-of-service order. First, FMCSA recently began to require the state partners that receive Motor Carrier Safety Assistance Program grants to check during roadside inspections whether carriers are operating under revoked authority and to take enforcement action against any that are. Second, FMCSA visits some suspect carriers identified through its monitoring of crash and inspection data and examines their records to determine whether they did indeed violate the order. FMCSA told us that it is difficult to verify that such carriers were operating in violation of out-of-service orders because its resources do not allow it to visit each carrier or conduct roadside inspections on all vehicles, and we agree.
In fiscal years 2005 and 2006, 677 of 1,741 carriers (39 percent) that were subject to an out-of-service order had a roadside inspection or crash; FMCSA cited only 36 of these 677 carriers for violating the out-of-service order. An FMCSA official told us that some of these carriers, such as carriers that were operating intrastate or leasing vehicles to other carriers, may not have been violating the out-of-service order. The official said that the agency did not have enough resources to determine whether each of the carriers was violating the out-of-service order. He also said that FMCSA recently completed a pilot program in which the agency cited obvious violators, such as carriers that have a roadside inspection outside their home state. In commenting on a draft of this report, FMCSA said that it is developing new policies and procedures intended to establish a uniform national approach for follow-up, as well as additional enforcement action against motor carriers that have violated an out-of-service order.

The Safety Board Recently Concluded That FMCSA Is Making Adequate Progress in Ensuring That Carriers Do Not Operate under Revoked Authority

In 2006, the Safety Board assessed FMCSA's approach to ensuring that carriers whose operating authority has been revoked do not operate and concluded that it was inadequate. The Safety Board recommended that FMCSA establish a program to address this issue. In response to this recommendation, FMCSA noted that, because the number of carriers that have been placed out of service or have had their operating authority revoked has significantly increased in recent years, it is difficult to ensure that these carriers do not continue to operate. An FMCSA official attributed this difficulty to FMCSA's lack of resources to visit each carrier or conduct roadside inspections on all vehicles—the same reason FMCSA cites for not following up on all carriers that may be violating an out-of-service order.
Despite this difficulty, FMCSA responded that it (1) is linking its licensing and insurance database to its primary carrier database to improve the ability of roadside inspection personnel in all states and registration offices in PRISM states to identify carriers that have had their operating authority revoked and (2) has directed division office managers to assess fines when data accessed during roadside inspections indicate that carriers were operating under revoked authority. In March 2007, the Safety Board said that FMCSA was making acceptable progress on the recommendation, but expressed concern that some states will choose not to implement PRISM and that, based on the program's rate of implementation thus far, it will take too long to become fully operational in many other states. The Safety Board, therefore, encouraged FMCSA to implement PRISM more rapidly in all states. An FMCSA official told us that the agency is already making a concerted effort to encourage the 5 states without PRISM to adopt the program and the 18 PRISM states that do not yet have full PRISM capabilities to achieve them.

FMCSA Has Reduced the Number of Carriers Rated Conditional That Need Follow-up Compliance Reviews, but the Timeliness of These Reviews Is Difficult to Assess

FMCSA's policy requires the agency to conduct follow-up compliance reviews on all carriers rated conditional and, over the last several years, the agency has reduced the number of such carriers needing review. After the Department of Transportation Inspector General reported in 1999 that FMCSA allowed motor carriers with less than satisfactory ratings to continue operations for extended periods of time, FMCSA began requiring follow-up compliance reviews on all carriers rated conditional. In fiscal years 2005 and 2006, respectively, FMCSA conducted 2,537 and 2,692 follow-up reviews of carriers rated conditional or unsatisfactory, exceeding its annual goal of 2,500 follow-up reviews.
In addition, from fiscal year 2000 through fiscal year 2006, the number of carriers rated conditional that needed a follow-up review decreased from about 40,000 to about 30,000. While FMCSA has reduced the number of carriers rated conditional that need a follow-up review, it is difficult to assess the agency's timeliness in conducting these reviews because FMCSA's policy does not specify a time frame for following up on carriers with conditional safety ratings. The policy does discourage follow-up reviews within 12 months because FMCSA believes that more time is needed to show the effects of safety improvements. Yet the policy also gives FMCSA's division office administrators the discretion to determine whether a follow-up review should be conducted within 12 months. Almost half of all carriers that received a conditional rating from fiscal year 2002 through fiscal year 2004 received a follow-up review within 12 months; however, because of the policy's allowance for discretion, we could not determine how many, if any, of these follow-up reviews occurred too soon. (See table 6.) In addition, because FMCSA does not specify a deadline for conducting follow-up reviews, we could not determine whether any of the reviews occurred too late. Our analysis of the timing of follow-up reviews shows that from fiscal year 2002 through fiscal year 2004, 66 percent of the carriers that received a conditional rating received a follow-up review within 24 months, while 7 percent received a follow-up review more than 24 months after they received their conditional rating. Another 27 percent of the carriers still needed a review as of September 2006.

FMCSA Is Developing a New Safety Rating Methodology

In 1999, the Safety Board recommended that FMCSA lower its threshold for rating a carrier unsatisfactory to include carriers with an unsatisfactory rating in either the driver or vehicle factor of the rating scheme.
The Safety Board has classified this recommendation as one of its "most wanted" safety improvements since 2000. Although FMCSA has not yet decided whether it will implement this recommendation, it is developing a new rating methodology as part of its Comprehensive Safety Analysis 2010 reform initiative, and it plans to implement the methodology in 2010. As mentioned previously, the new methodology would base determinations of whether carriers are fit to continue operating on assessments made by the tool that FMCSA is developing to replace SafeStat, rather than on the results of compliance reviews. FMCSA believes that the new approach will enable the agency to assess the safety fitness of a larger share of the motor carrier industry. FMCSA is also considering determining the safety fitness of drivers and applying interventions to those it deems to need them. FMCSA believes that the increased focus that this would bring to the safety of drivers is important because the results of its recent study on the causes of large truck crashes indicate that drivers of large trucks and other vehicles involved in truck crashes are 10 times more likely to be the cause of the crash than other factors, such as weather, road conditions, and vehicle performance. In addition, FMCSA is considering eliminating the conditional rating and using only two ratings—"continue to operate" and "unfit." An FMCSA official told us that FMCSA may eliminate the conditional rating because the agency feels that some government agencies and members of the public that hire carriers misinterpret the current satisfactory rating as FMCSA's seal of approval.
The official said that the agency believes that the "continue to operate" rating, which would be given to all carriers that are allowed to continue to operate, is less likely to be viewed as a seal of approval than the satisfactory rating, which indicates a level of safety greater than that of the conditional rating that also allows carriers to continue operating. Depending on their safety performance, carriers or drivers allowed to continue operating could be subject to interventions, such as Web-based education, warning letters, requests for submission of documents, targeted roadside inspections, focused on-site reviews, comprehensive on-site reviews (similar to compliance reviews), and enforcement actions.

Policy Change Gives FMCSA Appropriate Discretion in Performing Statutorily Required Reviews of High-Risk Carriers

From August 2006 through February 2007, data from MCMIS indicate that FMCSA performed compliance reviews on 1,136 of the 2,220 carriers (51 percent) that were covered by FMCSA's mandatory compliance review policy. Under the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users, FMCSA is required to conduct compliance reviews on carriers rated in SafeStat categories A or B for 2 consecutive months. In response to this requirement, in June 2006, FMCSA implemented a policy requiring a compliance review within 6 months for any such carrier unless the carrier had received a compliance review within the previous 12 months. An FMCSA official told us that the agency did not have enough resources to conduct compliance reviews on all of the 2,220 carriers within the first 6-month period. In April 2007, FMCSA revised the policy because the agency believes that it required compliance reviews for some carriers that did not need them, leaving FMCSA with insufficient resources to conduct compliance reviews on other carriers that did need them.
The carriers that did not need compliance reviews were those that had already had a compliance review and had corrected identified violations, but these violations continued to adversely affect their SafeStat rating because SafeStat penalizes carriers for violations regardless of whether they have been corrected. This unnecessary targeting drained resources, leaving FMCSA without the means to conduct compliance reviews of carriers that had never received such a review, but, in FMCSA's view, should have received one because of current safety performance issues that led to their placement in SafeStat categories C, D, or E. The new policy requires compliance reviews within 6 months for carriers that have been in SafeStat categories A or B for 2 consecutive months and received their last compliance review 2 or more years ago (or have never received a compliance review). In addition, compliance reviews are recommended for carriers that have been in SafeStat categories A or B for 2 consecutive months and received their last compliance review more than 1 year ago but less than 2 years ago. FMCSA division offices can decide not to conduct a compliance review on such a carrier if (1) its SafeStat category changes to a category other than A or B or (2) its safety evaluation area values are based largely on prior compliance review violations that have been corrected or on accidents or inspections that occurred prior to the carrier's last compliance review. We believe that these changes are consistent with the act's requirement and give FMCSA appropriate discretion in allocating its compliance review resources.

FMCSA Has Substantially Reduced Its Backlog of Enforcement Cases

From October 2005 through October 2006, FMCSA reduced its backlog of enforcement cases that had been open for 6 months or more by about 70 percent (from 807 to 247).
As the Department of Transportation Inspector General has noted, a large backlog of enforcement cases negatively affects the integrity of the enforcement process for two reasons. First, because FMCSA considers only closed enforcement cases when targeting motor carriers for a compliance review, high-risk motor carriers are less likely to be selected if they have an open enforcement case. Second, because FMCSA assesses smaller fines against carriers with open cases than against those with closed cases, it may not assess appropriate fine amounts against carriers with multiple enforcement cases (the number of prior enforcement cases is one of the criteria that FMCSA uses to determine fine amounts). FMCSA's 2002 review of its compliance review program also found that delays in closing enforcement cases were negatively affecting the integrity of the agency's enforcement process. An FMCSA official told us that in response to this review, the agency assigned a second attorney to work on enforcement cases. In 2005, we recommended that FMCSA establish a goal specifying how much it would like to reduce the enforcement backlog and by what date. In March 2007, FMCSA implemented this recommendation by establishing goals to (1) close, by the end of 2007, its backlog of 63 enforcement cases in its division offices that had been open for 270 days or more and (2) close, by August 31, 2007, its backlog of 14 cases pending before its Assistant Administrator for Enforcement for more than 18 months, without adding other cases to this backlog.

FMCSA Does Not Assess Maximum Fines Against All of the Serious Violators That the Law Requires

FMCSA does not assess maximum fines against all of the serious violators that we believe the law requires.
The law requires FMCSA to assess the maximum allowable fine for each serious violation by a carrier that is found (1) to have a pattern of committing such violations (pattern requirement) or (2) to have previously committed the same or a related serious violation (repeat requirement). The legislative history of this provision provides evidence that FMCSA must assess maximum fines in these two distinct situations. However, FMCSA’s policy on maximum fines does not fully meet these requirements. FMCSA enforces both requirements using what is known as the “three strikes rule,” applying the maximum allowable fine when it finds that a motor carrier has violated the same regulation three times within 6 years. FMCSA officials said they interpret both parts of the act’s requirements to refer to repeat violations, and because they believe that having two distinct policies on repeat violations would confuse motor carriers, FMCSA has chosen to address both requirements with its single three strikes policy. According to FMCSA officials, FMCSA developed the three strikes policy in response to a provision in the Motor Carrier Safety Act of 1984, which permitted FMCSA’s predecessor to assess a fine of up to $1,000 per offense (capped at $10,000) if the agency determined that “a serious pattern of safety violations” existed or had occurred. FMCSA officials told us that when Congress in 1999 enacted the current “pattern of violations” language in the Motor Carrier Safety Improvement Act, the agency interpreted it to be similar to the previous language and to mean three strikes. FMCSA’s interpretation does not carry out the statutory mandate to impose maximum fines in two different cases. In contrast to FMCSA, we read the statute’s use of the distinct terms “a pattern of violations” and “previously committed the same or a related violation” as requiring FMCSA to implement two distinct policies. 
A basic principle of statutory interpretation is that distinct terms should be read as having distinct meanings. In this case, the statute not only uses different language to refer to the violations for which maximum fines must be imposed, but it also sets them out separately and makes either type of violation subject to the maximum penalties. Therefore, one carrier may commit a variety of serious violations and another carrier may commit a serious violation that is the same as, or substantially similar to, a previous serious violation; the language on its face requires FMCSA to assess the maximum allowable fine in both situations—for a pattern of violations, as well as a repeat offense. FMCSA could define a pattern of serious violations in numerous ways that are consistent with the act’s pattern requirement. Our application of eight potential definitions shows that the number of carriers that would be subject to maximum fines depends greatly on the definition. (See table 7.) For example, a definition calling for two or more serious violations in each of at least four different regulatory areas during a compliance review would have made 38 carriers subject to maximum fines in fiscal year 2006. In contrast, a definition calling for one or more serious violations in each of at least three different regulatory areas would have made 1,529 carriers subject to maximum fines during that time. We also interpret the statutory language for the repeat requirement as calling for a “two strikes” rule as opposed to FMCSA’s three strikes rule. FMCSA’s interpretation imposes the maximum fine only after a carrier has twice previously committed a serious violation. The language of the statute does not allow FMCSA’s interpretation; rather it requires FMCSA to assess the maximum allowable fine for each serious violation against a carrier that has previously committed the same serious violation. 
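The distinction we draw between the repeat requirement and the pattern requirement can be made concrete with a small sketch. The code below is our illustration only (the function names, the strike counting, and the sample pattern definition are our constructions, not FMCSA policy): the repeat check contrasts FMCSA's three strikes reading with the two strikes reading we believe the statute requires, and the pattern check applies one of the potential definitions discussed above, such as two or more serious violations in each of at least four different regulatory areas.

```python
from typing import Dict

def max_fine_for_repeat(prior_same_violations: int, rule: str) -> bool:
    """Return True if the current serious violation draws the maximum fine.

    prior_same_violations counts earlier instances of the same (or a
    related) serious violation within the look-back window (6 years under
    FMCSA's three strikes policy).
    """
    # FMCSA's reading: maximum fine on the third occurrence (two priors).
    # Our reading of the statute: maximum fine on the second occurrence (one prior).
    threshold = 2 if rule == "three strikes" else 1
    return prior_same_violations >= threshold

def meets_pattern(violations_by_area: Dict[str, int],
                  min_areas: int, min_per_area: int) -> bool:
    """Illustrative pattern test: at least min_per_area serious violations
    in each of at least min_areas different regulatory areas."""
    qualifying_areas = sum(1 for count in violations_by_area.values()
                           if count >= min_per_area)
    return qualifying_areas >= min_areas

# A carrier repeating a serious violation for the first time escapes the
# maximum fine under three strikes but not under two strikes:
print(max_fine_for_repeat(1, "three strikes"))  # False
print(max_fine_for_repeat(1, "two strikes"))    # True

# One potential pattern definition: 2+ serious violations in each of 4+ areas.
carrier = {"hours of service": 3, "driver qualification": 2,
           "vehicle maintenance": 2, "drug and alcohol": 2, "insurance": 1}
print(meets_pattern(carrier, min_areas=4, min_per_area=2))  # True
```

As the two checks are independent, a carrier could trigger the maximum fine under either one alone or under both at once, which is why the statute's two distinct terms call for two distinct policies.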
In addition, in 2006, the Department of Transportation Inspector General found that FMCSA's implementation of its three strikes rule had allowed many third strike violators to escape maximum fines. Specifically, of the 533 third strike violators of the hours-of-service or the drug and alcohol regulations between September 2000 and October 2004, only 33 (6 percent) were assessed the maximum fine. The Inspector General found that FMCSA did not consider many of these violators to be third strike violators because the agency, in keeping with its policy, did not count the carriers' violations as strikes unless a violation resulted in the assessment of a fine. FMCSA does not always notify carriers of serious violations without fines and, therefore, FMCSA believes that counting such violations as strikes would violate the due process rights of carriers. The Inspector General agreed and recommended that FMCSA assess a no-dollar-amount fine or use another appropriate mechanism to legally notify a motor carrier of the violation and of the policy that future violations will result in the maximum fine amount. An FMCSA official said that the agency is developing a policy designed to address this recommendation and plans to consider the related recommendation in this report as it develops the policy. FMCSA plans to implement the policy by June 2008. In fiscal years 2004 through 2006, there were more than four times as many carriers with a serious violation that constituted a second strike as there were carriers with a third strike. (See table 8.) For example, in fiscal year 2006, 1,320 carriers had a serious violation that constituted a second strike, whereas 280 carriers had a third strike. Carriers with a pattern of violations may also commit a second strike violation. For example, three of the seven carriers that had two or more serious violations in each of at least five different regulatory areas also had a second strike in fiscal year 2006.
Were FMCSA to make policy changes along the lines discussed here, we believe that the new policies should address how to deal with carriers whose serious violations are both part of a pattern and repetitions of the same or similar previous violations.

Conclusions

FMCSA's policy for prioritizing carriers for compliance reviews based on their SafeStat scores furthers motor carrier safety because it targets many carriers that pose high crash risks and thus has value for reducing both the number and severity of motor carrier crashes. However, the policy does not always target the carriers that have the highest crash risks. Modifications to the policy that we identified could improve FMCSA's targeting of high-risk carriers, thereby leading to compliance reviews that would have a greater potential to avoid crashes and their associated injuries and fatalities. Our June 2007 report found that a regression model approach would better identify carriers that pose high crash risks than does SafeStat, enabling FMCSA to better target its resources. We recommended in that report that FMCSA implement such an approach. However, if FMCSA does not implement this recommendation, the analysis presented in this report suggests an alternative approach that would also better target carriers that pose high crash risks. This approach would give high priority for compliance reviews to carriers with very poor scores (such as the worst 5 percent) in the accident safety evaluation area. While FMCSA follows up with most carriers with serious safety violations, it has not established a time frame for carriers rated conditional to receive a follow-up compliance review. As a result, many carriers with conditional ratings can continue to operate for 2 years or more without a follow-up compliance review, posing safety risks to themselves and the public. Finally, we found that FMCSA assesses maximum fines against carriers that twice repeat a serious violation.
However, because of FMCSA's interpretation of the statutory requirement to assess maximum fines against serious violators, many carriers that continue to accrue serious violations do not have the maximum fine assessed against them. Therefore, neither the statutory requirement nor FMCSA's enforcement is as effective as possible in deterring unsafe practices and, as a result, additional accidents could occur.

Recommendations for Executive Action

In our June 2007 report on the effectiveness of SafeStat, we recommended that FMCSA use a regression model approach to identify carriers that pose high crash risks rather than its expert judgment approach. Should the Secretary of Transportation decide not to implement that recommendation, we recommend that the Secretary of Transportation direct the FMCSA Administrator to take the following action: to improve FMCSA's targeting of carriers that pose high crash risks, modify FMCSA's policy for prioritizing compliance reviews so that carriers with very poor scores (such as the worst 5 percent) in the accident safety evaluation area will be selected for compliance reviews, regardless of their scores in the other areas. We also recommend that the Secretary of Transportation direct the FMCSA Administrator to take the following two actions: (1) to help ensure that carriers rated conditional make safety improvements in a timely manner, establish a reasonable time frame within which FMCSA should conduct follow-up compliance reviews on such carriers and (2) to meet the Motor Carrier Safety Improvement Act's requirement to assess maximum fines and improve the deterrent effect of these fines, revise FMCSA's related policy to include (a) a definition for a pattern of violations that is distinct from the repetition of the same or related violations and (b) a two strikes rule rather than a three strikes rule.

Agency Comments

We provided a draft of this report to the Department of Transportation for its review and comment.
The department did not offer overall comments on the draft report. It said that it would assess the efficacy of the first recommendation, but it did not comment on the other recommendations. It offered several technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to congressional committees and subcommittees with responsibilities for commercial motor vehicle safety issues; the Secretary of Transportation; the Administrator, FMCSA; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix V.

Appendix I: Other Assessments of SafeStat's Ability to Identify High-Risk Motor Carriers

Several studies by the Volpe National Transportation Systems Center (Volpe), the Department of Transportation Office of Inspector General, the Oak Ridge National Laboratory (Oak Ridge), and others have assessed the predictive capability of the Motor Carrier Safety Status Measurement System (SafeStat) model and the data used by that model. In general, studies that assessed the predictive power of SafeStat offered suggestions to increase that power, and studies that assessed data quality found weaknesses in the data that the Federal Motor Carrier Safety Administration (FMCSA) relies upon.
Assessments of SafeStat's Predictive Capability

The studies we reviewed compared SafeStat with random selection to determine which does a better job of selecting carriers that pose high crash risks, and they assessed whether statistical approaches could improve that selection and whether carrier financial positions or driver convictions are associated with crash risk.

Predictive Capability of SafeStat Compared with Random Selection

In its 2004 and 1998 studies of the SafeStat model, Volpe analyzed retrospective data to determine how many crashes carriers in SafeStat categories A and B experienced over the following 18 months. The 2004 study used the carrier rankings from an application of the SafeStat model on March 21, 2001. Volpe then compared the SafeStat carrier safety ratings with state-reported data on crashes that occurred between March 22, 2001, and September 21, 2002, to assess the model's performance. For each carrier, Volpe calculated a total number of crashes, weighted for time and severity, and then estimated a rate per 1,000 vehicles for comparing carriers in SafeStat categories A and B with the carriers in other SafeStat categories. The 1998 Volpe study used a similar methodology. Each study used a constrained subset of carriers rather than the full list contained in the Motor Carrier Management Information System (MCMIS). Both studies found that the crash rate for the carriers in SafeStat categories A and B was substantially higher than for the other carriers during the 18 months after the particular SafeStat run. On the basis of this finding, Volpe concluded that the SafeStat model worked. In response to a recommendation by the Department of Transportation Office of Inspector General, FMCSA contracted with Oak Ridge to independently review the SafeStat model. Oak Ridge assessed the SafeStat model's performance using the same data set (for March 21, 2001) that Volpe had used in its 2004 evaluation.
Perhaps not surprisingly, Oak Ridge obtained a similar result for the weighted crash rate of carriers in SafeStat categories A and B over the 18-month follow-up period. Like the Volpe studies, the Oak Ridge study was constrained because it was based on a limited data set rather than the entire MCMIS data set.

Application of Regression Models to Safety Data

While SafeStat does better than simple random selection in identifying carriers that pose high crash risks, other methods can also be used. Oak Ridge extended Volpe's analysis by applying regression models to identify carriers that pose high crash risks. Specifically, Oak Ridge applied a Poisson regression model and a negative binomial model using the safety evaluation area scores as independent variables to a weighted count of crashes that occurred in the 30 months before March 21, 2001. In addition, Oak Ridge applied the empirical Bayes method to the negative binomial regression model and assessed the variability of carrier crash counts by estimating confidence intervals. Oak Ridge found that the negative binomial model worked well at identifying carriers that pose high crash risks. However, the data set Oak Ridge had to use did not include any carriers with one reported crash in the 30 months before March 21, 2001. Because the data included only carriers with zero or two or more reported crashes, the distribution of crashes was truncated. Since the Oak Ridge regression model analysis did not cover carriers with safety evaluation area data and one reported crash, the findings from the study are limited in their generalizability. However, other modeling analyses of crashes at intersections and on road segments have also found that the negative binomial regression model works well. In addition, our analysis, using a more recent and more comprehensive data set, supports the finding that the negative binomial regression model performs better than the SafeStat model.
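The negative binomial model arises from treating each carrier's crash rate as a gamma-distributed draw around a fleet-wide mean, and the empirical Bayes method uses that same structure to shrink each observed rate toward the mean in proportion to exposure. A minimal sketch of the shrinkage step (the prior rate and prior weight below are illustrative values chosen for the example, not estimates from FMCSA data):

```python
def eb_crash_rate(crashes, vehicles, prior_rate, prior_weight):
    """Posterior-mean crash rate under a gamma prior. The prior acts like
    `prior_weight` pseudo-vehicles observed at `prior_rate`, so small
    fleets are pulled strongly toward the fleet-wide rate while large
    fleets keep an estimate close to their observed rate."""
    return (crashes + prior_rate * prior_weight) / (vehicles + prior_weight)

# Two hypothetical carriers with the identical observed rate of
# 0.10 crashes per vehicle, but very different fleet sizes.
small = eb_crash_rate(crashes=2, vehicles=20, prior_rate=0.02, prior_weight=100)
large = eb_crash_rate(crashes=200, vehicles=2000, prior_rate=0.02, prior_weight=100)
print(f"small fleet: {small:.3f}, large fleet: {large:.3f}")
```

Because only large fleets retain high posterior estimates (and large fleets also accumulate the most total crashes), rankings built on empirical Bayes estimates tend to surface the largest carriers first, which helps explain why the method identifies carriers with many crashes rather than those with the highest crash rates.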
The studies carried out by other authors advocate the use of the empirical Bayes method in conjunction with a negative binomial regression model to estimate crash risk. Oak Ridge also applied this model to identify motor carriers that pose high crash risks. We applied this method to the 2004 SafeStat data and found that the empirical Bayes method best identified the carriers with the largest number of crashes in the 18 months after June 25, 2004. However, the crash rate per 1,000 vehicles was much lower than that for carriers in SafeStat categories A and B. We analyzed this result further and found that although the empirical Bayes method best identifies future crashes, it is not as effective as the SafeStat model or the negative binomial regression model in identifying carriers with the highest future crash rates. The carriers identified with the empirical Bayes method were invariably the largest carriers. This result is not especially useful from a regulatory perspective. Companies operating a large number of vehicles often have more crashes over a period of time than smaller companies. However, this does not mean that the larger company is necessarily violating more safety regulations or is less safe than the smaller company. For this reason, we do not advocate the use of the empirical Bayes method in conjunction with the negative binomial regression model as long as the method used to calculate the safety evaluation area values remains unchanged. If changes are made in how carriers are rated for safety, this method may in the future offer more promise than the negative binomial regression model alone.

Appendix II: FMCSA's Crash Data Used to Compare Methods for Identifying High-Risk Carriers

The quality of crash data is a long-standing problem that hinders FMCSA's ability to accurately identify carriers that pose high crash risks.
Despite the problems of late-reported crashes and incomplete and inaccurate data on crashes, the data were of sufficient quality for our use, which was to assess whether different approaches to categorizing carriers could lead to better identification of carriers that subsequently have high crash rates. Our reasoning is based on our use of the same data set to compare the crash risk of carriers in SafeStat categories A or B and of carriers that score among the worst 25, 10, or 5 percent in an individual safety evaluation area. Limitations in the data would apply equally to both results. FMCSA has undertaken a number of efforts to improve crash data quality.

Late Reporting Had a Small Effect on SafeStat's Ability to Identify High-Risk Carriers

FMCSA's guidance requires states to report all crashes to MCMIS within 90 days of their occurrence. Late reporting can cause SafeStat to miss some of the carriers that should have received a SafeStat score. Moreover, since SafeStat scoring involves a relative ranking of carriers, a carrier may receive a SafeStat score and have to undergo a compliance review because crash data for a higher risk carrier were reported late and not included in the calculation. Late reporting affected SafeStat's ability to identify all high-risk carriers to a small degree—missing about 6 percent—for the period that we studied. Late reporting of crashes by states also affected the safety rankings of more than 600 carriers, both positively and negatively. When SafeStat analyzed the 2004 data, which did not include the late-reported crashes, it identified 4,989 motor carriers as highest risk, meaning they received a category A or B ranking. With the addition of late-reported crashes, 481 carriers moved into the highest risk category, and 182 carriers dropped out of the highest risk category, resulting in a net increase of 299 carriers (6 percent) in the highest risk category.
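The net effect reported above follows directly from the movement counts:

```python
# Figures from our comparison of the 2004 SafeStat run with and
# without late-reported crashes.
baseline_high_risk = 4989       # carriers in categories A or B before late crashes
moved_in, moved_out = 481, 182  # carriers entering/leaving the A-B group

net_change = moved_in - moved_out
pct = 100 * net_change / baseline_high_risk
print(net_change, round(pct))  # 299 carriers, about 6 percent
```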
After the late-reported crashes were added, 481 carriers that originally received a category C, D, E, F, or G SafeStat rating received an A or B rating. These carriers would not originally have been given a high priority for a compliance review because the SafeStat calculation did not take into account all of their crashes. On the other hand, a number of carriers would have fared better if the late-reported crashes had been included in their score. Specifically, 182 carriers—or fewer than 4 percent of those ranked—fell from the A or B category into the C, D, E, F, or G category once the late-reported crashes were included. These carriers would have avoided a compliance review if all crashes had been reported on time. Overall, however, the vast majority of carriers (96 percent) were not negatively affected by late reporting. The timeliness of crash reporting seems to be improving. The median number of days it took states to report crashes to MCMIS dropped from 225 days in calendar year 2001 to 57 days in 2005 (the latest data available at the time of our analysis). In addition, the percentage of crashes reported by states within 90 days of occurrence has jumped from 32 percent in fiscal year 2000 to 89 percent in fiscal year 2006. (See fig. 3.)

Incomplete Data from States Limit SafeStat's Identification of All Carriers That Pose High Crash Risks

FMCSA uses a motor carrier identification number, which is unique to each carrier, as the primary means of linking inspections, crashes, and compliance reviews to motor carriers. Approximately 184,000 (75 percent) of the 244,000 crashes reported to MCMIS between December 2001 and June 2004 involved interstate carriers. Of these 184,000 crashes, nearly 24,000 (13 percent) were missing this identification number. As a result, FMCSA could not match these crashes to motor carriers or use data from them in SafeStat.
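The matching step described above amounts to partitioning crash records by carrier identification number. A minimal sketch (the record layout and ID values are hypothetical, not MCMIS's actual schema):

```python
def partition_crashes(crashes, known_carrier_ids):
    """Split crash records into those that can feed a SafeStat-style
    score (ID present and matched to the carrier census), those missing
    an ID entirely, and those whose ID matches no known carrier."""
    matched, missing_id, unmatched = [], [], []
    for rec in crashes:
        cid = rec.get("carrier_id")
        if cid is None:
            missing_id.append(rec)
        elif cid in known_carrier_ids:
            matched.append(rec)
        else:
            unmatched.append(rec)
    return matched, missing_id, unmatched

# Hypothetical crash records.
records = [
    {"carrier_id": "1001", "severity": "tow"},
    {"carrier_id": None, "severity": "injury"},
    {"carrier_id": "9999", "severity": "fatal"},
]
m, miss, unm = partition_crashes(records, known_carrier_ids={"1001", "1002"})
print(len(m), len(miss), len(unm))  # 1 1 1
```

Only the matched partition contributes to a carrier's score, which is how missing or unmatchable identification numbers silently remove crashes from the risk calculation.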
In addition, the carrier identification number could not be matched to one listed in MCMIS for 15,000 (8 percent) other crashes that involved interstate carriers. Missing data or data that cannot be matched to carriers for nearly one quarter of the crashes for the period of our review potentially have a large impact on a motor carrier’s SafeStat score because SafeStat treats crashes as the most important source of information for assessing motor carrier crash risk. Theoretically, information exists to match crash records to motor carriers by other means, but such matching would require too much manual work to be practicable. We were not able to quantify the actual effect of the missing data and the data that could not be matched for MCMIS overall. To do so, we would have had to gather crash records at the state level—an effort that was impractical. For the same reason, we cannot quantify the effects of FMCSA’s efforts to improve the completeness of the data (discussed later). However, the University of Michigan Transportation Research Institute issued a series of reports analyzing the completeness of the data submitted to MCMIS by the states. One of the goals of the research was to determine the states’ crash reporting rates. Reporting rates varied greatly among the 14 states studied, ranging from 9 percent in New Mexico in 2003 to 83 percent in Missouri in 2005. It is not possible to draw wide-scale conclusions about whether states’ reporting rates are improving over time because only 2 of the states—Missouri and Ohio—were studied in multiple years. However, the reporting rates of these 2 states did improve. Missouri experienced a large improvement in its reporting rate, with 61 percent of eligible crashes reported in 2001, and 83 percent reported in 2005. Ohio’s improvement was more modest, increasing from 39 percent in 2000 to 43 percent in 2005. 
The University of Michigan Transportation Research Institute's reports also identified a number of factors that may affect states' reporting rates. One of the main factors affecting reporting rates is the reporting officer's understanding of crash reporting requirements. The studies note that reporting rates are generally lower for less serious crashes and for crashes involving smaller vehicles, which may indicate that there is some confusion about which crashes are reportable. Some states, such as Missouri, aid the officer by explicitly listing reporting criteria on the police accident reporting form, while other states, such as Washington, leave it up to the officer to complete certain sections of the form if the crash is reportable, but the form includes no guidance on reportable crashes. Other states, such as North Carolina and Illinois, have taken this task out of officers' hands and include all reporting elements on the police accident reporting form. Reportable crashes are then selected centrally by the state, and the required data are transmitted to MCMIS.

Inaccurate Data Potentially Limit SafeStat's Ability to Identify Carriers That Pose High Crash Risks

Inaccurate data, such as information on nonqualifying crashes reported to FMCSA, potentially have a large impact on a motor carrier's SafeStat score because SafeStat treats crashes as the most important source of information for assessing motor carrier crash risk. The University of Michigan Transportation Research Institute's reports on state crash reporting show that, among the 14 states studied, incorrect reporting of crash data is widespread. This inaccuracy limits SafeStat's ability to identify carriers that pose high crash risks. In the most recent reports, the researchers found that, in 2005, Ohio incorrectly reported 1,094 (22 percent) of the 5,037 cases it reported, and Louisiana incorrectly reported 137 (5 percent) of the 2,699 cases it reported.
In Ohio, most of the incorrectly reported crashes did not qualify because they did not meet the crash severity threshold. In contrast, most of the incorrectly reported crashes in Louisiana did not qualify because they did not involve vehicles eligible for reporting. Other states studied by the institute had similar problems with reporting crashes that did not meet the criteria for reporting to MCMIS. The addition of these nonqualifying crashes could cause some carriers to exceed the minimum number of crashes required to receive a SafeStat rating and result in SafeStat’s mistakenly identifying carriers as posing high crash risks. Because each report focuses on reporting in one state in a particular year, it is not possible to identify the number of cases that have been incorrectly reported nationwide and, therefore, it is not possible to determine the impact of inaccurate reporting on SafeStat’s calculations. We also found examples of crashes that are reported to MCMIS but cannot be used by SafeStat because of data errors. Specifically, we found that the carrier’s identification number cannot be matched to an identification number in MCMIS in 8 percent of reported crashes. FMCSA cannot link these crashes to specific carriers without an accurate identification number and, therefore, cannot use these crashes in the SafeStat model to identify carriers that pose high crash risks. As noted in the University of Michigan Transportation Research Institute’s reports, states may be unintentionally submitting incorrect data to MCMIS because of difficulties in determining whether a crash meets the reporting criteria. For example, in Missouri, pickups are systematically excluded from MCMIS crash reporting, which may cause the state to miss some reportable crashes. This may occur because, in recent years, a number of pickups have been equipped with rear axles that may increase their weight above the reporting threshold and make crashes involving them eligible for reporting. 
There is no way for the state to determine which crashes involving pickups qualify for reporting without examining the characteristics of each vehicle. In this case, the number of omissions is likely to be relatively small, but this example demonstrates the difficulty states may face when identifying reportable crashes. In addition, in some states, the information contained in the police accident report may not be sufficient for the state to determine if a crash meets the accident severity threshold. It is generally straightforward to determine whether a fatality occurred as a result of a crash, but it may be difficult to determine whether an injured person was transported for medical attention or a vehicle was towed because of disabling damage. In some states, such as Illinois and New Jersey, an officer can indicate on the form if a vehicle was towed by checking a box, but there is no way to identify whether the reason for towing was disabling damage. It is likely that such uncertainty results in overreporting because some vehicles may be towed for other reasons.

FMCSA Has Undertaken Efforts to Improve Crash Data Quality

FMCSA has taken steps to improve the quality of crash data reporting. As we noted in November 2005, FMCSA has undertaken two major efforts to help states improve the quality of crash data. One program, the Safety Data Improvement Program, has provided funding to states to implement or expand activities designed to improve the completeness, timeliness, accuracy, and consistency of their data. FMCSA has also used a data quality rating system to rate and display ratings for the quality of states' crash and inspection data. Because these ratings are public, this system creates an incentive for states to improve their data quality.
To further improve these programs, FMCSA has awarded additional grants to several states and implemented our recommendations to (1) establish specific guidelines for assessing states' requests for funding to support data improvement in order to better assess and prioritize the requests and (2) increase the usefulness of its state data quality map as a tool for monitoring and measuring commercial motor vehicle crash data by ensuring that the map adequately reflects the condition of the states' commercial motor vehicle crash data. In February 2004, FMCSA implemented DataQs, an online system that allows for challenging and correcting erroneous crash or inspection data. Users of this system include motor carriers, the general public, state officials, and FMCSA. In addition, in response to a recent recommendation by the Department of Transportation Inspector General, FMCSA is planning to conduct a number of evaluations of the effectiveness of a training course on crash data collection that it will be providing to states by September 2008. While the quality of crash data is sufficient for use in assessing whether different approaches to categorizing carriers could lead to better identification of carriers that subsequently have high crash rates and has started to improve, commercial motor vehicle crash data continue to have some problems with timeliness, completeness, and accuracy. These problems have been well-documented in several studies, and FMCSA is taking steps to address the problems through studies of each state's crash reporting system and grants to states to fund improvements. As a result, we are not making any recommendations in this area.

Appendix III: Review of Studies on Predictors of Motor Carrier and Driver Crash Risk

Several studies have identified relationships between certain characteristics of motor carriers and drivers and their crash risks. These characteristics include carrier financial performance, carrier size, driver pay, and driver age.
Relationship of Motor Carrier Characteristics and Crash Risk

The studies we reviewed assessed whether financial performance or other characteristics of carriers, such as size, are associated with crash risk.

Carrier Financial Performance

Our 1991 study developed a model that linked changes in economic conditions to declining safety performance in the trucking industry. The study hypothesized that a decline in economic performance among motor carriers leads to a decline in safety performance in one or more of the following ways: (1) a lowering of the average quality of driver performance; (2) downward wage pressures encouraging driver noncompliance with safety regulations; (3) less management emphasis on safety practices; (4) deferred truck maintenance and replacement; and/or (5) the introduction of larger, heavier, multitrailer trucks. Using data on 537 carriers drawn from the Department of Transportation and the Interstate Commerce Commission, we found that seven financial ratios show promise as predictors of truck firms' safety. For five of the seven financial variables we examined, firms in the weakest financial position had the highest subsequent accident rates. For example, weakness in any of three measures of profitability—return on equity, operating ratio, and net profit margin—was associated with subsequent safety problems as measured by accident rates. On behalf of FMCSA, a study carried out by Corsi, Barnard, and Gibney in 2002 examined how data on carriers' financial performance correlate with a carrier's safety rating following a compliance review. The authors selected motor carriers from MCMIS in December 2000 with complete data for the accident, driver, vehicle, and safety management safety evaluation areas. Using these data, the authors then matched a total of 700 carriers to company financial statements in the annual report database of the American Trucking Associations.
The authors found that carriers that received satisfactory ratings following a compliance review performed better on two financial measures—operating ratio and return on assets—than carriers that received lower ratings. Two practical considerations limit the applicability of the findings from these two studies to SafeStat. First, the studies' samples of 537 and 700 carriers, respectively, are not representative of the motor carriers that FMCSA oversees. For example, our sample included only the largest for-hire interstate carriers because these were the only carriers that were required to report financial information to the federal government. The carriers selected for the Corsi and others' study were also not representative because a very small percentage of the carriers evaluated by the SafeStat model in June 2004 had scores for all four safety evaluation areas. About 2 percent had a score for the safety management safety evaluation area, and of these, not all had complete data for the other three safety evaluation areas. Second, FMCSA does not receive annual financial statements from carriers and, according to an FMCSA official, it is unlikely that the agency could obtain the authority it would need to require financial statements from all carriers. In addition, because the relationships identified by our study are based on data and economic conditions that are almost 20 years old, the relationships would need to be reanalyzed within current conditions to determine whether they still exist. As part of its Comprehensive Safety Analysis 2010 reform initiative, discussed earlier in this report, FMCSA decided not to use financial data to help assess the safety risk of firms because of the limited availability of these data.

Other Carrier Characteristics

A 1994 study by Moses and Savage found that crash rates decline as firm size increases; the largest 10 percent of firms have an accident rate that is one-third the rate of the smallest 10 percent of firms.
Our 1991 study found that the smallest carriers, as a group, had an accident rate that exceeded the rate for all firms by 20 percent. The study by Moses and Savage also found that (1) private fleets that serve the needs of their parent companies, such as manufacturers and retailers, have accident rates that are about 20 percent lower than the rates of carriers that offer for-hire trucking; (2) carriers of hazardous materials have accident rates that are 22 percent higher than the rates of carriers that do not transport these goods; and (3) general freight carriers have accident rates that are 10 percent higher than the rates of other freight carriers. We believe that Moses and Savage's findings are reasonable given their study's design, data, and methodology, but because the findings are based on data and economic conditions that are about 15 to 20 years old, current data would need to be reanalyzed within current conditions to determine whether the findings are still valid. As mentioned above, our study shares this limitation and is further limited by an unrepresentative sample of motor carriers. An FMCSA official told us that the agency would not want to rely directly on data on the size of the carrier to assess safety risk because the agency believes that its data on indicators of carrier size, such as revenue, number of drivers, and number of power units, are not of sufficient quality. Similarly, the agency would not want to distinguish between private and for-hire carriers or between carriers that carry different types of freight because it does not believe that its data are sufficiently reliable.

Relationship of Driver Characteristics and Crash Risk

The studies we reviewed assessed whether driver characteristics—including convictions for traffic violations, age and experience, pay, or frequency of job changes—are associated with crash risk.
Driver Convictions for Traffic Violations

A series of studies by Lantz and others examined the effect of incorporating conviction data from the state-run commercial driver license data system into the calculation of carriers' safety management safety evaluation area scores. The studies found that the resulting driver conviction measure is weakly correlated with the crash-per-vehicle rate. However, the studies did not calculate new safety management safety evaluation area scores with the proposed driver conviction measure and then use the updated measure to estimate new SafeStat scores for carriers. FMCSA uses data on driver convictions to help target its roadside inspections, and it is considering using such data in the tool it is developing to replace SafeStat as part of its Comprehensive Safety Analysis 2010 reform initiative.

Driver Age and Experience

Campbell's 1991 study found that the risk of a fatal crash is significantly higher for younger truck drivers than for older drivers. Campbell used data from surveys of fatal crashes and large truck travel to calculate fatal involvement rates per mile driven by driver age. Overall, fatal involvement rates remained high through age 26. The fatal crash rates for drivers under 19 years of age were four times higher than the rate for all drivers, and the rates for drivers aged 19 to 20 years were six times higher. Our 1991 study found that younger, less experienced drivers posed greater-than-average accident risks. In particular, compared with drivers 40 to 49 years of age, drivers 21 to 39 years of age have 28 percent greater odds of accident involvement. Compared with those for drivers over 50 years of age, the odds of the youngest group of drivers having an accident are about 60 percent greater. The differences in accident risks between drivers with 0 to 13 years of experience, 14 to 20 years of experience, and 21 or more years of experience followed a very similar pattern.
Although Campbell’s study provides only limited information about the quality of the data it used, we believe that its findings are reasonable given the study’s design and methodology, which relied on multiple kinds of analyses to substantiate a higher risk for younger drivers of large trucks. We believe that our 1991 findings are reasonable given our study’s design, data, and methodology. An FMCSA official told us that, at this time, the agency would not be able to use driver age in SafeStat or in a similar model because the agency does not have access to data on all drivers. FMCSA said that it is exploring the possibility of gaining broader access to data on drivers, which are maintained by the states, so that the agency can use the data to help assess the safety of drivers as part of its Comprehensive Safety Analysis 2010 reform initiative. Driver Pay Belzer and others’ 2002 study found that drivers with lower pay had higher crash rates. Because economic theory predicts that low pay levels are associated with poorer performing workers, the study hypothesized that low pay levels for drivers are associated with unsafe driving. The study found that for every 10 percent more in average driver compensation (mileage rate, unpaid time, anticipated annual raise, safety bonus, health insurance, and life insurance), the carriers experienced 9.2 percent fewer crashes. We believe that this finding is reasonable given the study’s design, data, and methodology. An FMCSA official told us that the agency could not use data on driver pay in SafeStat or in a similar model because such data are available only from studies or surveys that do not cover the full population of drivers. Frequency of Job Changes Staplin and others’ 2003 study for FMCSA found that drivers that average three or more jobs with different carriers each year have crash rates that are more than twice as high as drivers that average fewer job changes. 
Although the study authors acknowledge several limitations in the data used in the study, we believe that the data and the analysis approach were sufficiently reliable to support the study’s finding of a relationship between the number of jobs and the number of crashes. An FMCSA official told us that, as for data on driver pay, the agency could not use data on the frequency of job changes in SafeStat or in a similar model because such data are available only from studies or surveys that do not cover the full population of drivers. Appendix IV: Scope and Methodology To determine the extent to which FMCSA’s policy for prioritizing compliance reviews targets carriers that subsequently have high crash rates, we analyzed data from FMCSA’s MCMIS on the June 2004 SafeStat assessment of carriers and on the assessed carriers’ crashes in the 18 months following the SafeStat assessment. We selected June 2004 because this date enabled us to examine MCMIS data on actual crashes that occurred in the 18-month period from July 2004 through December 2005. We defined various groups of carriers for analysis, such as those in each SafeStat category, those to which FMCSA gave high priority (i.e., those in categories A or B), and those in the worst 5 or 10 percent of carriers in a particular safety evaluation area without being in the worst 25 percent of carriers in any other area. We then calculated the aggregate crash rate in the 18 months following the SafeStat assessment for each of these groups by dividing the total crashes experienced by all the carriers in a group during that time period by the total number of vehicles operated by those carriers, as reported on their motor carrier census form. We then compared crash rates among the various groups to determine whether there were any groups with substantially higher aggregate crash rates than the carriers in SafeStat categories A or B.
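The grouping and rate computation described above can be sketched as follows; the carrier records, category labels, and counts are invented for illustration and are not actual MCMIS data.

```python
# Hypothetical carrier records: (carrier_id, group label,
# crashes July 2004-December 2005, vehicles per census form).
carriers = [
    ("C1", "A", 12, 40),
    ("C2", "B", 3, 25),
    ("C3", "worst5_accident_only", 9, 15),
    ("C4", "other", 1, 60),
]

def aggregate_crash_rate(group):
    """Total crashes divided by total vehicles for all carriers in a group."""
    total_crashes = sum(c[2] for c in group)
    total_vehicles = sum(c[3] for c in group)
    return total_crashes / total_vehicles

# High-priority carriers are those in SafeStat categories A or B.
high_priority = [c for c in carriers if c[1] in ("A", "B")]
accident_only = [c for c in carriers if c[1] == "worst5_accident_only"]

print(aggregate_crash_rate(high_priority))  # crashes per vehicle, category A/B
print(aggregate_crash_rate(accident_only))  # crashes per vehicle, accident-only group
```

The comparison among groups then reduces to comparing these per-group ratios.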
We also talked to FMCSA officials about how FMCSA developed SafeStat, their views on other evaluations of SafeStat, and FMCSA’s plans to replace SafeStat with a new tool. In assessing how FMCSA ensures that its compliance reviews are completed thoroughly and consistently, we reviewed our report on internal control standards for the federal government. We identified key standards in the areas that we believe are critical to maintaining the thoroughness and consistency of compliance reviews, namely the recording and communication of policy to management and others, the clear documentation of processes, and the monitoring and reviewing of activities and findings. We assessed the extent to which FMCSA’s management of its compliance reviews is consistent with these internal control standards by interviewing FMCSA and state managers and investigators. We interviewed investigators who conduct compliance reviews and their managers in FMCSA’s headquarters office, as well as in 7 of FMCSA’s 52 field division offices that work with states, two of its four regional service centers that support division offices, and three state offices that partner with 3 of the FMCSA division offices in which we did our work. We also interviewed two safety investigators in each of the same 7 division offices. The division offices and states that we reviewed—California, Georgia, Illinois, New York, Ohio, Pennsylvania, and Texas—received 30 percent of all of the grant funds that FMCSA awarded to the states in fiscal year 2005 (the latest year for which data were available) through its primary grant program, the Motor Carrier Safety Assistance Program. Because we chose the seven states judgmentally (representing the largest grantees), we cannot project our findings nationwide. Reviewing a larger number of grantees would not have been practical because of resource constraints.
We gathered information on the recording and communication of policy from discussions with FMCSA officials, documents, and system software, including the electronic operations manual. We obtained information about how FMCSA documents the findings of compliance reviews through discussions with FMCSA officials and reviews of FMCSA documents. We obtained information on how FMCSA monitors and reviews the performance of its compliance reviews through discussions with FMCSA officials and reviews of FMCSA documents, including the 2002 report of FMCSA’s Compliance Review Work Group. FMCSA provided us with data on the number of vehicles inspected during compliance reviews and on the percentage of applicable areas of the regulations covered by compliance reviews since 2001. In assessing the extent to which FMCSA follows up with carriers with serious violations, we reviewed regulations directing how FMCSA should follow up and track these violators and analyzed data to determine if FMCSA had met these policies. In particular, we examined FMCSA policies and discussed with FMCSA officials the agency’s policy to perform a follow-up compliance review on carriers in SafeStat categories A and B, its policy to place carriers rated unsatisfactory out of service, its policy to perform a follow-up compliance review on carriers with a conditional rating, and its reduction of its enforcement backlog. Additional analysis was performed—as of the end of each fiscal year from 2001 through 2006—using data from FMCSA’s MCMIS to determine the total number of carriers with a conditional rating that had not received a follow-up compliance review. We also used MCMIS to determine how many carriers with a conditional rating received a follow-up compliance review and how soon after the original compliance review the second review occurred.
To assess FMCSA’s implementation of the statutory requirement to assess the maximum fine against any carrier with either a pattern of violations or previously committed violations, we compared FMCSA’s policy with the language of the act and held discussions with FMCSA officials. In addition, we assessed the number of carriers that would have been assessed the maximum fine under differing definitions of a pattern of violations. We also reviewed the report of the Department of Transportation Inspector General on the implementation of the policy and documents pertaining to FMCSA’s response to the Inspector General’s report. In determining the reliability of FMCSA’s data on compliance reviews, violations, and enforcement cases, we performed electronic testing for obvious errors in accuracy and completeness. As part of a recent evaluation of FMCSA’s enforcement programs, we interviewed officials from FMCSA’s data analysis office who are knowledgeable about the same data sources. We determined that the data were sufficiently reliable for the types of analysis we present in this report. To assess the extent to which the timeliness, completeness, and accuracy of MCMIS and state-reported crash data affect SafeStat’s performance, we carried out a series of analyses with the MCMIS master crash file and the MCMIS census file and surveyed the literature to assess other studies’ findings on the quality of MCMIS data. To assess timeliness, we first measured how many days on average it was taking each state to report crashes to FMCSA by year for calendar years 2000 through 2005. We also recalculated SafeStat scores from June 25, 2004, to include crashes that had occurred more than 90 days previously but had not yet been reported to FMCSA by that date. We compared the number and rankings of carriers from the original SafeStat results with those obtained with the addition of late-reported crashes.
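The per-state timeliness measure described above amounts to averaging the lag between each crash date and its report date, grouped by state and year; a minimal sketch, using invented crash records rather than actual MCMIS data:

```python
from datetime import date
from collections import defaultdict

# Hypothetical records: (state, crash date, date reported to FMCSA).
crash_reports = [
    ("OH", date(2004, 3, 1), date(2004, 3, 20)),
    ("OH", date(2004, 6, 5), date(2004, 9, 1)),
    ("TX", date(2005, 1, 10), date(2005, 1, 25)),
]

# Collect reporting delays, in days, keyed by (state, crash year).
delays = defaultdict(list)
for state, crashed, reported in crash_reports:
    delays[(state, crashed.year)].append((reported - crashed).days)

# Average delay per state per year.
avg_delay = {key: sum(d) / len(d) for key, d in delays.items()}
print(avg_delay)
```

The same delay field could then flag crashes reported more than 90 days after they occurred, the group added back when the June 25, 2004, scores were recalculated.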
In addition, we reviewed the University of Michigan Transportation Research Institute’s studies of state crash reporting to MCMIS to identify the impact of late reporting in individual states on MCMIS data quality. To assess completeness, we attempted to match all crash records in the MCMIS master crash file for crashes occurring between December 2001 and June 2004 to the list of motor carriers in the MCMIS census file. We used a variety of matching techniques to try to match the crash records without a carrier Department of Transportation number to carriers listed in the MCMIS census file. In addition, we reviewed the University of Michigan Transportation Research Institute’s studies of state crash reporting to MCMIS to identify the impact of incomplete crash reporting in individual states on MCMIS data quality. To assess accuracy, we reviewed an audit by the Inspector General that tested the accuracy of electronic data by comparing records selected in the sample to source paper documents. In addition, we reviewed the University of Michigan Transportation Research Institute’s studies of state crash reporting to MCMIS to identify the impact of incorrectly reported crashes in individual states on MCMIS data quality. We determined that the data reported to FMCSA for use in SafeStat—while not as timely, complete, or accurate as they could be—were of sufficient quality for our use. Through our analyses, we found that the data identify many carriers that pose high crash risks and are, therefore, useful for the purposes of this report. To understand what other researchers have found about how well SafeStat identifies motor carriers that pose high crash risks, we identified studies through a general literature review and by asking stakeholders and study authors to identify high-quality studies.
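One simple name-normalization approach to matching crash records that lack a carrier Department of Transportation number can be sketched as follows; the census entries, carrier names, and DOT numbers are hypothetical, and the actual analysis used a variety of techniques not detailed here.

```python
import re

# Hypothetical census file: carrier name -> DOT number.
census = {"SMITH TRUCKING INC": 123456, "ACME FREIGHT LLC": 789012}

def normalize(name):
    # Uppercase, drop punctuation, collapse runs of whitespace.
    cleaned = re.sub(r"[^A-Z0-9 ]", "", name.upper())
    return re.sub(r"\s+", " ", cleaned).strip()

normalized_census = {normalize(k): v for k, v in census.items()}

def match_crash(carrier_name):
    """Return the census DOT number for a crash record's carrier name, if any."""
    return normalized_census.get(normalize(carrier_name))

print(match_crash("Smith Trucking, Inc."))  # matches despite punctuation and case
print(match_crash("Unknown Carrier Co."))   # no match
```

Records that still fail to match after such normalization are the ones that count against completeness.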
The studies included in our review were (1) the 2004 study of SafeStat done by Oak Ridge National Laboratory, (2) the SafeStat effectiveness studies done by the Department of Transportation Inspector General and Volpe Institute, (3) the University of Michigan Transportation Research Institute’s studies of state crash reporting to FMCSA, and (4) the 2006 audit by the Department of Transportation Inspector General of data for new entrant carriers. We assessed the methodology used in each study and identified which findings are supported by rigorous analysis. We accomplished this analysis by relying on information presented in the studies and, where possible, discussing the studies with the authors. When the studies’ methodologies and analyses appeared reasonable, we used the findings from those studies in our analysis of SafeStat. We discussed with FMCSA and industry and safety stakeholders the SafeStat methodology issues and data quality issues raised by these studies. We also discussed the aptness of the respective methodological approaches with FMCSA. Finally, we reviewed FMCSA documentation on how SafeStat is constructed and assessments of SafeStat conducted by FMCSA. To identify studies on predictors of motor carrier and driver crash risk, we conducted a general literature review. We shared this preliminary list of studies with the members of the Transportation Research Board’s Committee on Truck and Bus Safety and asked them to identify additional relevant studies. We selected those studies that assessed a relationship between one or more motor carrier or driver characteristics and crash risk. Based on information presented in the selected studies, we assessed the methodology used in each study and report only those findings that were based on sound methodology and analysis.
Appendix V: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the individual named above, James Ratzenberger, Assistant Director; Carl Barden; Elizabeth Eisenstadt; David Goldstein; Ryan Gottschall; Laurie Hamilton; Eric Hudson; Bert Japikse; and Gregory Wilmoth made key contributions to this report.
The Federal Motor Carrier Safety Administration (FMCSA) has the primary federal responsibility for reducing crashes involving large trucks and buses. FMCSA uses its "SafeStat" tool to target carriers for reviews of their compliance with the agency's safety regulations based on their crash rates and safety violations. As requested, this study reports on (1) the extent to which FMCSA's policy for prioritizing compliance reviews targets carriers with a high risk of crashes, (2) how FMCSA ensures compliance reviews are thorough and consistent, and (3) the extent to which FMCSA follows up with carriers with serious safety violations. To complete this work, GAO reviewed FMCSA's regulations, policies, and safety data and contacted FMCSA officials in headquarters and nine field offices. By and large, FMCSA does a good job of identifying carriers that pose high crash risks for subsequent compliance reviews, ensuring the thoroughness and consistency of those reviews, and following up with high-risk carriers. FMCSA's policy for prioritizing compliance reviews targets many high-risk carriers but not other higher risk ones. Carriers must score among the worst 25 percent of carriers in at least two of SafeStat's four evaluation areas (accident, driver, vehicle, and safety management) to receive high priority for a compliance review. Using data from 2004, GAO found that 492 carriers that performed very poorly in only the accident evaluation area (i.e., those carriers that scored among the worst 5 percent of carriers in this area) subsequently had an aggregate crash rate that was more than twice as high as that of the 4,989 carriers to which FMCSA gave high priority. FMCSA told GAO that the agency plans to assess whether giving high priority to carriers that perform very poorly in only the accident evaluation area would be an effective use of its resources. 
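The prioritization rule described above can be sketched as a simple threshold test; the evaluation-area percentile ranks below are hypothetical (higher meaning worse), not actual SafeStat scores.

```python
# SafeStat's four safety evaluation areas.
SEAS = ("accident", "driver", "vehicle", "safety_management")

def high_priority(percentile_ranks):
    """True if the carrier scores among the worst 25 percent in two or more areas."""
    return sum(percentile_ranks[sea] >= 75 for sea in SEAS) >= 2

# A carrier in the worst 5 percent in the accident area only does not
# meet the two-area threshold, illustrating the gap GAO identified.
carrier = {"accident": 96, "driver": 40, "vehicle": 10, "safety_management": 30}
print(high_priority(carrier))  # False
```

Under this rule, a carrier with an extreme accident score but moderate scores elsewhere is not flagged, which is why the accident-only group could go unprioritized despite its higher subsequent crash rate.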
FMCSA promotes thoroughness and consistency in its compliance reviews through its management processes, which meet GAO's standards for internal controls. For example, FMCSA uses an electronic manual to record and communicate its compliance review policies and procedures and teaches proper compliance review procedures through both classroom and on-the-job training. Furthermore, its investigators use an information system to document their compliance reviews, and its managers review these data, helping to ensure thoroughness and consistency between investigators. For the most part, FMCSA and state investigators cover the nine major applicable areas of the safety regulations (e.g., driver qualifications and vehicle condition) in 95 percent or more of compliance reviews, demonstrating thoroughness and consistency. FMCSA follows up with many carriers with serious safety violations, but it does not assess maximum fines against all of the serious violators that GAO believes the law requires. FMCSA followed up with more than 99 percent of the 1,196 carriers that received proposed unsatisfactory safety ratings from compliance reviews completed in fiscal year 2005, finding that 881 of these carriers made safety improvements and placing 309 others out of service. However, GAO found that FMCSA (1) does not assess maximum fines against carriers with a pattern of varied serious violations as GAO believes the law requires and (2) assesses maximum fines against carriers for the third instance of a violation, whereas GAO reads the statute as requiring FMCSA to assess the maximum fine for the second.
Background ACF’s Children’s Bureau administers and oversees federal funding to states for child welfare services under Titles IV-B and IV-E of the Social Security Act, and states and counties provide these child welfare services, either directly or indirectly through contracts with private agencies. Among other activities, ACF staff are responsible for developing appropriate policies and procedures for states to follow to obtain and use federal child welfare funds and conduct administrative reviews of states’ case files to ensure that children served by the state meet statutory eligibility requirements. In 2001, ACF launched a new outcome-oriented process, known as the Child and Family Services Reviews (CFSR), to improve its existing monitoring efforts, which had once been criticized for focusing exclusively on states’ compliance with regulations rather than on their performance over a full range of child welfare services. Passage of the 1997 Adoption and Safe Families Act (ASFA) helped spur the creation of the CFSR by emphasizing the outcomes of safety, permanency, and well-being for children. Subsequently, ACF consulted with state officials, child welfare experts, and other interested parties, and conducted pilot CFSR reviews in 14 states. In January 2000, ACF released a notice of proposed rule making and published final CFSR regulations. In March 2001, ACF conducted the first of its state reviews in Delaware. By March of 2004, ACF had completed an on-site review in all 50 states, the District of Columbia, and Puerto Rico. Although ACF plans to initiate a second round of reviews, an official start date for this process has not yet been determined. As figure 1 indicates, the CFSR is a four-phase process that involves state staff as well as central and regional ACF officials. This process begins with a statewide assessment in the first phase. 
The assessment of state performance continues in the second phase, most commonly known as the on-site review, when ACF sends a team of reviewers to three sites in the state for one week. A list of all the outcomes and systemic factors and their associated items appears in appendix II. In assessing performance, ACF relies, in part, on its own data systems, known as the National Child Abuse and Neglect Data System (NCANDS) and the Adoption and Foster Care Analysis and Reporting System (AFCARS), which were designed prior to CFSR implementation to capture, report, and analyze the child welfare information collected by the states. Today, these systems provide the national data necessary for ACF to calculate national standards for key CFSR performance items with which all states’ data will be compared. After the on-site review, ACF prepares a final report for the state—identifying areas needing improvement, as well as the outcomes and systemic factors for which the state was determined not to be in substantial conformity—and provides the state with an estimated financial penalty. As a result, the state must develop a 2-year PIP with action steps to address its noted deficiencies and performance benchmarks to measure progress. Once ACF approves the PIP, states are required to submit quarterly progress reports, which ACF uses to monitor improvement. Pursuant to CFSR regulations, federal child welfare funds can be withheld if states do not show adequate progress as a result of PIP implementation, but these penalties are suspended during the 2-year implementation term. As of January 2004, no financial penalties had been applied, but according to data on the 41 states for which final CFSR reports have been released through December 2003, potential penalties range from $91,492 for North Dakota to $18,244,430 for California.
ACF staff in HHS’s 10 regional offices provide technical assistance to states through all phases of the CFSR process, and they are also responsible for reviewing state planning documents required by Title IV-B, assisting with state data system reviews, and assessing states’ use of IV-E funds. In addition to these efforts, ACF has established cooperative agreements with 10 national resource centers to help states implement federal legislation intended to ensure the safety, permanency, and well-being of children and families. ACF sets the resource centers’ areas of focus, and although each center has a different area of expertise, such as organizational improvement or information technology, all of them conduct needs assessments, sponsor national conference calls with states, collaborate with other resource centers and agencies, and provide on-site training and technical assistance to states. Members of the 108th Congress introduced a proposal to provide federal incentive payments directly to states that demonstrate significant improvements to their child welfare systems. At the time of publication, the House of Representatives was considering H.R. 1534, the Child Protective Services Improvement Act, which contains provisions to award grants to states with approved PIPs and additional bonuses to states that have made considerable progress in achieving their PIP goals for the previous year. The CFSR Is a Valuable yet Substantial Undertaking, but Data Enhancements Could Improve Its Reliability ACF and many state officials perceive the CFSR as a valuable process—highlighting many areas needing improvement—and a substantial undertaking, but some state officials and child welfare experts told us that data enhancements could improve its reliability. ACF staff in 8 of the 10 regions considered the CFSR a helpful tool to improve outcomes for children.
Further, 26 of the 36 states responding to a relevant question in our survey commented that they generally or completely agreed with the results of the final CFSR report, even though none of the 41 states with final CFSR reports released through 2003 has achieved substantial conformity on all 14 outcomes and systemic factors. In addition, both ACF and the states have dedicated substantial financial and staff resources to the process. However, several state officials and child welfare experts we interviewed questioned the accuracy of the data used to compile state profiles and establish the national standards. While ACF officials in the central office contend that stakeholder interviews and case reviews complement the data profiles, many state officials and experts reported that additional data from the statewide assessment could bolster the evaluation of state performance. The CFSR Is a Valuable Process for ACF and the States ACF and state officials support the objectives of the review, especially in focusing on children’s outcomes and strengthening relationships with stakeholders, and told us they perceive the process as valuable. ACF staff in 8 of the 10 regions considered the CFSR a helpful tool to improve outcomes for children. Also, ACF officials from 8 regional offices noted that the CFSRs were more intensive and more comprehensive than the other types of reviews they had conducted in the past, creating a valuable tool for regional officials to monitor states’ performance. In addition, state officials from every state we visited told us that the CFSR process helped to improve collaboration with community stakeholders. For example, a court official in New York said that the CFSR acted as a catalyst for improved relations between the state agency and the courts, and he believes that this has contributed to more timely child abuse and neglect hearings.
Additionally, a state official in Florida said that the CFSR stimulated discussions among agency staff about measuring and improving outcomes, particularly the data needed to examine outcomes and the resources needed to improve them. Furthermore, state staff from 4 of the 5 states we visited told us the CFSR led to increased public and legislative attention to critical issues in child welfare. For example, caseworkers in Wyoming told us that without the CFSR they doubted whether their state agency’s administration would have focused on needed reforms. They added that the agency used the CFSR findings to request legislative support for the hiring of additional caseworkers. In addition to affirming the value associated with improved stakeholder relations, the ACF officials we talked to and many state officials reported that the process has been helpful in highlighting the outcomes and systemic factors, as well as other key performance items that need improvement. According to our survey, 26 of the 36 states that commented on the findings of the final CFSR report indicated that they generally or completely agreed with the findings, even though performance across the states was low in certain key outcomes and performance items. For example, not one of the 41 states with final reports released through 2003 was found to be in substantial conformity with either the outcome measure that assesses the permanency and stability of children’s living situations or with the outcome measure that assesses whether states had enhanced families’ capacity to provide for their children’s needs. Moreover, across all 14 outcomes and systemic factors, state performance ranged from achieving substantial conformity on as few as 2 outcomes and systemic factors to as many as 9. As figure 2 illustrates, the majority of states were determined to be in substantial conformity with half or fewer of the 14 outcomes and systemic factors assessed. 
States’ performance on the outcomes related to safety, permanency, and well-being—as well as the systemic factors—is determined by their performance on an array of items, such as establishing permanency goals, ensuring worker visits with parents and children, and providing accessible services to families. The CFSR showed that many states need improvement in the same areas, and table 1 illustrates the 10 items most frequently rated as needing improvement across all 41 states reviewed through 2003. ACF and the States Report That Reviews Have Been a Substantial Undertaking Given the value that ACF and the states have assigned to the CFSR process, both have spent substantial financial resources and staff time to prepare for and implement the reviews. In fiscal years 2001-2003, when most reviews were scheduled, ACF budgeted an additional $300,000 annually for CFSR-related travel. In fiscal year 2004, when fewer reviews were scheduled, ACF budgeted about $225,000. To further enhance its capacity to conduct the reviews, and to obtain additional logistical and technical assistance, ACF spent approximately $6.6 million annually to hire contractors. Specifically, ACF has let three contracts to assist with CFSR-related activities, including training reviewers to conduct the on-site reviews, tracking final reports and PIP documents, and, as of 2002, writing the CFSR final reports. Additionally, ACF hired 22 new staff to build central and regional office capacity and dedicated 4 full-time staff and 2 state government staff temporarily on assignment with ACF to assist with the CFSR process. To build a core group of staff with CFSR expertise, ACF created the National Review Team, composed of central and regional office staff with additional training in and experience with the review process. 
In addition, to provide more technical assistance to the states, ACF reordered the priorities of the national resource centers to focus their efforts primarily on helping states with the review process. Like ACF, states also spent financial resources on the review. While some states did not track CFSR expenses—such as staff salaries, training, or administrative costs—of the 25 states that reported such information in our survey, the median expense to date was $60,550, although states reported spending as little as $1,092 and as much as $1,000,000 on the CFSR process. For example, California officials we visited told us that they gave each of the state’s three review sites $50,000 to cover the salary of one coordinator to manage the logistics of the on-site review. Although ACF officials told us that states can use Title IV-E funds to pay for some of their CFSR expenses, only one state official addressed the use of these funds in our survey, commenting that it was not until after the on-site review occurred that the state learned these funds could have been used to offset states’ expenses. Furthermore, 18 of 48 states responding to a relevant question in our survey commented that insufficient funding was a challenge—to a great or very great extent—in preparing for the statewide assessment, and 11 of 40 states responded that insufficient funding was similarly challenging in preparing for the on-site review. Officials in other states reported that because available financial resources were insufficient, they obtained non-financial, in-kind donations to cover expenses associated with the on-site review. For example, a local site coordinator in Oklahoma informed us that while the state budgeted $7 per reviewer each day of the on-site review for food, refreshments, and other expenses, she needed to utilize a variety of resources to supplement the state’s budget with donations of supplies and food from local companies and agency staff. 
States reported that they also dedicated staff time to prepare for the statewide assessment and to conduct the on-site review, which sometimes had a negative impact on some staff members’ regular duties. According to our survey, 45 states reported dedicating up to 200 full-time staff equivalents (FTEs), with an average of 47 FTEs, to the statewide assessment process. Similarly, 42 states responded that they dedicated between 3 and 130 FTEs, with an average of 45 FTEs, to the on-site review process. To prepare for their own reviews, officials in all 5 states we visited told us that they sent staff for a week to participate as reviewers in other states. Additionally, local site coordinators in 4 of the 5 states we visited reported that planning for the CFSR on-site review was a full-time job involving multiple staff over a period of months. An official in Florida also told us that the extensive preparation for the CFSR dominated the work of the quality assurance unit, which was about 20 people at that time. In addition to preparing for the review, staff in all 5 of the states we visited, who served as reviewers in their own states, reported that to meet their responsibility as case reviewers they worked more than 12-hour days during the week of the on-site review. For some caseworkers, dedicating time to the CFSR meant that they were unable to manage their typical workload or were limited in their ability to do so. For example, Wyoming caseworkers whose case files were selected for the on-site review told us that they needed to be available to answer reviewers’ questions all day every day during the on-site review, which they said prevented them from conducting necessary child abuse investigations or home visits. Child welfare-related stakeholders—such as judges, lawyers, and foster parents—also contributed time to the CFSR, but some states found it took additional staff resources to initiate and maintain stakeholder involvement over time.
According to our survey, 46 states reported that an average of about 277 stakeholders were involved with the statewide assessment, and 42 states responded that an average of about 126 stakeholders participated in the on-site review. However, state officials told us that it was difficult to recruit and maintain stakeholder involvement because long time lapses sometimes occurred between CFSR phases. For example, more than a year can elapse between completion of the statewide assessment and the initiation of PIP development—both key points at which stakeholder participation is critical. Nonetheless, all 5 of the states we visited tried to counter this obstacle by conducting phone and in-person interviews, holding focus groups, or giving presentations in local communities to inform and recruit stakeholders to be involved in the CFSR process. As a result, some stakeholders were involved throughout the process. For example, a tribal representative in Oklahoma assisted with the development of the statewide assessment, was interviewed during the on-site review, and provided guidance on the needs of Indian children during PIP development.

States and Child Welfare Experts Report That Several Data Improvements Could Enhance CFSR Reliability

State officials in all 5 states, as well as child welfare experts, reported on several data improvements that could enhance the reliability of CFSR findings. In particular, they highlighted inaccuracies with the AFCARS and NCANDS data that are used for establishing the national standards and creating the statewide data profiles, which are then used to determine if states are in substantial conformity. These concerns echoed the findings of a prior GAO study on the reliability of these data sources, which found that states are concerned that the national standards used in the CFSR are based on unreliable information and should not be used as a basis for comparison and potential financial penalty.
Several of the state officials we visited and surveyed also questioned the reliability of data given the variation in states' data-reporting practices, which they believe may ultimately affect the validity of the measures and may place some states at a disadvantage. Furthermore, many states needed to resubmit their statewide data after finding errors in the data profiles ACF would have used to measure compliance with the national standards. According to our national survey, of the 37 states that reported on resubmitting data for the statewide data profile, 23 needed to resubmit their statewide data at least once, with 1 state needing to resubmit as many as five times to accurately reflect revised data. Four states reported in our survey that they did not resubmit their data profiles because they did not know they had this option or they did not have enough time to resubmit before the review. In addition to expressing these data concerns, child welfare experts as well as officials in all of the states we visited commented that existing practices that benefit children might conflict with actions needed to attain the national standards. Specifically, one child welfare expert noted that if a state focused on preventing child abuse and neglect while children were still living at home, and removed only the most at-risk children, the state may perform poorly on the CFSR reunification measure—which assesses the timeliness of children's return to their homes from foster care—because the children it did remove would be the hardest to place or the most difficult to reunite. Additionally, officials in New York said that they recently implemented an initiative to facilitate adoptions.
Because these efforts focus on the backlog of children who have been in foster care for several years, New York officials predict that their performance on the national standard for adoption will be lower since many of the children in the initiative have already been in foster care for more than 2 years. These state officials and experts also commented that they believe the on-site review case sample of 50 cases is too small to provide an accurate picture of statewide performance, although ACF officials stated that the case sampling is supplemented with additional information. Of the 40 states that commented in our survey on the adequacy of the case sample size for the on-site review, 17 states reported that 50 cases were very or generally inadequate to represent their caseload. Oklahoma officials we visited also commented that they felt the case sample size was too small, especially since they annually assess more than 800 of their own cases—using a procedure that models the federal CFSR—and obtain higher performance results than the state received on its CFSR. Furthermore, because not every case in a state's sample is applicable to each item measured in the on-site review, we found that sometimes as few as one or two cases were being used to evaluate a state's performance on an item. Specifically, an ACF contractor said that several of the CFSR items measuring children's permanency and stability in their living conditions commonly have very few applicable cases. For example, Wyoming had only two on-site review cases applicable for the item measuring the length of time to achieve a permanency goal of adoption, but for one of these cases, reviewers determined that appropriate and timely efforts had not been taken to achieve a finalized adoption within 24 months, resulting in the item being assigned a rating of area needing improvement.
While ACF officials acknowledged the limitations of the sample size, they contended that the case sampling is augmented by stakeholder interviews for all items and by applicable statewide data for the five CFSR items with corresponding national standards, therefore providing sufficient evidence for determining states' conformity. All of the states we visited experienced discrepant findings between the aggregate data from the statewide assessment and the information obtained from the on-site review, which complicated ACF's determination of states' performance. Each of the 5 states we visited had at least one, and sometimes as many as three, discrepancies between its performance on the national standards and the results of the case review. We also found that in these 5 states, ACF had assigned an overall rating of area needing improvement in 10 of the 11 instances where discrepancies occurred. ACF officials acknowledged the challenge of resolving data discrepancies, noting that such complications can delay the release of the final report and increase or decrease the number of items that states must address in their PIPs. While states have the opportunity to resolve discrepancies by submitting additional information explaining the discrepancy or by requesting an additional case review, only one state to date has decided to pursue the additional case review. For example, one of the states we visited acknowledged that it had not opted to pursue the supplemental case review because doing so would place additional strain on its already limited resources. Several state officials and experts told us that additional data from the statewide assessments—or other data sources compiled by the states—could bolster the evaluation of states' performance, but they found this information to be missing or insufficiently used in the final reports.
According to our survey, of the 34 states that commented on the adequacy of the final report's inclusion of the statewide assessment, 10 states reported that too little emphasis was placed on the statewide assessment. Specifically, 1 state reported that the final report would have presented a more balanced picture of the state's child welfare system if the statewide assessment had been used as a basis to compare and clarify the on-site review findings. Further, child welfare experts and state officials from California and New York—who are using data sources other than AFCARS and NCANDS, such as longitudinal data that track children's placements over time—told us that the inclusion of this more detailed information would provide a more accurate picture of states' performance nationwide. North Carolina officials also reported in our survey that they tried to submit additional longitudinal data—which they use internally to conduct statewide evaluations of performance—in their statewide assessment, but HHS would not accept the alternative data for use in evaluating the state's outcomes. An HHS official told us that alternative data are used to assess state performance only in situations where a state does not have NCANDS data, since states are not mandated to have these systems. Given their concerns with the data used in the review process, state officials in 4 of the 5 states believed that the threshold for substantial conformity was difficult to meet. One state official we visited acknowledged that she believed child welfare agencies should be pursuing high standards to improve performance, but she questioned the level at which ACF established the thresholds. While an ACF official told us that different thresholds for the national standards had been considered, ACF policy makers ultimately concluded that a more rigorous threshold would be used. ACF officials recognize that they have set a high standard.
However, they believe it is attainable and supportive of their overall approach to move states to the standard through continuous improvement. In preparation for the next round of CFSRs, ACF officials have formed a Consultation Work Group of ACF staff, child welfare administrators, data experts, and researchers who will propose recommendations on the CFSR measures and processes. The group began meeting before this report was published, but no proposals were yet available.

Program Improvement Planning Under Way, but Uncertainties Challenge Plan Development, Implementation, and Monitoring

Forty-one states are engaged in program improvement planning, but many uncertainties, such as those related to federal guidance and monitoring and the availability of state resources, have affected the development, implementation, and funding of the PIPs. State PIPs include strategies such as revising or developing policies, training caseworkers, and engaging stakeholders, and ACF has issued regulations and guidance to help states develop and implement their plans. Nevertheless, states reported uncertainty about how to develop their PIPs and commented on the challenges they faced during implementation. For example, officials from 2 of the states we visited told us that ACF had rejected their PIPs before final approval, even though these officials said their plans matched the level of detail in previously approved PIPs that regional officials had provided as examples. Further, at least 9 states responding to our survey indicated that insufficient time, funding, and staff, as well as high caseloads, were the greatest challenges to PIP implementation. As states progress in PIP implementation, some ACF officials expressed a need for more guidance on how to monitor state accomplishments, and both ACF and state officials were uncertain about how the estimated financial penalties would be applied if states fail to achieve the goals described in their plans.
State Plans Include a Variety of Strategies to Address Identified Weaknesses

State plans include a variety of strategies to address weaknesses identified in the CFSR review process. However, because most states had not completed PIP implementation by the time of our analysis, the extent to which states have improved outcomes for children has not been determined. While state PIPs varied in their detail, design, and scope, according to our analysis of 31 available PIPs, these state plans have focused to some extent on revising or developing policies; reviewing and reporting on agency performance; improving information systems; and engaging stakeholders such as courts, advocates, foster parents, private providers, or sister agencies in the public sector. Table 2 shows the number of states that included each of the six categories and subcategories of strategies we developed for the purposes of this study, and appendix I details our methodology for this analysis. Our analysis found that every state's PIP has included policy revisions or creation to improve programs and services. For example, to address unsatisfactory performance in the prevention of repeat maltreatment, California's PIP includes a policy change, pending legislative approval, granting more flexibility to counties in determining the length of time they spend with families to ensure child safety and improve family functioning before closing cases. Additionally, 30 of the plans included caseworker training; 20 of the plans included requests for state legislative action; and 27 of the plans included requests for federal technical assistance from ACF or the resource centers. For example, California planned to ask its legislature to allow the agency to consolidate standards for foster care and adoption home studies, believing this would facilitate the adoption of children in the state.
In addition, New York worked with its legislature to secure additional funding to improve the accessibility of independent living and adoption services for children and families. Our analysis also showed that many states approached PIP development by building on state initiatives in place prior to the on-site review. Of the 42 states reporting on this topic in our survey, 30 said that their state identified strategies for the PIP by examining ongoing state initiatives. For example, local officials in New York City and state officials in California told us that state reform efforts—born in part of legal settlements—have become the foundation for the PIP. In New York, the Marisol case was one factor in prompting changes to the state's child welfare financing structure. Subsequently, the state legislature established a quality enhancement fund. Much of this money today supports strategies in New York's PIP, such as permanency mediation to support family involvement in case planning and new tools to better assess children's behavioral and mental health needs. California state officials also informed us that state reform efforts initiated by the governor prior to the CFSR, such as implementing a new system for receiving and investigating reports of abuse and neglect and developing more early intervention programs, became integral elements in the PIP.

Insufficient Guidance Hampered State Planning Efforts, but ACF Has Taken Steps to Clarify Expectations and Improve Technical Assistance

ACF has provided states with regulations and guidance to facilitate PIP development, but some states believe the requirements have been unclear. Some of the requirements for program improvement planning are outlined in the following table. Some states in our survey indicated that the guidance ACF had provided did not clearly describe the steps required for PIP approval.
In addition, some state officials believe that even ACF's more recent efforts to improve PIP guidance have been insufficient. Of the 21 states reporting on the PIP approval process in our survey, 6 states—4 reviewed in 2001 and 2 reviewed in 2002—said that ACF did not clearly describe its approval process, and another 8 states rated ACF's description of the process as equally clear and unclear. Further, several states commented in our survey that certain steps in the approval process were unclear to them, such as how much detail and specificity the agency expects the plan to include, what type of feedback states could expect to receive, when states could expect to receive such feedback, and whether a specific format was required. Officials in the states we visited echoed survey respondents' concerns, with officials from 3 of the 5 states informing us that ACF had given states different instructions regarding acceptable PIP format and content. For example, California and Florida officials told us that their program improvement plans had been rejected prior to final approval, even though they were based on examples of approved plans that regional officials had provided. In addition, California officials told us that they did not originally know how much detail the regional office expected in the PIP and believed that the level of detail the regional office staff ultimately required was too high. Although some steps may be duplicative, officials in California said that the version of their plan that the region accepted included 2,932 action steps—a number these officials believe is too high given their state's limited resources and the 2-year time frame to implement the PIP. ACF officials have undertaken several steps to clarify their expectations for states and to improve technical assistance, but state responses to this assistance have been mixed.
For example, in 2002, 2 years after ACF released the CFSR regulations and a procedures manual, ACF offered states additional guidance and provided a matrix format to help state officials prepare their plans. ACF officials told us the agency is also helping states through a team approach to providing on-site technical assistance. Under this approach, when ACF determines a state is slow in developing its PIP, the agency sends a team of staff from ACF and the resource centers to the state to provide intensive on-site technical assistance. An official from West Virginia who had received this team assistance reported that ACF's support was very beneficial. Further, ACF has attempted to encourage state officials to start developing program improvement plans before the final report is released. To do so, the agency has provided training to state officials and stakeholders almost immediately after the completion of the on-site review. ACF has sent staff from the resource center for Organizational Improvement to provide such training. An official from Utah, however, reported that the resource center training on PIP development had been general, and she wished the resource center staff had better tailored their assistance and provided more examples of strategies other states are pursuing to improve. Analysis of state survey responses indicates that starting to develop improvement plans early can make the 90-day time frame to prepare a PIP seem adequate. Of 9 states reporting that they started developing their PIP before or during the statewide assessment phase, 5 said that 90 days was adequate. Nonetheless, 21 of 35 state survey respondents reported that the 90-day time frame was insufficient. For example, one respondent reported that 90 days is too short a time to perform the tasks necessary for developing a program improvement plan, such as analyzing performance data, involving diverse groups of stakeholders in planning decisions, and obtaining the approval of state officials.
Survey results indicate that increasing numbers of states are developing their PIPs early in the CFSR process, which may reflect ACF's emphasis on PIP development. The following figure shows that of the 18 states reviewed in 2001, only 2 started developing their PIPs before or during the statewide assessment phase. Among states reviewed in 2003, this share increased to 5 of 9. Evidence suggests that lengthy time frames for PIP approval have not necessarily delayed PIP implementation, and ACF has made efforts to reduce the time the agency takes to approve states' PIPs. For example, officials in 3 of the 5 states we visited told us they began implementing new action steps before ACF officially approved their plans because many of the actions in their PIPs were already under way. In addition, according to our survey, of the 28 states reporting on this topic, 24 reported that they had started implementing their PIP before ACF approved it. Further, our analysis shows that the length of time between the PIP due date, which statute sets at 90 days after the release of the final CFSR report, and final ACF PIP approval has ranged considerably—from 45 to 349 business days. For almost half of the plans, ACF's approval occurred 91 to 179 business days after the PIP was due. As shown in figure 4, however, our analysis indicates that ACF has recently reduced the time lapse between states' PIP due dates and its PIP approval by 46 business days. The shorter time lapse for PIP approval may be due, in part, to ACF's emphasis on PIP development. According to one official, ACF has directed states to concentrate on submitting a plan that can be quickly approved. Another ACF official added that because of ACF's assistance with PIP development, states are now submitting higher-quality PIPs that require fewer revisions.
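The business-day lapses described in our analysis can be computed directly from a plan's due date and its approval date. The sketch below is illustrative only: the date pairs are hypothetical values chosen so that the counts reproduce the report's 45- and 349-day endpoints (they are not the actual states' dates), and NumPy's busday_count is simply one convenient way to perform the count.

```python
from datetime import date

import numpy as np

# Hypothetical PIP due-date / approval-date pairs, chosen so the counts
# match the report's extremes of 45 and 349 business days.
pip_dates = [
    (date(2002, 3, 4), date(2002, 5, 6)),
    (date(2002, 7, 1), date(2003, 10, 31)),
]

# np.busday_count counts weekdays (Mon-Fri) in the half-open interval
# [due, approved); holidays are not excluded unless passed explicitly.
lapses = [int(np.busday_count(due, approved)) for due, approved in pip_dates]
print(lapses)  # → [45, 349]
```

Note that busday_count uses a Monday-through-Friday weekmask by default; a stricter count would pass federal holidays via the holidays argument.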
State and Federal Uncertainties Cloud PIP Implementation and Monitoring

Program improvement planning has been ongoing, but uncertainties have made it difficult for states to implement their plans and for ACF to monitor state performance. Such uncertainties include not knowing whether state resources are adequate to implement the plans and how best to monitor state reforms. In answering a survey question about PIP implementation challenges, a number of states identified insufficient funding, staff, and time—as well as high caseloads—as their greatest obstacles. Figure 5 depicts these results. With regard to funding, an official from Pennsylvania commented that because of the state's budget shortfall, no additional funds were available for the state to implement its improvement plan, so most counties must improve outcomes with little or no additional resources. A Massachusetts official reported that fiscal problems in his state would likely lead the state to lay off attorneys and caseworkers and to cut funding for family support programs. While state officials acknowledged that they do not have specific estimates of PIP implementation expenses because they have not tracked this information in their state financial systems, many states indicated that to cope with financial difficulties, they had to be creative and use resources more efficiently to fund PIP strategies. Of the 26 states responding to a question in our survey on PIP financing, 12 said that they were financing the PIP strategies by redistributing current funding, and 7 said that they were using no-cost methods. In an example of the latter, Oklahoma officials reported pursuing in-kind donations from a greeting card company so that they could send thank-you notes to foster parents, believing this could increase foster parent retention and engagement. States also reported that PIP implementation has been affected by staff workloads, but these comments were mixed.
In Wyoming, for example, caseworkers told us that their high caseloads would prevent them from implementing many of the positive action steps included in their improvement plan. In contrast, Oklahoma caseworkers told us that the improvement plan priorities in their state—such as finding permanent homes for children—have helped them become more motivated, more organized, and more effective with time management. For example, one caseworker explained that she is quicker now at locating birth fathers who were previously uninvolved in the child's life because she uses the Internet to search for these fathers' names. She said this new way of exploring leads and information—a strategy that stemmed from PIP development—has been motivating and rewarding because it has decreased the time spent tracking down paternal relatives and increased the number of available placements for the child. ACF officials expressed uncertainty about how best to monitor states' progress and apply estimated financial penalties when progress was slow or absent, and 3 of the 5 states we visited reported frustration with the limited guidance ACF had provided on the PIP quarterly reporting process. For example, 4 regional offices told us that they did not have enough guidance on or experience with evaluating state quarterly reports. Some regional offices told us they require states to submit evidence of each PIP action step's completion, such as training curricula or revised policies, but one ACF official acknowledged that this is not yet standard procedure, although the agency is considering efforts to make the quarterly report submission procedures more uniform. Moreover, ACF staff from one region told us that because PIP monitoring varies by region, they were concerned about enforcing penalties.
Finally, shortly before California’s quarterly report was due, state officials told us they still did not know how much detail to provide, how to demonstrate whether they had completed certain activities, or what would happen if they did not reach the level of improvement specified in the plan. Based on data from the states that have been reviewed to date, the estimated financial penalties range from a total of $91,492 for North Dakota to $18,244,430 for California, but the impact of these potential penalties remains unclear. While ACF staff from most regional offices told us that potential financial penalties are not the driving force behind state reform efforts, some contend that the estimated penalties affect how aggressively states pursue reform in their PIPs. For example, regional office staff noted that one state’s separate strategic plan included more aggressive action steps than those in its PIP because the state did not want to be liable for penalties if it did not meet its benchmarks for improvement. State officials also had mixed responses as to how the financial penalties would affect PIP implementation. An official in Wyoming said that incurring the penalties was equivalent to shutting down social service operations in one local office for a month, while other officials in the same state thought it would cost more to implement PIP strategies than it would to incur financial penalties if benchmarks were unmet. Nevertheless, these officials also said that while penalties are a consideration, they have used the CFSR as an opportunity to provide better services. One official in another state agreed that it would cost more to implement the PIP than to face financial penalties, but this official was emphatic in the state’s commitment to program improvement. ACF’s Focus Rests Almost Exclusively on Implementing the CFSR To implement the CFSRs, ACF has focused its activities almost entirely on the four phases of the review process. 
However, staff in several regions reported limits on their ability to help states meet key federal goals. Although regional staff conduct site visits to states for reasons beyond the CFSR process, conducting the CFSR on-site reviews and providing PIP-related assistance to states account for the majority of regions' time and travel budgets, according to ACF officials. Further, regional office staff said that more frequent visits with state personnel—visits outside of the CFSR process in particular—would allow them to better understand states' programs and cultivate relationships with state officials. In addition, state officials in all 5 of the states we visited said that ACF technical assistance needed improvement, acknowledging that in some cases regional office staff were stretched thin by CFSR demands and in other cases that assistance from resource center staff lacked focus. While ACF officials in the central office said that the CFSR has become the primary method for evaluating states' performance, they acknowledged that regional staff might still be adjusting to the new way ACF oversees child welfare programs. Further, they told us that ACF is currently reevaluating the entire structure of its training and technical assistance, in part to address these concerns. ACF officials told us that the learning opportunities in the Children's Bureau are intentionally targeted at the CFSR, but staff in 3 regions told us that this training should cover a wider range of subjects—including topics outside of the CFSR process—so that regional officials could better meet states' needs. All 18 of the courses that ACF has provided to its staff since 2001 have focused on topics such as writing final CFSR reports and using data for program improvement. ACF officials in the central office said that the course selection reflects both the agency's prioritization of the CFSR process and staff needs.
To ascertain staff training needs, ACF surveyed regional staff in October 2002, and ACF officials told us they used the survey results in deciding which courses to offer. Our analysis of this survey, however, showed that it focused only on training topics directly related to the CFSR, so it might offer only limited information on whether regional officials wanted training on other topics. Specifically, the survey asked staff to check their top 5 training choices from among 11 CFSR-related topics. While survey respondents were also given the opportunity to write in additional training topics they desired, only 2 of the survey's 27 respondents did so. One indicated a greater need for training on Indian child welfare issues, and another expressed a desire to learn more about the entire child welfare system. Although it is not possible to determine whether more respondents would have prioritized non-CFSR training areas had the survey been designed to elicit such information, our interviews with regional staff suggest that some of them wish to obtain additional non-CFSR training. For example, a staff member from one region told us she has not been adequately trained in child welfare and believed that her credibility was damaged when a state wanted advice that she could not provide on how to help older youth prepare to exit from foster care. In addition to offering training, ACF organizes biennial conferences for state and federal child welfare officials. Nonetheless, staff from 5 regions told us that they wanted more substantive interaction with their ACF colleagues, such as networking at conferences, to increase their overall child welfare expertise. Staff from 6 of the 10 regions told us that their participation in conferences is limited because of funding constraints.
Further, staff in all 10 regions provide ongoing assistance or ad hoc counseling to states, but staff from 6 regions told us they would like to conduct site visits with states more regularly to improve their relationships with state officials and provide more targeted assistance. For example, staff in most regions told us that they assist states predominantly by e-mail and telephone on topics such as the interpretation of Title IV-E eligibility criteria. Additionally, staff in 7 regions said that they sometimes visit states to participate in state planning meetings—part of the annual child and family services planning effort—or to give presentations at state conferences on topics such as court improvement. However, staff in 4 regions felt their travel funds were constrained and explained that they try to stretch their travel dollars by addressing states' non-CFSR needs, such as court improvements, during CFSR-related visits. While a senior ACF official from the central office confirmed that CFSR-related travel constituted 60 percent of the agency's 2002 child welfare-monitoring budget, this official added that CFSR spending represents an infusion of funding rather than a reprioritization of existing dollars and stated that regional administrators have discretion over how the funds are allocated within their regions. In addition, the same official stated that he knew of no instance in which a region requested more money for travel than it received. Concerns from state officials in all 5 of the states we visited echoed those of regional office staff and confirmed the need for improvements to the overall training and technical assistance structure, while respondents' comments on our survey showed more mixed perceptions of the quality of assistance they received.
For example, state officials in New York and Wyoming commented that ACF staff from their respective regional offices did not have sufficient time to spend with them on CFSR matters because regional staff were simultaneously occupied conducting reviews in other states. Further, an Oklahoma state official commented that assistance from one resource center was not as specific or helpful as desired. Specifically, when the state asked the resource center to provide a summary of other states’ policies regarding the intake of abuse and neglect allegations, the resource center did not provide an analysis of sufficient depth for the state to explore possible reforms. According to state survey respondents, however, satisfaction with the training and technical assistance provided by regional offices varied by CFSR phase. For example, among states reviewed in 2001, 2002, and 2003, satisfaction was generally highest in the statewide assessment phase, but then dropped during on-site review and PIP development before rising again in the PIP implementation phase. Across all phases of the CFSR process, however, states reviewed in 2003 had much higher levels of satisfaction with regional office assistance than those states reviewed in 2001, suggesting improvements to regional office training and technical assistance as the process evolved. Further, based on survey data and our follow-up calls with selected states, satisfaction was also mixed with regard to the assistance provided by the resource centers. For example, among states reporting in our survey on the quality of assistance provided by the resource center for Organizational Improvement and the resource center for Information Technology—the two resource centers that provide specific support to states regarding data issues and PIP development—satisfaction was generally lower in every phase among states reviewed in 2003 than among states reviewed in 2001. 
The only exception to this was during the PIP development phase, for which states reviewed in 2003 reported higher levels of satisfaction with the resource center for Organizational Improvement than states reviewed in 2001, suggesting positive responses to the on-site training and technical assistance this resource center has recently been providing to aid states in their PIP planning efforts. ACF officials told us the CFSR has become the agency’s primary mechanism for monitoring states and facilitating program improvement, but they acknowledged that regional office staff might not have realized the full utility of the CFSR as a tool to integrate all existing training and technical assistance efforts. Further, according to ACF officials, meetings to discuss a new system of training and technical assistance are ongoing, though recommendations were not available at the time of publication. Levels of resource center funding, the scope and objectives of the resource centers’ work, and the contractors who operate the resource centers are all subject to change before the current cooperative agreements expire at the close of fiscal year 2004. Conclusions ACF and the states have devoted considerable resources to the CFSR process, but concerns remain regarding the validity of some data sources and the limited use of all available information to determine substantial conformity. Further, no state to date has passed the threshold for substantial conformity on all CFSR measures. The majority of states surveyed agreed that CFSR results are similar to their own evaluation of areas needing improvement. However, without using more reliable data— and in some cases, additional data from state self-assessments—to determine substantial conformity, ACF may be over- or underestimating the extent to which states are actually meeting the needs of the children and families in their care. 
These over- or underestimates can, in turn, affect the scope and content of the PIPs that states must develop in response. As states face difficult budget decisions, accurate performance information could be critical to deciding how best to allocate resources. We previously reported on the reliability of state-reported child welfare data and recommended that HHS consider additional ways to enhance the guidance and assistance offered to states to help them overcome the key challenges in collecting and reporting child welfare data. In response to this recommendation, HHS said that ACF has provided extensive guidance on how states can improve the quality of their data and acknowledged that additional efforts were under way. In addition, the PIP development, approval, and monitoring processes remain unclear to some, potentially reducing states’ credibility with their stakeholders and straining the federal/state partnership. Similarly, regional officials are unclear as to how they can accomplish their various training and technical assistance responsibilities, including the CFSR. Without clear guidance on how to systematically prepare and monitor PIP-related documents, and how regional officials can integrate their many oversight responsibilities, ACF has left state officials unsure of how their progress over time will be judged and potentially complicated its own monitoring efforts. Recommendations for Executive Action To ensure that ACF uses the best available data in measuring state performance, we recommend that the Secretary of HHS expand the use of additional data states may provide in their statewide assessments and consider alternative data sources when available, such as longitudinal data that track children’s placements over time, before making final CFSR determinations. 
In addition, to ensure that ACF regional offices and states fully understand the PIP development, approval, and monitoring processes, and that regional offices fully understand ACF’s prioritization of the CFSR as the primary mechanism for child welfare oversight, we recommend that the Secretary of HHS take the following two actions: issue clarifying guidance on the PIP process and evaluate states’ and regional offices’ adherence to this instruction, and provide guidance to regional offices explaining how to better integrate the many training and technical assistance activities for which they are responsible, such as participation in state planning meetings and the provision of counsel to states on various topics, with their new CFSR responsibilities. Agency Comments We received comments on a draft of this report from HHS. These comments are reproduced in appendix IV. HHS also provided technical clarifications, which we incorporated where appropriate. HHS generally agreed with our findings and noted that the CFSR process has already focused national attention on child welfare reform, but because the CFSR is the first review of its kind, HHS is engaged in continuous monitoring and improvement of the process. However, in its technical comments, HHS commented that while it acknowledges that the CFSR is its top priority, it disagreed with our statement that HHS’s focus rests exclusively on implementing the CFSR, stating that the Administration for Children and Families (ACF) also conducts other oversight efforts, such as Title IV-E eligibility reviews and AFCARS assessments. 
While we acknowledged ACF’s other oversight activities in the background section of the report, this report focuses primarily on the CFSR and we reflected the comments that ACF officials made throughout the course of our work that the CFSR was the primary tool for monitoring state performance and that it served as the umbrella for all monitoring activities undertaken by central and regional ACF staff. HHS further noted in its technical comments that we were wrong to suggest that federal staff do not know how to monitor state PIPs or assess financial penalties. However, we do not report that ACF is unsure of how to monitor PIPs or how to assess financial penalties—rather, we reported that ACF regional staff have not received sufficient guidance on how to best monitor PIPs and that ACF officials have not decided how or when to apply such penalties, even though two states to date have completed their initial PIP implementation timeframe and all states reviewed thus far are engaged in PIP development and implementation. With regard to our first recommendation, HHS acknowledged that several steps are under way to address necessary data improvements and said that states have begun to submit more accurate information in their AFCARS and NCANDS profiles, with HHS’s assistance. HHS also commented that we failed to properly emphasize the states’ responsibility to improve overall data quality. We believe that our report, as well as our previous report on child welfare data and states’ information systems, addresses HHS’s activities and the steps many states have taken to enhance their CFSR data. Given that many states have developed independent data collection tools—and included findings from these instruments in their statewide assessments—our recommendation is meant to encourage HHS to work more closely with all states to supplement their AFCARS and NCANDS data in order to improve the determinations made about state performance. 
In addition, HHS commented that our report emphasized the limitations of the 50-case sample size without focusing on the expenses and the increased state and federal staff time that would likely be associated with efforts to increase the sample size. We agree that additional expenses and staff time would likely be needed to increase the sample size and recommended that ACF use additional data—beyond the information collected from the 50 case reviews and AFCARS and NCANDS data—to develop a more accurate picture of state performance. This information could include the data that many states already collect on their performance, such as longitudinal information tracking children from their time of entry into the system. In response to our second recommendation, HHS said that it has continued to provide technical assistance and training to states and regional offices, when appropriate. HHS noted that it is committed to continually assessing and addressing training and technical assistance needs. In this context, our recommendation was intended to encourage HHS to enhance existing training efforts and focus both on state and on regional officials’ needs in understanding and incorporating the CFSR process into their overall improvement and oversight efforts. We also provided a copy of our draft report to child welfare officials in the five states we visited—California, Florida, New York, Oklahoma, and Wyoming. We received comments from California, Florida, New York, and Oklahoma, all of which generally agreed with our findings and provided various technical comments, which we also incorporated where appropriate. We are sending copies of this report to the Secretary of Health and Human Services, state child welfare directors, and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions or wish to discuss this material further, please call me at (202) 512-8403 or Diana Pietrowiak at (202) 512-6239. Key contributors to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology Objectives The objectives of our study were to report on (1) ACF’s and the states’ experiences preparing for and conducting the statewide assessments and on-site reviews; (2) ACF’s and the states’ experiences developing, funding, and implementing items in the PIP; and (3) additional efforts that ACF has taken beyond the CFSR to ensure that all states meet federal goals of safety, permanency, and well-being. Scope and Methodology To gather information about ACF’s and the states’ experiences with the CFSR and PIP process, we utilized multiple methodologies to solicit information from both ACF and the states, including (1) a Web-based survey to state child welfare agencies; (2) site visits to five states; (3) a content analysis of all 31 PIPs available as of January 1, 2004; (4) interviews with ACF officials in Washington and all regional offices, directors of all resource centers, and child welfare experts nationwide; and (5) a review of CFSR regulations and the available guidance offered to states. We conducted our work between May 2003 and February 2004 in accordance with generally accepted government auditing standards. Survey To gather information about states’ experiences with each phase of the CFSR and PIP process, we distributed a Web-based survey to all 50 states, the District of Columbia, and Puerto Rico on July 30, 2003. We pretested the survey instrument with officials in the District of Columbia, Kentucky, and Maryland; and after extensive follow-up, we received survey responses from all 50 states and the District of Columbia for a 98 percent response rate. We did not independently verify the information obtained through the survey. 
The survey asked a combination of questions that allowed for open-ended and close-ended responses. Because some states had not yet begun their statewide assessments and others had already submitted quarterly PIP progress reports at the time that our survey was released, the instrument was designed with skip patterns directing states to comment only on the CFSR stages that they had begun or completed to that point. Therefore, the number of survey respondents for each question varied depending on the number of states that had experienced that stage of the CFSR and PIP processes. To supplement the survey and elaborate on survey responses, we selected 10 states with which to conduct follow-up phone calls based on their answers to the survey’s open-ended questions. These calls helped us obtain more specific examples about states’ experiences preparing for the CFSR; developing, funding, and implementing a PIP; and working with ACF to improve their child welfare systems. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed, can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, the draft questionnaire was pretested with a number of state officials to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. 
Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire. This eliminated the need to have the data keyed into a database, thus removing an additional source of error. Site Visits To gather more detailed information about the states’ experiences with the CFSR and PIP process, we selected five states to visit—California, Florida, New York, Oklahoma, and Wyoming—based on the timing and results of each state’s CFSR, as well as their differences in location, size of child welfare population, degree of privatization of services, size of tribal populations, and whether they had state or locally administered systems. In preparation for the visits and to understand the unique circumstances in each state, we obtained and reviewed relevant literature from each of the five states, such as the statewide assessment, the CFSR final report, and any available PIPs or quarterly reports. Additionally, we reviewed relevant past or current litigation that may affect the states’ delivery of services as identified by the National Center for Youth Law’s Litigation Docket (2002). During our visits to each state, we talked with officials from the state child welfare agency along with officials and staff from at least one local agency office that was selected for the CFSR on-site review. Specifically, in each state we spoke with state and local officials responsible for guiding the states’ efforts throughout the review process; CFSR on-site reviewers; and stakeholders, including judges, child advocates, private providers, foster parents, and child welfare staff. Some detailed information regarding key CFSR milestones among the five states we visited is included in appendix III. Content Analysis of Available PIPs To learn about states’ improvement strategies, we conducted a content analysis of the 31 available PIPs that ACF had approved by January 1, 2004. 
For each of these PIPs, we classified the state’s action steps as relating to one or more of the following: policies and procedures, data collection and analysis, staff supports, foster parent supports or services and resources for children and families, state legislative supports, and federal technical assistance. Table 2 in the report summarizes how we classified the PIP strategies and indicates the number of states including each strategy in its PIP. Interviews To gather information about ACF’s experience with the CFSR and PIP process, we interviewed ACF officials in Washington, D.C., who are involved in the CFSR process and ACF staff in all 10 of the regional offices; directors of each resource center; and ACF contractors working on CFSR-related activities. Further, we observed the final debriefing sessions for three states—South Carolina, Virginia, and Washington— during the weeks of their respective on-site reviews. In addition to our interviews with ACF officials, we also interviewed 10 prominent child welfare experts and researchers, such as those affiliated with the Chapin Hall Center for Children, the Child Welfare League of America, the National Coalition on Child Protection Reform, and the University of California at Berkeley, to learn additional information about the states’ experiences with the CFSR process, including information about states’ concerns with the reliability of CFSR data, states’ involvement of tribes as stakeholders, and the media’s coverage of egregious child welfare cases. Review of ACF Guidance to States To gather information about CFSR regulations and the available training and technical assistance offered to states, we reviewed ACF’s regulations, its policy memorandums, and the CFSR manual it makes available to states. In addition, we obtained and reviewed a list of all of the resource centers’ training and technical assistance activities provided to the five states we visited during our site visits. 
Appendix II: List of Outcomes and Systemic Factors and Their Associated Items (Items with an asterisk have associated national standards.)

Outcome factors

Outcome 1: Children are, first and foremost, protected from abuse and neglect.
Item 1: Timeliness of initiating investigations on reports of child maltreatment
Item 2: Repeat maltreatment
Recurrence of maltreatment*
Incidence of child abuse and/or neglect in foster care*

Outcome 2: Children are safely maintained in their own homes whenever possible and appropriate.
Item 3: Services to family to protect child(ren) in home and prevent removal
Item 4: Risk of harm to child(ren)

Outcome 3: Children have permanency and stability in their living conditions.
Item 5: Foster care re-entries*
Item 6: Stability of foster care placement*
Item 7: Permanency goal for child
Item 8: Reunification, guardianship, or permanent placement with relatives
Length of time to achieve permanency goal of reunification*
Item 9: Adoption
Length of time to achieve permanency goal of adoption*
Item 10: Permanency goal of other planned permanent living arrangement

Outcome 4: The continuity of family relationships and connections is preserved for children.
Item 11: Proximity of foster care placement
Item 12: Placement with siblings
Item 13: Visiting with parents and siblings in foster care
Item 14: Preserving connections
Item 15: Relative placement
Item 16: Relationship of child in care with parents

Outcome 5: Families have enhanced capacity to provide for their children’s needs.
Item 17: Needs and services of child, parents, foster parents
Item 18: Child and family involvement in case planning
Item 19: Worker visits with child
Item 20: Worker visits with parent(s)

Outcome 6: Children receive appropriate services to meet their educational needs.
Item 21: Educational needs of the child

Outcome 7: Children receive adequate services to meet their physical and mental health needs.
Item 22: Physical health of child
Item 23: Mental health of child
Systemic Factors

Systemic factor 1: Statewide information system
Item 24: State is operating a statewide information system that, at a minimum, can identify the status, demographic characteristics, location, and goals for the placement of every child who is (or within the immediately preceding 12 months has been) in foster care.

Systemic factor 2: Case review system
Item 25: Provides a process that ensures that each child has a written case plan to be developed jointly with the child’s parent(s) that includes the required provisions
Item 26: Provides a process for the periodic review of the status of each child, no less frequently than once every 6 months, either by a court or by administrative review.
Item 27: Provides a process that ensures that each child in foster care under the supervision of the state had a permanency hearing in a qualified court or administrative body no later than 12 months from the date the child entered foster care and no less frequently than every 12 months thereafter.
Item 28: Provides a process for termination of parental rights proceedings in accordance with the provisions of the Adoption and Safe Families Act.
Item 29: Provides a process for foster parents, pre-adoptive parents, and relative caregivers of children in foster care to be notified of, and have an opportunity to be heard in, any review or hearing held with respect to the child.

Systemic factor 3: Quality assurance system
Item 30: The state has developed and implemented standards to ensure that children in foster care are provided quality services that protect the safety and health of children.
Item 31: The state is operating an identifiable quality assurance system that is in place in the jurisdictions where the services included in the Child and Family Services Plan are provided, evaluates the quality of services, identifies strengths and needs of the service delivery system, provides relevant reports, and evaluates program improvement measures implemented. 
Systemic factor 4: Training
Item 32: The state is operating a staff development and training program that supports the goals and objectives in the Child and Family Services Plan, addresses services provided under Titles IV-B and IV-E, and provides initial training for all staff who deliver these services.
Item 33: The state provides for ongoing training for staff that addresses the skills and knowledge base needed to carry out their duties with regard to the services included in the Child and Family Services Plan.
Item 34: The state provides training for current or prospective foster parents, adoptive parents, and staff of state-licensed or approved facilities that care for children receiving foster care or adoption assistance under Title IV-E that addresses the skills and knowledge base needed to carry out their duties with regard to foster and adopted children.

Systemic factor 5: Service array
Item 35: The state has in place an array of services that assess the strengths and needs of children and families and determine other service needs, address the needs of families in addition to individual children in order to create a safe home environment, enable children to remain safely with their parents when reasonable, and help children in foster and adoptive placements achieve permanency.
Item 36: The services in item 35 are accessible to families and children in all political jurisdictions covered in the State’s Child and Family Services Plan.
Item 37: The services in item 35 can be individualized to meet the unique needs of children and families served by the agency. 
Systemic factor 6: Agency responsiveness to the community
Item 38: In implementing the provisions of the Child and Family Services Plan, the state engages in ongoing consultation with tribal representatives, consumers, service providers, foster care providers, the juvenile court, and other public and private child- and family-serving agencies and includes the major concerns of these representatives in the goals and objectives of the Child and Family Services Plan.
Item 39: The agency develops, in consultation with these representatives, annual reports of progress and services delivered pursuant to the Child and Family Services Plan.
Item 40: The state’s services under the Child and Family Services Plan are coordinated with services or benefits of other federal or federally assisted programs serving the same population.

Systemic factor 7: Foster and adoptive parent licensing, recruitment, and retention
Item 41: The state has implemented standards for foster family homes and child care institutions that are reasonably in accord with recommended national standards.
Item 42: The standards are applied to all licensed or approved foster family homes or child care institutions receiving Title IV-E or IV-B funds.
Item 43: The state complies with federal requirements for criminal background clearances as related to licensing or approving foster care and adoptive placements and has in place a case-planning process that includes provisions for addressing the safety of foster care and adoptive placements for children.
Item 44: The state has in place a process for ensuring the diligent recruitment of potential foster and adoptive families that reflect the ethnic and racial diversity of children in the state for whom foster and adoptive homes are needed.
Item 45: The state has in place a process for the effective use of cross-jurisdictional resources to facilitate timely adoptive or permanent placements for waiting children. 
Appendix III: Dates on Which Site Visit States Reached CFSR Milestones (table of milestone dates, including the total number of outcomes and systemic factors not in substantial conformity for each state)

Appendix IV: Comments from the Department of Health and Human Services

Appendix V: GAO Contacts and Acknowledgments

GAO Contacts

Staff Acknowledgments
In addition to those named above, Elizabeth Caplick and Catherine Roark made key contributions to this report. Amy Buck, Karen Burke, Jason Kelly, Stuart Kaufman, Luann Moy, and Jerome Sandau also provided key technical assistance.

Related GAO Products

Child Welfare: Improved Federal Oversight Could Assist States in Overcoming Key Challenges. GAO-04-418T. Washington, D.C.: January 28, 2004.
D.C. Family Court: Progress Has Been Made in Implementing Its Transition. GAO-04-234. Washington, D.C.: January 6, 2004.
Child Welfare: States Face Challenges in Developing Information Systems and Reporting Reliable Child Welfare Data. GAO-04-267T. Washington, D.C.: November 19, 2003.
Child Welfare: Enhanced Federal Oversight of Title IV-B Could Provide States Additional Information to Improve Services. GAO-03-956. Washington, D.C.: September 12, 2003.
Child Welfare: Most States Are Developing Statewide Information Systems, but the Reliability of Child Welfare Data Could Be Improved. GAO-03-809. Washington, D.C.: July 31, 2003.
D.C. Child and Family Services: Key Issues Affecting the Management of Its Foster Care Cases. GAO-03-758T. Washington, D.C.: May 16, 2003.
Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 21, 2003.
Foster Care: States Focusing on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-03-626T. Washington, D.C.: April 8, 2003.
Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357. Washington, D.C.: March 31, 2003.
Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.
District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children’s Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000.
Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000.
Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999.
Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999.
Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 10, 1999.
Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS-99-32. Washington, D.C.: May 6, 1999.
Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 1999.
Child Welfare: Early Experiences Implementing a Managed Care Approach. GAO/HEHS-99-8. Washington, D.C.: October 21, 1998.
Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998.
In 2001, the Department of Health and Human Services' (HHS) Administration for Children and Families (ACF) implemented the Child and Family Services Reviews (CFSR) to increase states' accountability. The CFSR uses states' data profiles and statewide assessments, as well as interviews and an on-site case review, to measure state performance on 14 outcomes and systemic factors, including child well-being and the provision of caseworker training. The CFSR also requires progress on a program improvement plan (PIP); otherwise ACF may apply financial penalties. This report examines (1) ACF's and the states' experiences preparing for and conducting the statewide assessments and on-site reviews; (2) ACF's and the states' experiences developing, funding, and implementing items in PIPs; and (3) any additional efforts that ACF has taken beyond the CFSR to help ensure that all states meet federal goals related to children's safety, permanency, and well-being. ACF and many state officials perceive the CFSR as a valuable process and a substantial undertaking, but some data enhancements could improve its reliability. ACF staff in 8 of the 10 regions considered the CFSR a helpful tool to improve outcomes for children. Further, 26 of 36 states responding to a relevant question in our survey commented that they generally or completely agreed with the results of the final CFSR report, even though none of the 41 states with final CFSR reports released through 2003 has achieved substantial conformity on all 14 outcomes and systemic factors. Additionally, both ACF and the states have dedicated substantial financial and staff resources to the process. Nevertheless, several state officials and child welfare experts we interviewed questioned the accuracy of the data used in the review process. 
While ACF officials contend that stakeholder interviews and case reviews complement the data profiles, many state officials and experts reported that additional data from the statewide assessment could bolster the evaluation of state performance. Program improvement planning is under way, but uncertainties have affected the development, funding, and implementation of state PIPs. Officials from 3 of the 5 states we visited said ACF's PIP-related instructions were unclear, and at least 9 of the 25 states reporting on PIP implementation in our survey said that insufficient funding and staff were among the greatest challenges. While ACF has provided some guidance, ACF and state officials remain uncertain about PIP monitoring efforts and how ACF will apply financial penalties if states fail to achieve their stated PIP objectives. Since 2001, ACF's focus has been almost exclusively on the CFSRs, and regional staff report limitations in the assistance they can provide to help states meet key federal goals. While staff from half of ACF's regions told us they would like to provide more targeted assistance to states, and state officials in all 5 of the states we visited said that ACF's existing technical assistance efforts could be improved, ACF officials acknowledged that regional staff might still be adjusting to the new way ACF oversees child welfare programs.
Background The purpose of DOE’s stockpile surveillance program is to ensure, primarily through three types of tests, that the safety and reliability of nuclear weapons are maintained. Flight tests involve the actual dropping or launching of a weapon, which has had the nuclear components removed. Nonnuclear systems laboratory tests involve testing a weapon’s nonnuclear systems to detect defects due to handling, aging, manufacturing, or design. The nuclear and nonnuclear components laboratory tests involve destructive analysis to identify defects or failures in individual weapon components. Weapons are randomly selected for flight and nonnuclear systems laboratory tests from the stockpile each year. Weapons chosen for the nuclear and nonnuclear components laboratory tests are judgmentally selected from the weapons that have been selected for the other two tests. For testing purposes, DOE considers the active stockpile to consist of nine weapon types—three bombs and six missile warheads, each with unique capabilities. From 1958 to 1996, DOE’s stockpile surveillance program tested about 14,000 weapons, with about 2,400 findings documented. Over 50 percent of these findings were considered “significant.” A significant finding is the identification of a defect or failure in a weapon system. A defect is an observable anomaly, while a failure is a flaw or malfunction in the weapon that would prevent the weapon from operating as intended. When a significant finding is identified, DOE may perform additional tests to confirm the finding, determine the cause of the problem, assess its impact on the stockpile, and recommend a corrective plan. Of the 2,400 findings, 370 were “actionable.” DOE defines an actionable finding as a finding that lowers the weapon’s reliability or for which some action is taken. About 1 in 3 actionable findings (118 findings) has resulted in retrofits and major design changes. The remainder required either process changes or no physical changes.
When a weapon’s reliability is lowered because of a finding, the result is reported to the Department of Defense (DOD). DOE and the national nuclear laboratories have determined that they generally need to test about 40 to 44 weapons of each type in the stockpile over a 4-year period. According to DOE officials, over that 4-year time frame, the tests should consist of 8 to 12 flight tests per weapon type (an average of 2 or 3 tests per year) and 28 to 36 laboratory tests of nonnuclear systems (an average of 7 to 9 per year). Finally, from the weapons scheduled for testing each year, DOE designates components from certain weapons for laboratory tests. DOE considers five components to be key—the pit, the secondary, the detonator sets, the gas transfer system, and the high explosives. On average, for each weapon type, DOE believes that one pit, one secondary, two to five detonator sets, one or two gas transfer systems, and one high-explosive system should be tested each year. According to DOE officials, when a significant number of tests are cancelled or delayed, the Department lacks information on the reliability of the weapon. While lack of testing will not affect the reliability level assigned to a weapon (only a test finding can alter the reliability level), the lack of test information reduces DOE’s confidence in the assessed reliability of the weapon. Stockpile Surveillance Tests Are Behind Schedule DOE is currently behind schedule in conducting some flight tests, nonnuclear systems laboratory tests, and nuclear and nonnuclear components laboratory tests. For some tests, DOE is several years behind schedule. These schedule slippages are the result of a variety of factors, including an unapproved safety study, suspension of testing at some facilities, and the transfer of testing functions to new facilities. Flight Tests Flight tests involve the actual dropping or launching of a weapon from which the nuclear components have been removed. 
DOE uses specially designed equipment—referred to as telemetry packages—to test the integration and functioning of the weapon’s electrical and mechanical subsystems. Until November 1992, DOE planned to conduct a minimum of 3 flight tests per year—or 12 flight tests over a 4-year period—for bombs, InterContinental Ballistic Missiles, and Submarine-Launched Ballistic Missiles. According to DOE officials, in November 1992, DOE reduced its plan for testing Air Force InterContinental Ballistic Missiles from three tests per year to two—or eight tests over a 4-year period. DOE officials informed us that they made the reduction based on an evaluation of applicable existing test data and in preparation for the Air Force’s implementation of the START I and START II treaties. Under these treaties, the Air Force will have to reduce the number of warheads carried on missiles. The plan for testing bombs and Submarine-Launched Ballistic Missiles was not altered and remains at 3 per year, or 12 over a 4-year period. DOE officials told us that they believe the reduction in flight tests from three to two per year for InterContinental Ballistic Missiles represents an acceptable increase in the risk of having undetected problems in weapons. The officials explained that by flight testing three weapons per year, there is a 90-percent chance of discovering a “flight-unique” defect if the defect occurs in 18 percent of the weapons. By testing only two weapons per year, the risk increases. With two tests per year, the defect would have to occur in 22 percent of the weapons to have a 90-percent chance of discovering it. DOE officials believe that conducting fewer than two tests per year (or eight tests over a 4-year period) is a concern and a significantly increased risk to the program. Three weapon types—the W62, W78, and W88 warheads—have had, on average, fewer than two tests conducted per year over the past 4 years. 
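The trade-off DOE describes is consistent with a simple binomial sampling model, in which each tested weapon is treated as an independent random draw from the stockpile. The report does not state DOE's actual statistical method, so the sketch below is only an illustrative reading: it closely reproduces the cited 18-percent figure for 12 tests over 4 years, but yields about 25 percent (rather than DOE's 22 percent) for 8 tests, which suggests DOE's calculation differs in detail.

```python
def detection_probability(defect_rate, num_tests):
    """Chance that a random sample of num_tests weapons contains at
    least one weapon with the defect (independent-draw assumption)."""
    return 1 - (1 - defect_rate) ** num_tests

def required_defect_rate(num_tests, confidence=0.90):
    """Smallest defect rate detectable with the given confidence
    under the same independent-draw assumption."""
    return 1 - (1 - confidence) ** (1 / num_tests)

# 3 flight tests/year over 4 years = 12 tests: a defect present in
# about 18% of weapons is caught with roughly 90% probability.
print(f"{detection_probability(0.18, 12):.2f}")  # prints 0.91

# Cutting back to 2 tests/year (8 tests) raises the defect rate
# needed to keep 90% detection confidence.
print(f"{required_defect_rate(12):.0%}")  # prints 17%
print(f"{required_defect_rate(8):.0%}")   # prints 25%
```

The model makes the direction of the risk plain: with fewer samples, only more widespread defects can be found at a given confidence level, which is why DOE treats fewer than two tests per year as a significantly increased risk.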
Table 1 shows the three weapon types, DOE’s plan for testing, and the number of tests conducted over the past 4 years (fiscal years 1992 through 1995). The W62, a warhead used by the Air Force on the Minuteman III missile, has been flight-tested six times (of the eight planned) over the past 4 years. Two planned tests were not conducted because DOE’s Pantex facility had trouble preparing warheads for flight testing and could not deliver the test warheads to the Air Force in time for the test flights. The W78 warhead, also used on the Minuteman III, has had seven flight tests (of the eight planned) over the past 4 years. DOE and the national laboratory officials told us that a flight test with telemetry equipment was not conducted because the Department decided to use the available warhead test slot on the test missile for a nontelemetry DOE test of the W78. The W88 is a warhead used by the Navy on the Trident II missile. Only 3 W88 stockpile flight tests (of the 12 planned) were conducted during the 4-year period from fiscal year 1992 through 1995. Flight testing of W88 warheads taken from the stockpile was halted for more than 1 year because an important safety study required for disassembly and inspection of the warhead at DOE’s Pantex plant lacked approval. A Nuclear Explosive Safety Study is required for each weapon type before DOE’s Pantex Plant can disassemble and inspect a weapon selected for testing. Without disassembly and inspection capability, surveillance tests, including flight tests of sample warheads from the stockpile (the nuclear components must be removed and replaced by the telemetry equipment), cannot be conducted. DOE and national laboratory officials are not concerned about the reliability of the W88 warhead because they have collected considerable data over the past few years by testing W88 warheads that had never been placed in the stockpile. 
Because the W88 warhead is a relatively new weapon, DOE officials believe that the information from these “new material” tests provides good reliability data. Nonnuclear Systems Laboratory Tests Of the nine weapon types, only the W88 warhead is considered by DOE to be of concern in relation to nonnuclear systems laboratory tests. These tests involve testing the nonnuclear systems—such as the radar systems and fuzes—in the weapon to detect defects due to handling, aging, manufacturing, or design. DOE officials said the Department should have conducted about 28 laboratory tests, but over the past 4 years, only 15 tests (about 54 percent) were performed. According to DOE and national laboratory officials, the tests were not conducted because of the absence of an approved safety study at Pantex. DOE officials said that in this case, the lack of testing reduces their confidence in the weapon’s reported reliability. DOE officials told us that they could not quantify the decrease in confidence. Laboratory Tests of Nuclear and Nonnuclear Components From the weapons selected for testing each year, one weapon of each type is selected to have individual nuclear and nonnuclear components destructively tested. Although many other components are tested (such as cables and electrical components), according to DOE officials, the five key components tested are the pit, the secondary, the detonator assembly, the gas transfer system, and the high explosive. Testing of four of these key components has been behind schedule in recent years. Only the high explosives tests have been conducted on schedule. The pit is a part of the nuclear package that, until 1989, was manufactured and tested at DOE’s Rocky Flats facility in Colorado. According to DOE officials, the Department ideally tests one pit per year per weapon type. In December 1989, the Rocky Flats facility ceased production operations.
At first, DOE believed that Rocky Flats would reopen; however, in 1992 DOE decided to move pit tests to the Los Alamos National Laboratory. This lapse created a backlog of up to 4 to 5 years, but testing is currently nearly back on schedule. The secondary is tested at DOE’s Y12 facility in Oak Ridge, Tennessee. Ideally, one secondary should be tested per weapon type per year. Few have been tested since September 1994, when Y12 was placed in a “stand-down” mode because of problems related to safety procedures that had been noted by the Defense Nuclear Facilities Safety Board. Most of these problems did not involve unsafe conditions, but were related to not following approved procedures. According to DOE officials, a 1-year backlog of secondaries to be tested currently exists. DOE’s Mound facility in Ohio tested detonator sets through 1994. At that time, responsibility for testing detonator sets was moved to DOE’s Los Alamos and Lawrence Livermore Laboratories. Ideally, DOE tests two to five detonator sets per year per weapon type. Los Alamos began testing in June 1996, and Lawrence Livermore is scheduled to begin testing later this year. In the meantime, a 1-1/2-year backlog of detonator sets to be tested exists. DOE’s Mound facility also tested gas transfer systems through 1994. Ideally, one or two gas transfer systems are tested per weapon type per year. Responsibility for testing gas transfer systems was moved to DOE’s Savannah River facility in South Carolina. Savannah River began testing some gas transfer systems earlier this year, but a 1-1/2-year backlog currently exists. According to DOE officials, the lack of nuclear component testing decreases DOE’s confidence in the reliability assessments of the weapons in the nuclear stockpile. DOE officials said that they could not estimate the degree to which confidence in the reliability assessments of the weapons had decreased because of the backlogs in nuclear components laboratory tests. 
However, the officials said that the confidence had not diminished to a point of concern. The officials explained that pits, secondaries, detonator assemblies, and gas transfer systems are long-lived items, and generally, testing could be suspended for 3 years without confidence diminishing to a point of concern. DOE’s Ability to Conduct Some Future Tests Is Uncertain DOE has taken actions to increase the number of stockpile surveillance tests but has not prepared detailed plans for returning the stockpile surveillance program to its schedule. Without such plans, it is difficult for us to assess the likelihood that stockpile surveillance tests will return to the schedule. Furthermore, we believe that issues and factors such as the availability of test missile launches, expiration of approved safety studies, or cessation of operations at test facilities could have an adverse effect on DOE’s future ability to remain on schedule. Flight Tests and Nonnuclear Systems Laboratory Tests For most weapon types, DOE has taken actions that may return flight tests and nonnuclear systems laboratory tests to the schedule in the short term. However, in the longer term, implementation of the START I and START II treaties, the availability of telemetry packages used in flight testing, and the expiration of safety studies could cause these testing programs to fall behind schedule. Based on the Air Force’s agreement to provide for sufficient flight tests on test missiles, DOE estimates that W78 warhead flight tests will be back on schedule by the end of fiscal year 1996. The W62 warhead is behind schedule for flight testing because DOE could not deliver the test warheads to the Air Force in time for the tests. DOE officials told us that this should not recur, but, as discussed later, DOE may not be able to maintain the W62 warhead flight test schedule in the long term because of limited inventories of testing equipment. 
The safety study that caused delays in the W88 warhead testing has been approved, and both flight testing and nonnuclear systems laboratory tests have resumed. To get flight tests back on schedule, DOE plans to conduct six flight tests in fiscal year 1996 (as of July 1996, DOE had conducted three telemetry and one nontelemetry test during fiscal year 1996), four in fiscal year 1997, three in fiscal year 1998, and three in fiscal year 1999. To get nonnuclear systems laboratory tests back on schedule, DOE plans to consolidate 3 years of testing into 2 years. DOE estimates that flight tests will be back on schedule sometime during fiscal year 1999 and nonnuclear systems laboratory tests will be back on schedule in fiscal year 1998. In the longer term, tests of the W78 warhead—as well as the W62 and W87 warheads—could be a problem. DOE officials told us that when the START I and START II treaties are fully implemented, the Air Force may be limited in its ability to conduct flight tests. Air Force officials confirmed that providing for future InterContinental Ballistic Missile flight tests may be difficult because of limitations imposed by the START treaties. These treaties require a transition from Multiple Independent Reentry Vehicles to Single Reentry Vehicles. Until recently, multiple flight tests were routinely conducted on one missile firing. After the treaties are fully implemented, only one test warhead per missile will be allowed. A reduction in the number of warhead tests per flight reduces the overall number of tests that can be conducted because the number of missiles available for testing purposes is limited. Future flight tests of the W62 warhead could also be limited by a lack of telemetry packages. Initially, DOE had enough telemetry packages to test this warhead during its projected life. However, retirement of this warhead has been delayed, and DOE is running out of telemetry packages. Also, the company that produced the package has gone out of business.
DOE is studying the possibility of using parts from W68 warhead telemetry packages (the W68 has been retired, but telemetry parts remain that may be recertified for use in the W62) to increase the number of telemetry packages available. If this is done, DOE could test W62 warheads for 4 years at the rate of two per year. DOE officials told us that a decision will be required in 1998 to determine if this warhead will remain in the stockpile long enough to make the redesign and purchase of new telemetry packages worthwhile. Finally, while the W88 warhead safety study has been approved, the expiration of other approved safety studies at Pantex could affect DOE’s ability to conduct stockpile surveillance tests in the future. To conduct any of the three major types of stockpile surveillance tests, Pantex must be able to inspect the weapon, disassemble the weapon, reassemble the weapon, and replace the nuclear package with telemetry for the flight test. Without a valid safety study for each weapon type, Pantex cannot conduct any of these operations. The safety studies are valid for 5 years, and an extension can be granted for an additional 5 years. The safety study for the W78 warhead expired in April 1995 but has since been revalidated. Safety studies for the W87 warhead and the B83 bomb will expire within the next year. DOE does not anticipate a problem, as revalidation of the studies is scheduled to occur before the old studies expire. Laboratory Tests of Key Components Although DOE has no formal written plans specifically for returning laboratory tests of key components to the schedule, DOE officials told us that activities have been undertaken and progress is being made toward eliminating the backlog of tests. Table 2 shows the type of component, the number of tests normally conducted for each component, and the approximate number of components in the backlog as of July 1996. Pit testing began at Los Alamos in fiscal year 1993.
DOE officials said that, by conducting 19 tests per year, the 4- to 5-year backlog that once existed will be eliminated by the end of this fiscal year. DOE officials also told us that about 10 pit tests were “written off.” This means that DOE determined that it was not necessary to conduct the tests because sufficient past data existed or because testing one or two pits out of a backlog of three or four for a specific weapon would, in its opinion, provide sufficient data. Regarding the secondary tests, Y12 is still in a stand-down mode. Tests of seven secondaries are currently being conducted under “special operations.” Special operations are defined as discrete activities or operations that can be performed before resuming normal activities within a nuclear facility. Completion of these seven tests is scheduled before the end of fiscal year 1996. DOE also is in the process of testing three secondaries at Los Alamos. (Los Alamos has the capability to test secondaries in very limited numbers.) However, about a 1-1/2-year backlog still exists. DOE plans to conduct a readiness assessment for restarting normal operations by October 1, 1996. Currently, DOE is considering conducting 15 tests of secondaries (at least one of each type of weapon in the active stockpile) during fiscal year 1997. This would put secondary testing back on schedule. Detonator set testing began at Los Alamos in June 1996 and will begin at the Lawrence Livermore Laboratory later this year. DOE plans to eliminate the backlog by the end of fiscal year 1997. This should not involve overtime or reallocation of resources. DOE officials explained that once the laboratories are set up to test the detonator sets, doing additional tests will require very little extra time. Gas transfer system testing began at Savannah River in 1996. Savannah River will use a phased approach to eliminate the backlog of tests one weapon type at a time.
As a result, some weapon types will be back on schedule within a year while others will fall further behind. DOE officials believe that the Department will eliminate all backlogs sometime during fiscal year 2000. DOE does not have formal written plans describing how it will return component laboratory tests to the schedule. DOE officials at the Albuquerque Operations Office informed us that in the case of gas transfer systems and detonators, the Activity Transfer Plan prepared when testing responsibility was transferred from the Mound facility establishes the testing capability at the new locations. Beyond the plan, however, planning for reducing the backlogs and returning to the testing schedule is done informally. Officials representing all organizations involved in the testing meet periodically to resolve problems affecting the testing program. In this manner, DOE officials said that they reach agreement on what to do and how to do it. However, without formal documents detailing testing plans, costs, and schedules, it is difficult—if not impossible—to review the plans and assess their adequacy, determine the cost-effectiveness of the plans, or measure progress the test facilities are making. DOE Does Not Have Contingency Plans for Stockpile Testing In the past, DOE had more facilities and more alternatives for shifting functions and operations. However, in DOE’s current nuclear complex, if a particular facility cannot perform testing for an extended period, there is little redundant capability for stockpile surveillance testing. Without redundancy, planning for continued testing operations in the event of problems at one or more of the existing facilities takes on added importance. However, DOE does not have formal contingency plans for continuation of stockpile surveillance tests in the event that one or more of the testing facilities experienced serious operational problems and could not perform testing for an extended period of time. 
In the past, several facilities have been unable to conduct testing for extended periods of time. Most recently, as discussed previously, Y12 was unable to conduct surveillance tests because of procedural safety problems. When the stand-down occurred, DOE did not have a plan that established how or where surveillance tests should or could be resumed. As a result, secondary testing was halted until special operations began earlier this year at Y12, and DOE decided to test several secondaries at Los Alamos. In the meantime, a backlog of secondaries accumulated. Perhaps the most drastic example occurred when operations at Rocky Flats ceased in 1989. No contingency plan for testing existed, and in the time it took to make a decision on where testing should be conducted and complete the transfer arrangements, a 4- to 5-year backlog of pits waiting to be tested accumulated. DOE has a draft report that discusses alternate locations for conducting weapons-related activities. For example, DOE’s draft Stockpile Management Preferred Alternatives Report shows that for detonator-related functions, Los Alamos would be the alternative. DOE officials indicated, however, that this does not mean that these locations have surveillance testing capability available, although the facility or operations at the facility could possibly be modified to perform the function. In the event of a disruption of operations at a facility that would preclude testing, DOE officials said that they would use the Stockpile Management Preferred Alternatives Report to devise a specific plan. Depending on the nature of the problem at the original facility, the length and nature of the outage, and the specific weapon(s) involved, DOE would determine the best course of action. 
That course of action could be to (1) wait for the problem to be fixed at the site and resume normal operations at the original facility, (2) conduct operations at the original facility under special operations, or (3) alter an existing facility to assume surveillance operations. DOE officials said that they believe that developing a specific plan after the problems occur is the best course of action because of the wide range of problems that could occur and the variables related to outage length and potential remedies. Conclusions Confidence that the nation’s nuclear weapons are reliable is taking on added importance because these weapons are aging, and no new weapons are being produced to replace the existing weapons. As a result, the stockpile surveillance program’s role in assessing weapons’ reliability and ensuring confidence in that reliability takes on increased importance. DOE’s confidence in the reliability levels assigned to some nuclear weapons has been diminished because some needed tests have not been carried out. To ensure nuclear weapons’ reliability, it is important that DOE’s stockpile surveillance program be maintained on schedule. However, without formal written plans detailing how DOE will increase the number of surveillance tests in order to return the program to its schedule, it is difficult to determine if DOE’s estimates on getting the surveillance testing back on schedule are reasonable and cost-effective. Furthermore, without contingency plans, DOE’s ability to respond to possible future major disruptions in its testing operations is uncertain. Recommendations We recommend that the Secretary of Energy direct the Assistant Secretary for Defense Programs to (1) develop detailed, written plans to restore stockpile surveillance tests to the schedule and (2) develop contingency plans for testing facilities to provide for continued testing operations in the event that a testing facility is shut down for an extended period of time.
Agency Comments and Our Evaluation We provided a draft of this report to DOE for its review and comment. We met with officials from DOE’s Office of Nuclear Weapons Management and its Albuquerque Operations Office, Weapons Quality Division, including the Director of the Office of Nuclear Weapons Management, who agreed that the report was accurate and concurred with our conclusions and recommendations. During our discussions, the DOE officials stressed that they are making every effort to get the stockpile surveillance program on schedule and, over the past year, have made much progress toward that goal. In addition, DOE officials stressed that the reliability of the nuclear weapons in the stockpile has not been adversely affected by a lack of testing. Scope and Methodology Our objectives in this review were to (1) provide information on the status of DOE’s stockpile surveillance program; (2) if the program is not on schedule, determine why it is not; and (3) provide information on the steps being taken to return the program to the schedule. To determine if DOE’s stockpile surveillance program is on schedule, we obtained statistics from DOE and DOD and compared those statistics with DOE’s test schedules. For weapon types or components that were behind schedule, we discussed with DOE and laboratory officials the reasons why they were behind schedule and the efforts being made to return to the schedule. We also discussed with DOE and DOD officials the prospects for problems in keeping future tests on schedule. We reviewed the safety study expiration and approval schedule for each weapon type and discussed with DOE officials the contingencies in the event a testing facility could not operate. We verified DOE’s statistical analysis of confidence levels and defect discovery probabilities. We conducted our review between April and July 1996 in accordance with generally accepted government auditing standards.
As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this report. At that time, we will send copies of the report to the Secretary of Energy; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report include Bernice Steinhardt, Associate Director; William F. Fenzel, Assistant Director; Kenneth E. Lightner Jr., Evaluator; William M. Seay, Evaluator; and John D. Gentry, Evaluator. Victor S. Rezendes, Director, Energy, Resources, and Science Issues
Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) Nuclear Weapons Stockpile Surveillance program, focusing on DOE efforts to get the testing program on schedule. GAO found that: (1) DOE is behind schedule in conducting flight tests, nonnuclear system laboratory tests, and nuclear and nonnuclear component laboratory tests; (2) these schedule slippages are the result of an unapproved safety study, suspended testing at certain nuclear facilities, and the transfer of testing functions to new facilities; (3) DOE has reduced its plan for testing the Air Force's intercontinental ballistic missiles from three tests per year to two tests per year; (4) flight testing of W88 warheads was suspended for more than a year until the required safety study was approved; (5) DOE has taken actions to increase the number of stockpile surveillance tests, but DOE does not have formal contingency plans for continued stockpile testing; (6) one DOE facility is unable to conduct surveillance tests due to procedural safety problems; (7) DOE would use the Stockpile Management Preferred Alternatives Report to determine alternative locations for weapons testing, depending on the nature of the problem at the original testing facility, the length and nature of the outage, and the particular weapon involved; and (8) DOE prefers to develop a specific plan of action after testing problems occur to confront the wide range of problems and variables involved in the surveillance testing process.
Background VHA Central Office has responsibility for monitoring and overseeing both VISN and medical facility operations, including security precautions. Day-to-day management of medical facilities, including residential and mental health treatment units, is the responsibility of the VISNs. Residential Programs VA has 237 residential programs at 104 of its medical facilities. These programs provide residential rehabilitative and clinical care to veterans with a range of mental health conditions, including those diagnosed with post-traumatic stress disorder and substance abuse. VA operates three types of residential programs in selected medical facilities throughout its health care system: Residential rehabilitation treatment programs (RRTP). These programs provide intensive rehabilitation and treatment services for a range of mental health conditions in a 24 hours per day, 7 days a week structured residential environment at a VA medical facility. Domiciliary programs. In its domiciliaries, VA provides 24 hours per day, 7 days a week, structured and supportive residential environments, housing, and clinical treatment to veterans. Domiciliary programs may also contain specialized treatment programs for certain mental health conditions. Compensated work therapy/transitional residence (CWT/TR) programs. These programs are the least intensive residential programs and provide veterans with community-based housing and therapeutic work-based rehabilitation services designed to facilitate successful community reintegration. Inpatient Mental Health Units Most (111) of VA’s 153 medical facilities have at least one inpatient mental health unit for patients with acute mental health needs. These units are generally a locked unit or floor within each medical facility, and the size of these units varies throughout VA.
Care on these units is provided 24 hours per day, 7 days a week, and consists of intensive psychiatric treatment designed to stabilize veterans and transition them to less intensive levels of care, such as RRTPs and domiciliary programs. Inpatient mental health units are required to comply with VHA’s Mental Health Environment of Care Checklist, which specifies several safety requirements for these units, including several security precautions, such as the use of panic alarm systems and the security of nursing stations within these units. VA’s Two Reporting Streams for Safety Incidents Safety incidents, including sexual assaults, may be reported to senior leadership as part of two different streams—a management stream and a law enforcement stream. The management reporting stream—which includes reporting responsibilities at the VA medical facility, VISN, and VHA Central Office levels—is intended to help ensure that incidents are identified and documented for leadership’s attention. In contrast, the purpose of the law enforcement stream is to document incidents that may involve criminal acts so they can be investigated and prosecuted, if appropriate. VHA policies outline what information staff must report for each stream and define some mechanisms for this reporting, but medical facilities have the flexibility to customize and design their own site-specific reporting systems and policies that fit within the broad context of these requirements. (Fig. 1 summarizes the major steps involved in each stream.) Management reporting stream. Reporting responsibilities at each level for this stream are as follows. Local VA medical facilities. Local incident reporting is typically handled through a variety of electronic facility-based systems. The first staff member who observes or is notified of an incident initiates the process by completing an incident report in the medical facility’s electronic reporting system; the report is then reviewed by the medical facility’s quality manager. 
VA medical facility leadership is then notified, and is responsible for reporting serious incidents to the VISN. VISNs. VA medical facilities can report serious incidents to their VISN through two mechanisms—issue briefs that document specific factual information and “heads up” messages that allow medical facility leadership to provide a brief synopsis of the issue while facts are being gathered for documentation in an issue brief. VISN offices are typically responsible for direct reporting to the VHA Central Office. VHA Central Office. VISNs typically report all serious incidents to the VHA Office of the Deputy Under Secretary for Health for Operations and Management, which then communicates relevant incidents to other VHA offices, including the Office of the Principal Deputy Under Secretary for Health, through an e-mail distribution list. Law enforcement reporting stream. Responsibilities at each level are described below. Local VA police. Most VA medical facilities have a cadre of VA police officers, who are federal law enforcement officers charged with protecting the medical facility by responding to and investigating potentially criminal activities. Local policies typically require medical facility staff to notify the medical facility’s VA police of incidents that may involve criminal acts, such as sexual assaults. VA medical facility police also often notify and coordinate with local area police departments and the VA OIG when criminal activities or potential security threats occur. VA’s OSLE. This office is the department-level VA office responsible for developing policies and procedures for VA’s law enforcement programs at local VA medical facilities. VA OSLE receives reports of incidents at VA medical facilities through its centralized police reporting system. Additionally, local VA police are required to immediately notify VA OSLE of serious incidents, including reports of rape and aggravated assaults. VA’s Integrated Operations Center (IOC). 
The IOC, established in April 2010, serves as the department’s centralized location for integrated planning and data analysis on serious incidents. Serious incidents on VA property are reported to the IOC either by local VA police or the VHA Office of the Deputy Under Secretary for Health for Operations and Management. The IOC then presents information on serious incidents to VA senior leadership officials through daily reports and, in some cases, to the Secretary through serious incident reports. VA OIG. Federal regulation requires that all potential felonies, including rape allegations, be reported to VA OIG investigators. VHA policy reiterates this by specifying that the OIG must be notified of sexual assault incidents when the crime occurs on VA premises or is committed by VA employees. Typically, either the medical facility’s leadership team or VA police are responsible for reporting potential felonies to the VA OIG. Once a case is reported, VA OIG investigators can be the lead agency on the case or advise local VA police or other law enforcement agencies conducting the investigation. Nearly 300 Sexual Assault Incidents Reported to VA Police, but Many Were Not Reported to VHA or the VA OIG We found that there were nearly 300 sexual assault incidents reported to the VA police from January 2007 through July 2010—including alleged incidents that involved rape, inappropriate touching, forceful medical examinations, forced or inappropriate oral sex, and other types of sexual assault incidents. Many of these sexual assault incidents were not reported to officials within the management reporting stream and to the VA OIG. Nearly 300 Sexual Assault Incidents Reported to VA Police From January 2007 Through July 2010 We analyzed VA’s national police files from January 2007 through July 2010 and identified 284 sexual assault incidents reported to VA police during that period. 
These cases included incidents alleging rape, inappropriate touching, forceful medical examinations, oral sex, and other types of sexual assaults (see table 1). However, it is important to note that not all sexual assault incidents reported to VA police are substantiated. A case may remain unsubstantiated because an assault did not actually take place, the victim chose not to pursue the case, or there was insufficient evidence to substantiate the case. Because our review included both open and closed VA police sexual assault incident investigations, we could not determine the final disposition of these incidents. In analyzing these 284 cases, we observed the following: Overall, the sexual assault incidents described above included several types of alleged perpetrators, including employees, patients, visitors, outsiders not affiliated with VA, and persons of unknown affiliation. In the reports we analyzed, there were allegations of 89 patient-on-patient sexual assaults, 85 patient-on-employee sexual assaults, 46 employee-on-patient sexual assaults, 28 unknown affiliation-on-patient sexual assaults, and 15 employee-on-employee sexual assaults. Regarding gender of alleged perpetrators, we also observed that of the 89 patient-on-patient sexual assault incidents, 46 involved allegations of male perpetrators assaulting female patients, 42 involved allegations of male perpetrators assaulting male patients, and 1 involved an allegation of a female perpetrator assaulting a male patient. Of the 85 patient-on-employee sexual assault incidents, 83 involved allegations of male perpetrators assaulting female employees and 2 involved allegations of male perpetrators assaulting male employees. Sexual Assault Incidents Are Underreported to VISNs, VHA Central Office, and the VA OIG VISN and VHA Central Office officials did not receive reports of all sexual assault incidents reported to VA police in VA medical facilities within the four VISNs we reviewed. 
In addition, the VA OIG did not receive reports of all sexual assault incidents that were potential felonies as required by VA regulation, specifically those involving rape allegations. VISNs and VHA Central Office Receive Limited Information on Sexual Assault Incidents VISNs and VHA Central Office leadership officials are not fully aware of many sexual assaults reported at VA medical facilities. For the four VISNs we spoke with, we examined all documented incidents reported to VA police from medical facilities within each network and compared these reports with the issue briefs received through the management reporting stream by VISN officials. Based on this analysis, we determined that VISN officials in these four networks were not informed of most sexual assault incidents that occurred within their network medical facilities. Moreover, we found that one VISN did not report any of the cases it received to VHA Central Office. (See table 2.) To examine whether VA medical facilities were accurately reporting sexual assault incidents involving rape allegations to the VA OIG, we reviewed the 67 rape allegations reported to the VA police from January 2007 through July 2010 and compared these cases with all investigation documentation provided by the VA OIG for the same period. We found no evidence that about two-thirds (42) of these rape allegations had been reported to the VA OIG. The remaining 25 had matching VA OIG investigation documentation, indicating that they were correctly reported to both the VA police and the VA OIG. By regulation, VA requires that: (1) all criminal matters involving felonies that occur in VA medical facilities be immediately referred to the VA OIG and (2) responsibility for the prompt referral of any possible criminal matters involving felonies lies with VA management officials when they are informed of such matters. 
This regulation includes rape in the list of felonies provided as examples and also requires VA medical facilities to report other sexual assault incidents that meet the criteria for felonies to the VA OIG. However, the regulation does not include criteria for how VA medical facilities and management officials should determine whether or not a criminal matter meets the felony reporting threshold. We found that all 67 of these rape allegations were potential felonies because, if substantiated, sexual assault incidents involving rape fall within federal sexual offenses that are punishable by imprisonment of more than 1 year. In addition, we provided the VA OIG the opportunity to review summaries of the 42 rape allegations we could not confirm were reported to them by the VA police. To conduct this review, several VA OIG senior-level investigators determined whether or not each of these rape allegations should have been reported to them based on what a reasonable law enforcement officer would consider a felony. According to these investigators, a reasonable law enforcement officer would look for several elements to make this determination, including (1) an identifiable and reasonable suspect, (2) observations by a witness, (3) physical evidence, or (4) an allegation that appeared credible. These investigators based their determinations on their experience as federal law enforcement agents. Following their review, these investigators also found that several of these rape allegations were not appropriately reported to the VA OIG as required by federal regulation. Specifically, the VA OIG investigators reported that they would have expected about one-third (33 percent) of the 42 rape allegations to have been reported to them based on the incident summary containing information on these four elements. 
The investigators noted that they would not have expected approximately 55 percent of the 42 rape allegations to have been reported to them due to either the incident summary failing to contain these same four elements or the presence of inconsistent statements made by the alleged victims. For the remaining approximately 12 percent, the investigators noted that the need for notification was unclear because there was not enough information in the incident summary to make a determination about whether or not the rape allegation should have been reported to the VA OIG. VHA Guidance and Oversight Weaknesses May Contribute to the Underreporting of Sexual Assault Incidents Several factors may contribute to the underreporting of sexual assault incidents to VISNs, VHA Central Office, and the VA OIG—including VHA’s lack of a consistent sexual assault definition for reporting purposes; limited and unclear expectations for sexual assault incident reporting at the VHA Central Office, VISN, and VA medical facility levels; and deficiencies in VHA Central Office oversight of sexual assault incidents. VHA Does Not Have a Consistent Sexual Assault Definition for Reporting Purposes VHA leadership officials may not receive reports of all sexual assault incidents that occur at VA medical facilities because there is no VHA-wide definition of sexual assault used for incident reporting. We found that VHA lacks a consistent definition for the reporting of sexual assault through the management reporting stream at the medical facility, VISN, and VHA Central Office levels. At the medical facility level, we found that the medical facilities we visited had a variety of definitions of sexual assault targeted primarily to the assessment and management of victims of recent sexual assaults. 
Specifically, facilities varied in the level of detail provided by their policies, ranging from one facility that did not include a definition of sexual assault in its policy at all to another facility with a policy that included a detailed definition. At the VISN level, officials with whom we spoke in the four networks said they did not have definitions of sexual assault in VISN policies. Finally, while VHA Central Office does have a policy for the clinical management of sexual assaults, this policy is targeted to the treatment of victims assaulted within 72 hours and does not include sexual assault incidents that occur outside of this time frame. In addition, no definition of sexual assault is included in VHA Central Office reporting guidance. VHA Central Office, VISNs, and VA Medical Facilities’ Expectations for Reporting Are Limited and Unclear In addition to failing to provide a consistent definition of sexual assault for incident reporting, VHA also does not have clearly documented expectations about the types of sexual assault incidents that should be reported to officials at each level of the organization, which may also contribute to the underreporting of sexual assault incidents. Without clear expectations for incident reporting there is no assurance that all sexual assault incidents are appropriately reported to officials at the VHA Central Office, VISN, and local medical facility levels. We found that expectations were not always clearly documented, resulting in either the underreporting of some sexual assault incidents or communication breakdowns at all levels. VHA Central Office. An official from VHA’s Office of the Deputy Under Secretary for Health for Operations and Management told us that this office’s expectations for reporting sexual assault incidents were documented in its guidance for the submission of issue briefs. 
However, we found that this guidance does not specifically reference reporting requirements for any type of sexual assault incidents. As a result, VISNs we reviewed did not consistently report sexual assault incidents to VHA Central Office. VISNs. The four VISNs we reviewed did not include in their reporting guidance detailed expectations about whether sexual assault incidents should be reported to them, potentially resulting in medical facilities failing to report some incidents. For example, officials from one VISN told us they expect to be informed of all sexual assault incidents occurring in medical facilities within their network, but this expectation was not explicitly documented in their policy. We found several reported allegations of sexual assault incidents in medical facilities in this VISN—including three allegations of rape and one allegation of inappropriate oral sex—that were not forwarded to VISN officials. VA medical facilities. At the medical facility level, we also found that reporting expectations may be unclear. In particular, we identified cases in which the VA police had not been informed of incidents that were reported to medical facility staff. For example, we identified VA police files from one facility we visited where officers noted that the alleged perpetrator had been previously involved in other sexual assault incidents that were not reported to the VA police by medical facility staff. In these police files, officers noted that staff working in the alleged perpetrators’ units had not reported the previous incidents because they believed these behaviors were a manifestation of the veterans’ clinical condition. In addition, at this same medical facility, quality management staff identified five sexual assault incidents that had not been reported to VA police at the medical facility, despite these incidents being reported to their office. 
Oversight Deficiencies at VHA Central Office Contribute to the Underreporting of Sexual Assault Incidents We found weaknesses both in the way sexual assault incidents are communicated to VHA Central Office and in the way that information about such incidents is collected and analyzed for oversight purposes. Poor Communication About Sexual Assault Incidents Resulted in Incomplete Reporting Within VHA Central Office Currently, VHA Central Office relies primarily on e-mail messages to transfer information about sexual assault incidents among its offices and staff. (See fig. 2.) Under this system, VHA Central Office is notified of sexual assault incidents through issue briefs submitted by VISNs via e-mail to the VHA Office of the Deputy Under Secretary for Health for Operations and Management. Following review, the Director for Network Support forwards issue briefs to the Office of the Principal Deputy Under Secretary for Health for distribution to other VHA offices on a case-by-case basis, including the program offices responsible for residential programs and inpatient mental health units. Program offices are sometimes asked to follow up on incidents in their area of responsibility. We found that this system did not effectively communicate information about sexual assault incidents to the VHA Central Office officials who have programmatic responsibility for the locations in which these incidents occurred. For example, VHA program officials responsible for both residential programs and inpatient mental health units reported that they do not receive regular reports of sexual assault incidents that occur within their programs or units at VA medical facilities and were not aware of any incidents that had occurred in these programs or units. 
However, during our review of VA police files, we identified at least 18 sexual assault incidents that occurred from January 2007 through July 2010 in the residential programs or inpatient mental health units of the five VA medical facilities we reviewed. If the management reporting stream were functioning properly, these program officials should have been notified of these incidents and any others that occurred in other VA medical facilities’ residential programs and inpatient mental health units. Without the regular exchange of information regarding sexual assault incidents that occur within their areas of programmatic responsibility, VHA program officials cannot effectively address the risks of such incidents in their programs and units and do not have the opportunity to identify ways to prevent incidents from occurring in the future. In early 2011, VHA leadership officials told us that initial efforts, including sharing information about sexual assault incidents with the Women Veterans Health Strategic Health Care Group and VHA program offices, were underway to improve how information on sexual assault incidents is communicated to program officials. However, these improvements have not been formalized within VHA or published in guidance or policies and are currently being performed on an informal ad hoc basis only, according to VHA officials. VHA Does Not Systematically Monitor and Track Sexual Assault Incidents In addition to deficiencies in information sharing, we also identified deficiencies in the monitoring of sexual assault incidents within VHA Central Office. VHA’s Office of the Deputy Under Secretary for Health for Operations and Management, the first VHA office to receive all issue briefs related to sexual assault incidents, does not currently have a system that allows VHA Central Office staff to systematically collect or analyze reports of sexual assault incidents received from VA medical facilities through the management reporting stream. 
Specifically, we found that this office does not have a central database to store the issue briefs that it receives and instead relies on individual staff to save issue briefs submitted to them by e-mail to electronic folders for each VISN. In addition, officials within this office said they do not know the total number of issue briefs submitted for sexual assault incidents because they do not have access to all former staff members’ files. As a result of these issues, staff from the Office of the Deputy Under Secretary for Health for Operations and Management could not provide us with a complete set of issue briefs on sexual assault incidents that occurred in all VA medical facilities without first contacting VISN officials to resubmit these issue briefs. Such a limited archive system for reports of sexual assault incidents received through the management reporting stream results in VHA’s inability to track and trend sexual assault incidents over time. While VHA has, through its National Center for Patient Safety (NCPS), developed systems for routinely monitoring and tracking patient safety incidents that occur in VA medical facilities, these systems do not monitor sexual assaults and other safety incidents. Without a system to track and trend sexual assaults and other safety incidents, VHA Central Office cannot identify and make changes to serious problems that jeopardize the safety of veterans in their medical facilities. Serious Weaknesses Observed in Several Types of Physical Security Precautions Used in Selected Medical Facilities Physical precautions in the residential programs and inpatient mental health units at the medical facilities we visited included monitoring precautions used to observe patients, security precautions used to physically secure facilities and alert staff of problems, and staff awareness and preparedness precautions used to educate staff about security issues and provide police assistance. 
However, we found serious deficiencies in the use and implementation of certain physical security precautions at these facilities, including alarm system malfunctions and inadequate monitoring of security cameras. Several Types of Physical Security Precautions Are in Place in Selected Medical Facilities VA medical facilities we visited used a variety of physical security precautions to prevent safety incidents in their residential programs and inpatient mental health units. Typically, medical facilities had discretion to implement these precautions based on their own needs within broad VA guidelines. In general, physical security precautions were used as a measure to prevent a broad range of safety incidents, including sexual assaults. We classified these precautions into three broad categories: monitoring precautions, security precautions, and staff awareness and preparedness precautions. (See table 3.) Monitoring precautions. These measures were designed to observe and track patients and activities in residential and inpatient settings. For example, at some VA medical facilities we visited, closed-circuit surveillance cameras were installed to allow VA staff to monitor areas and to help detect potentially threatening behavior or safety incidents as they occur. Cameras were also used to passively document any incidents that occurred. Security precautions. These precautions were designed to maintain a secure environment for patients and staff within residential programs and inpatient mental health units and allow staff to call for help in case of any problems. For example, the units we visited regularly used locks and alarms at entrance and exit access points, as well as locks and alarms for some patient bedrooms. Another security precaution we observed was the use of stationary, computer-based, and portable personal panic alarms for staff. Staff awareness and preparedness precautions. 
These measures were designed to educate and prepare residential program and inpatient mental health unit staff to deal with security issues and to provide police support and assistance when needed. For example, there was a regular VA police presence within some residential programs we visited. Also, all medical facilities we visited had a functioning police command and control center, which program staff could contact for police support when needed. Significant Weaknesses Existed in the Use and Implementation of Certain Physical Security Precautions at Selected VA Medical Facilities While security precautions have been established in most cases to prevent patient safety incidents, including sexual assaults, these precautions had not been effectively implemented by VA medical facility staff in the five facilities we visited. During our review of the physical security precautions in use at the five VA medical facilities we visited, we observed seven weaknesses in these three categories. (See table 4.) Inadequate monitoring of closed-circuit surveillance cameras. We observed that VA staff in the police command and control center were not continuously monitoring closed-circuit surveillance cameras at all five of the VA medical facilities we visited. For example, at one medical facility, the system used by the residential programs at that medical facility could not be monitored by the police command and control center staff because it was incompatible with systems installed in other parts of the medical facility. According to VA police at this medical facility, the residential program staff did not consult with VA police before installing their own system. At another medical facility, where staff in the police office monitor cameras covering the residential programs’ grounds and parking area, we found that the police office was unattended part of the time. 
In addition, at the remaining three medical facilities we visited, staff in the police command and control centers assigned to monitor medical facility surveillance cameras had other duties, such as serving as telephone operators and police/emergency dispatchers. These other duties sometimes prevented them from continuously monitoring the camera feeds in the police command and control center. Although effective use of surveillance camera systems cannot necessarily prevent safety incidents from occurring, lapses in monitoring by security staff compromise the effectiveness of these systems. Alarm malfunctions. At least one form of alarm failed to work properly when tested at four of the five medical facilities we visited. For example, at one medical facility, we tested the portable personal panic alarms used by residential program staff and found that the police command and control center could not accurately pinpoint the location of the tester when an alarm was activated outside the building. At another medical facility that used stationary panic alarms in inpatient mental health units, residential programs, and other clinical settings, almost 20 percent of these alarms throughout the medical facility were inoperable. At an inpatient mental health unit in a third medical facility, three of the computer-based panic alarms we tested failed to properly pinpoint the location of our tester because the medical facility’s computers had been moved to different locations and were not properly reconfigured. Finally, at a fourth medical facility, alarms we tested in the inpatient mental health unit sounded properly, but staff in the unit and VA police responsible for testing these alarms did not know how to turn them off after they were activated. In each of the cases where alarms malfunctioned, VA staff were not aware the alarms were not functioning properly until we informed them. Inadequate documentation or review of alarm system testing. 
One of the five sites we visited failed to properly document tests of the alarm systems for its residential programs, although testing of alarms is a required element in VA’s Environment of Care Checklist. Testing of alarm systems is important to ensure that systems function properly, and not having complete documentation of alarm system testing is an indication that periodic testing may not be occurring. In addition, three medical facilities reported using computer-based panic alarms that are designed to be self-monitoring to identify cases where computers equipped with the system fail to connect with the servers monitoring the alarms. Officials at all three of these medical facilities stated that due to the self-monitoring nature of these alarms, they did not maintain alarm test logs of these systems. However, we found that at two of these three medical facilities, these alarms failed to properly alert VA police when tested. Such alarm system failures indicate that the self-monitoring systems may not be effectively alerting medical facility staff of alarm malfunctions when they occur, indicating the need for these systems to be periodically tested. Alarms failed to alert both police and unit staff. In inpatient mental health units at all five medical facilities we visited, stationary and computer-based panic alarm systems we tested did not alert staff in both the VA police command and control center and the inpatient mental health unit where the alarm was triggered. Alerting both locations is important to better ensure that timely and proper assistance is provided. At four of these medical facilities, the inpatient mental health units’ stationary or computer-based panic alarms notified the police command and control centers but not staff at the nursing stations of the units where the alarms originated. 
At the fifth medical facility, the stationary panic alarms only notified staff in the unit nursing station, making it necessary to separately notify the VA police. Finally, none of the stationary or computer-based panic alarms used by residential programs notified both the police command and control centers and staff within the residential program buildings when tested. Limited use of portable personal panic alarms. Electronic portable personal panic alarms were not available for the staff at any of the inpatient mental health units we visited and were available to staff at only one residential program we reviewed. In two of the inpatient mental health units we visited, staff were given safety whistles they could use to signal others in cases of emergency, personal distress, or concern about veteran or staff safety. However, relying on whistles to signal such incidents may not be effective, especially when staff members are the victims of assault. For example, a nurse at one medical facility we visited was involved in an incident in which a patient grabbed her by the throat and she was unable to use her whistle to summon assistance. Some inpatient mental health unit staff with whom we spoke indicated an interest in having portable personal panic alarms to better protect them in similar situations. VA police staffing and workload challenges. At most medical facilities we visited, VA police forces and police command and control centers were understaffed, according to medical facility officials. For example, during our visit to one medical facility, VA police officials reported being able to staff just two officers per 12-hour shift to patrol and respond to incidents at both the medical facility and at a nearby 675-acre veteran’s cemetery. 
While this staffing ratio met the minimum standards for VA police staffing, having only two police officers to cover such a large area could increase response times should a panic alarm activate or another security incident occur on medical facility grounds. We also found that this medical facility had too few officers and staff to effectively police its grounds and maintain a productive police force. The medical facility had a total of 9 police officers at the time of our visit; according to VA staffing guidance, the minimum staffing level for this medical facility should have been 19 officers. Not all medical facilities we visited had staffing problems. At one medical facility, the VA police appeared to be well staffed and were even able to designate staff to monitor off-site residential programs and community-based outpatient clinics. Lack of stakeholder involvement in unit redesign. As medical facilities undergo remodeling, it is important that stakeholders be consulted in the design process to better ensure that new or remodeled areas are both functional and safe. We found that such stakeholder involvement on remodeling projects had not occurred at one of the medical facilities we visited. At this medical facility, clinical and VA police personnel were not consulted about a redesign project for the inpatient mental health unit. The new unit initially included one nursing station that lacked barriers to prevent patient access. After the unit was reopened following the renovation, there were a number of assaults, including an incident where a veteran reached over the counter of the unit’s nursing station and physically assaulted a nurse by stabbing her in the neck, shoulder, and leg with a pen.
Had staff been consulted on the redesign of this unit, their experience managing veterans in an inpatient mental health unit environment would have been helpful in developing several safety aspects of this new unit, including the design of the nursing station. Less than a year after opening this unit, medical facility leadership called for a review of the unit’s design following several reported incidents. As a result of this review, the unit was split into two separate units with different veteran populations, an additional nursing station was installed, and changes were planned for the structure of both the original and newly created nursing stations—including the installation of a new shoulder-height plexiglass barricade on both nursing station counters. In conclusion, weaknesses exist in the reporting of sexual assault incidents and in the implementation of physical precautions used to prevent sexual assaults and other safety incidents in VA medical facilities. Medical facility staff are uncertain about what types of sexual assault incidents should be reported to VHA leadership and VA law enforcement officials, and prevention and remediation efforts are eroded when the expertise of these officials is not tapped. These officials can offer valuable suggestions for preventing and mitigating future sexual assault incidents and help address broader safety concerns through systemwide improvements across the VA health care system. Leaving reporting decisions to local VA medical facilities—rather than relying on VHA management and VA OIG officials to determine what types of incidents should be reported based on the consistent application of known criteria—increases the risk that some sexual assault incidents may go unreported.
Moreover, uncertainty about sexual assault incident reporting is compounded by VA not having: (1) established a consistent definition of sexual assault, (2) set clear expectations for the types of sexual assault incidents that should be reported to VISN and VHA Central Office leadership officials, and (3) maintained proper oversight of sexual assault incidents that occurred in VA medical facilities. Unless these three key features are in place, VHA will not be able to ensure that all sexual assault incidents are consistently reported throughout the VA health care system. Specifically, the absence of a centralized tracking system to monitor sexual assault incidents across VA medical facilities may seriously limit efforts both to prevent such incidents in the short and long term and to maintain a working knowledge of past incidents and efforts to address them when staff transitions occur. In addition, ensuring that medical facilities maintain a safe and secure environment for veterans and staff in residential programs and inpatient mental health units is critical and requires commitment from all levels of VA. Currently, the five VA medical facilities we visited are not adequately monitoring surveillance camera systems, maintaining the integrity of alarm systems, and ensuring an adequate police presence. Closer oversight by both VISNs and VHA Central Office staff is needed to provide a safe and secure environment throughout all VA medical facilities. To improve VA’s reporting and monitoring of allegations of sexual assault, we made numerous recommendations in a report that we issued last week. We recommended VA improve the reporting and monitoring of sexual assault incidents, including ensuring that a consistent definition of sexual assault is used for reporting purposes, clarifying expectations for reporting incidents to VISN and VHA leadership, and developing and implementing mechanisms for incident monitoring.
To address vulnerabilities in physical security precautions at VA medical facilities, we recommended that VA ensure that alarm systems are regularly tested and kept in working order and that coordination among stakeholders occurs for renovations to units and physical security features at VA medical facilities. In responding to a draft of the report on which this testimony is based, VA generally agreed with the report’s conclusions and concurred with our recommendations. In addition, VA provided an action plan, which described the creation of a multidisciplinary workgroup to manage the agency’s response to many of our recommendations. According to VA’s comments, this workgroup will provide the Under Secretary for Health and his deputies with monthly verbal updates on its progress, as well as an initial action plan by July 15, 2011, and a final report by September 30, 2011. Chairwoman Buerkle, Ranking Member Michaud, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions either of you or other Members of the Subcommittee may have. Contacts and Acknowledgments For further information about this testimony, please contact Randall B. Williamson at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony include Marcia A. Mann, Assistant Director; Emily Goodman; Katherine Nicole Laubacher; and Malissa G. Winograd. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
During GAO's recent work on services available for women veterans (GAO-10-287), several clinicians expressed concern about the physical safety of women housed in mental health programs at a Department of Veterans Affairs (VA) medical facility. GAO examined (1) the volume of sexual assault incidents reported in recent years and the extent to which these incidents are fully reported, (2) what factors may contribute to any observed underreporting, and (3) precautions VA facilities take to prevent sexual assaults and other safety incidents. This testimony is based on the recent GAO report "VA Health Care: Actions Needed To Prevent Sexual Assaults and Other Safety Incidents" (GAO-11-530, June 2011). For that report, GAO reviewed relevant laws, VA policies, and sexual assault incident documentation from January 2007 through July 2010. In addition, GAO visited five judgmentally selected VA medical facilities that varied in size and complexity and spoke with the four Veterans Integrated Service Networks (VISN) that oversee them. GAO found that many of the nearly 300 sexual assault incidents reported to the VA police were not reported to VA leadership officials and the VA Office of the Inspector General (OIG). Specifically, for the four VISNs GAO spoke with, VISN and Veterans Health Administration (VHA) Central Office officials did not receive reports of most sexual assault incidents reported to the VA police. Also, nearly two-thirds of sexual assault incidents involving rape allegations originating in VA facilities were not reported to the VA OIG, as required by VA regulation. GAO identified several factors that may contribute to the underreporting of sexual assault incidents. For example, VHA lacks a consistent sexual assault definition for reporting purposes and clear expectations for incident reporting across its medical facility, VISN, and VHA Central Office levels.
Furthermore, VHA Central Office lacks oversight mechanisms to monitor sexual assault incidents reported through the management reporting stream. VA medical facilities GAO visited used a variety of precautions intended to prevent sexual assaults and other safety incidents. However, GAO found some of these measures were deficient, compromising medical facilities' efforts to prevent sexual assaults and other safety incidents. For example, medical facilities used physical security precautions--such as closed-circuit surveillance cameras to actively monitor areas and locks and alarms to secure key areas. These physical precautions were intended to prevent a broad range of safety incidents, including sexual assaults. However, GAO found significant weaknesses in the implementation of these physical security precautions at the five VA medical facilities visited, including poor monitoring of surveillance cameras, alarm system malfunctions, and the failure of alarms to alert both VA police and clinical staff when triggered. Inadequate system configuration and testing procedures contributed to these weaknesses. Further, facility officials at most of the locations GAO visited said the VA police were understaffed. Such weaknesses could lead to delayed response times to incidents and seriously erode VA's efforts to prevent or mitigate sexual assaults and other safety incidents. GAO reiterated recommendations that VA improve both the reporting and monitoring of sexual assault incidents and the tools used to identify risks and address vulnerabilities at VA facilities. VA concurred with GAO's recommendations and provided an action plan to address them.
Background The likelihood of surviving colorectal cancer is greatly enhanced when the disease is detected and treated early; however, only 38 percent of colorectal cancer cases are diagnosed at an early stage, according to ACS. To facilitate early diagnosis, ACS recommends regular colorectal cancer screening for certain individuals using at least one of four key tests: FOBT, flexible sigmoidoscopy, DCBE, and colonoscopy. These tests are used to find potential signs of colorectal cancer, including polyps—abnormal growths in a person’s colon—or blood in a person’s stool. FOBT is a laboratory test used to detect blood (that is otherwise not visible) in stool samples that are collected by patients at home. During a flexible sigmoidoscopy, a physician can find and take samples of polyps in a patient’s lower colon and rectum. DCBE detects polyps by providing x-ray images of a patient’s entire rectum and colon. Finally, a colonoscopy allows a physician to find and take samples of polyps in a patient’s rectum and entire colon as well as remove most polyps found during the test. ACS, medical providers, and others have developed medical guidelines that outline the frequency at which colorectal cancer screening tests should be administered depending on an individual’s age and risk for developing the disease. For example, ACS guidelines recommend that, beginning at age 50, all average-risk individuals be screened annually using an FOBT; every 5 years using a flexible sigmoidoscopy or DCBE; or every 10 years using a colonoscopy. ACS guidelines also state that a combination of both FOBT and flexible sigmoidoscopy at the intervals indicated is the preferred screening method over either test alone, and that individuals at high or increased risk for developing the disease should be screened more frequently.
Furthermore, ACS believes that patients and providers should jointly choose the appropriate tests and testing strategy based on patient risk factors and the varying accuracy, cost, and discomfort of the tests, among other factors. ACS’s colorectal cancer screening guidelines are also similar to those endorsed by the American Gastroenterological Association and the American Medical Association. The Medicare program covers all four screening tests following guidelines similar to those developed by ACS. The United States Preventive Services Task Force (USPSTF) also strongly recommends that clinicians screen men and women 50 years of age or older for colorectal cancer. However, it specifies the frequency with which tests should be administered only for FOBT, for which annual testing is suggested. Similar to ACS, USPSTF recommends that the choice of tests and testing strategy be based on a variety of factors including patient preferences and medical contraindications. Fewer than half of individuals age 50 and over surveyed in a 2002 national study reported receiving a colorectal cancer test for screening or diagnostic purposes. FOBT was used by approximately 45 percent of respondents age 50 and over, with less than half of these respondents having had their last test within the past year. The survey also found that a sigmoidoscopy or colonoscopy test was used by just under 50 percent of respondents age 50 and over at some point in their lives. Reasons that individuals do not obtain a colorectal cancer screening test may include a lack of patient education, a general reluctance to be tested, or a physician’s lack of time to discuss or educate patients about screening. Private health insurance is offered in two primary markets—the group and individual markets. The group market includes health plans offered by employers to employees. 
An employer may provide coverage for its employees either by purchasing the coverage from a health insurer (fully insured coverage) or by funding its own health plan (self-funded coverage). Within the group market, small employers typically purchase coverage from insurers, while larger employers are more likely to self-fund their coverage. Although the federal government is a large employer—with over 2.7 million employees in 2002—it provides health coverage for its employees through health insurance carriers that participate in FEHBP. About 161 million individuals received health coverage from the group market in 2002. The individual market includes health plans sold by insurers to individuals who do not receive coverage through an employer. About 16.8 million Americans received health coverage from the individual market in 2002. Private health plans are subject to various state and federal requirements, depending upon the market segments in which they are offered and the manner in which the plans are funded. The fully insured health coverage offered by small employers is subject to state insurance requirements, which can include mandated coverage for preventive health services and other benefits. Individual market coverage purchased by individuals from insurers is also subject to state insurance requirements. The self-funded coverage typically offered by larger employers is generally not subject to state insurance regulation, but only to federal requirements, none of which are related to preventive health services. OPM is responsible for regulating, and contracting with, private health insurers to offer health benefit plans to federal employees, pursuant to the Federal Employees Health Benefits Act. While private health insurers are generally subject to the applicable laws in their respective states, by federal law, the terms of any FEHBP contract negotiated by OPM, which relate to coverage or benefits, preempt any inconsistent state or local law or regulation.
To assure a consistent set of benefits among the national plans, OPM routinely preempts state regulation, but generally does not do so for the local plans, according to an OPM official. Twenty States Had Laws That Require Private Health Insurance Plans to Cover Colorectal Cancer Screening Tests Twenty states had laws requiring private health insurance plans to cover colorectal cancer screening tests as of May 2004. In 19 of these states, the laws generally applied to group or individual health plans, and required coverage of all four tests—FOBT, flexible sigmoidoscopy, DCBE, and colonoscopy—typically consistent with ACS guidelines. The law in Wyoming was more limited in scope, applying only to group and managed care plans and not explicitly requiring coverage of each of the four screening tests according to ACS guidelines. Table 1 shows the scope of state laws and appendix II provides a detailed summary of each of the 20 state laws. Majority of Health Plans Reviewed Covered the Four Colorectal Cancer Screening Tests The majority of health insurance plans we reviewed provided coverage for the four key colorectal cancer screening tests. These included health plans that were sold to small employers and individuals in states without laws requiring colorectal cancer screening coverage, were offered by large employers across the United States, and were offered to federal employees through FEHBP. Among plans that covered fewer than four of the tests, DCBE and colonoscopy were least likely to be covered. In States Without Colorectal Cancer Screening Test Laws, Most of the Small Employer and Individual Plans Reviewed Provided Coverage for All Four Tests In 10 states without laws requiring private health insurance coverage of colorectal cancer screening tests, most of the small employer plans we reviewed—16 of 19—covered all four colorectal cancer screening tests.
The remaining 3 plans covered FOBT or FOBT and flexible sigmoidoscopy, but not DCBE or colonoscopy. Among the 14 individual plans we reviewed, 10 covered all four colorectal cancer tests for screening purposes. The remaining 4 plans did not offer screening coverage for any of the tests. (See table 2.) Most Large Employer Plans Reviewed Covered All Four Colorectal Cancer Screening Tests Twenty-four of the 35 large employer plans we reviewed, or approximately two-thirds of these plans, covered all four colorectal cancer tests for screening purposes. Seven of the 35 plans covered only one of the colorectal cancer screening tests: FOBT or flexible sigmoidoscopy. Neither DCBE nor colonoscopy was covered by any of the large employer plans that provided limited test coverage. Four of the health plans offered by the large employers did not cover any of the colorectal cancer tests for screening purposes. (See table 3.) Over Half of FEHBP Plans Covered All Four Colorectal Cancer Screening Tests Seventy-seven of the 143 FEHBP plans covered all four screening tests for colorectal cancer in 2004. Among the 17 national FEHBP plans, 12 covered all four tests, and 5 covered FOBT, flexible sigmoidoscopy, and colonoscopy, but not DCBE. (See table 4.) About 70 percent of the over 8 million FEHBP enrollees and their dependents were covered through the national plans in 2003. Among the 126 local FEHBP plans, 65 plans either provided coverage for the four colorectal cancer screening tests as confirmed through a review of their brochures or follow-up with selected plan officials, or were located in states that required this coverage. The brochures for the remaining 61 plans indicated coverage of at least FOBT and flexible sigmoidoscopy, but did not explicitly identify whether the additional tests were covered for screening purposes. According to an OPM official, plans may cover tests that are not explicitly referenced in the brochures. 
We contacted 8 local plans and confirmed that brochure language was not definitive. According to the plan representatives, each of the 8 plans covered at least one test in addition to the two specified in the brochure. External Comments and Our Evaluation Representatives of ACS and AHIP provided comments on a draft of this report. ACS commented that the report overstates coverage, for example by stating that coverage is common or by not placing greater emphasis on plans that covered few or none of the colorectal cancer tests for screening purposes. In contrast, AHIP commented that the report overstates the lack of coverage, for example by highlighting the number of plans that covered fewer than four tests rather than the number of plans that covered at least one test. Recognizing that our findings are subject to varying interpretations, we attempted to report them neutrally and to not overly emphasize the coverage that did or did not exist. ACS and AHIP also commented on the scope of our report and limitations to our study methods, as discussed below, and provided technical comments, which we incorporated as appropriate. ACS Comments ACS suggested that we did not sufficiently address several methodological limitations in our report. In particular, ACS stated that we used small samples, did not conduct an analysis of nonrespondents, surveyed only the health plans with the most members where insurers or employers offered more than one plan, and did not independently verify the responses of the insurers and employers we contacted. We agree that our study methods are subject to limitations, which we disclosed in our draft report. We reviewed samples of health plans that would provide credible evidence of coverage levels in each market segment, recognizing that the results would not be generalizable to all health plans. 
While we believe our relatively high response rates of between 71 and 95 percent diminished the need for a detailed analysis of nonrespondents, we acknowledged the possibility of selection bias in the draft report. Similarly, although we did not examine every plan offered by each insurer and employer, we focused on those plans that covered the greatest number of enrollees to best illustrate the coverage most widely available to consumers. In terms of verifying survey responses, we did not have ready access to the documents that could have provided verification of employer and insurer responses to our questions. Insurer underwriting manuals may provide such verification, but are considered proprietary by insurers and not shared externally. Documents readily available to us, such as plan brochures, do not indicate coverage of every medical test or procedure under every possible circumstance, and thus could not be used to verify insurer or employer responses to our questions. Our draft report noted that we did not independently verify reported responses. In response to ACS’s comments that we did not sufficiently address our study limitations, we modified the final report to more prominently highlight certain limitations of our methodology. ACS further commented that we did not highlight the differences between the higher coverage rates we found based on the self-reported data provided by employers and insurers and the lower coverage rates we found based on our review of FEHBP brochures, suggesting that the differences indicate the potential for self-reported data to overstate plan coverage. As the draft report noted, we found that FEHBP brochures did not specify every medical test or procedure covered under every circumstance and thus the brochures may understate coverage actually available. This fact was confirmed by several plan and OPM officials and is consistent with our own previous reviews of health plan brochures. 
Among the 17 national plans, for which we were able to follow up with plan officials in each instance where the brochure language was not exhaustive, most covered tests not mentioned in their brochures. Moreover, through our follow-up with eight local plans we determined that each plan covered at least one test in addition to those listed in their brochures. ACS also commented that we did not assess the quality of prior studies of colorectal cancer screening coverage rates and consider the study results in our report. While our draft report acknowledged that prior studies have been conducted, we did not elaborate on them because, as ACS noted, each was subject to certain limitations. Evaluating the quality of prior studies was beyond the scope of our work. AHIP Comments AHIP commented that the draft report did not sufficiently address the low rate at which Americans actually receive colorectal cancer screening tests in spite of relatively high coverage rates among health plans, suggesting that factors other than insurance coverage are responsible for the low screening rate. The draft report’s background section noted colorectal cancer screening rates and certain factors cited by other researchers that influence these rates. However, an assessment of the factors influencing screening utilization rates, beyond the extent of health insurance coverage, was outside the scope of this report. AHIP also suggested that the report include a discussion of the factors that drive the benefit package decisions made by employers and consumers in selecting health plans, noting that such decisions are necessarily influenced by cost, individual circumstances, and other factors. We agree that many factors influence the choice of benefits consumers or employers select when choices are available, but examining these factors was beyond the scope of this study. 
AHIP further commented that we emphasized the ACS colorectal cancer screening guidelines, but not those set forth by USPSTF, which they suggested are also highly regarded. We used the ACS guidelines as a complete framework for presenting our findings because the guidelines indicate a frequency for each test. While USPSTF guidelines include the four tests specified by ACS, they indicate the frequency with which a test should be administered only for FOBT. Nevertheless, we modified the report to add a reference to the USPSTF guidelines. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested congressional committees and members and make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7118 or Randy DiRosa at (312) 220-7671 if you or your staff have any questions. Key contributors to this report are listed in appendix III. Appendix I: Scope and Methodology To examine the extent to which the four key colorectal cancer tests are covered for screening purposes by private health insurance plans, we reviewed the extent to which state laws require such coverage, and we reviewed the extent of coverage among selected small employer and individual plans in states without such laws, a sample of large employer plans, and coverage within FEHBP plans. We conducted our work from October 2003 through June 2004 according to generally accepted government auditing standards. State Laws Requiring Colorectal Cancer Screening Test Coverage To identify states that had laws that require private health insurance plans to cover colorectal cancer screening tests, we reviewed the laws in each state as of May 2004, and consulted with state officials to clarify the laws as necessary.
We did not include state regulations or policies applicable to insurance plans in our review. Small Employer and Individual Health Plans To examine coverage of colorectal cancer screening tests in small employer and individual health plans, we identified the largest health insurers in 10 of the states without existing or pending colorectal cancer screening laws by using information compiled by CDC, the National Association of Insurance Commissioners, Blue Cross and Blue Shield Association, and the National Conference of State Legislatures. This assessment was completed in November 2003. We selected five states based on their large population—Florida, Massachusetts, Michigan, New York, and Wisconsin—and randomly chose five additional states—Arizona, Arkansas, Colorado, Louisiana, and Maine. To identify the largest health insurers in these states, we contacted insurance regulators in each state and asked them to identify the two largest small employer health insurers and the two largest individual health insurers in terms of covered lives, premiums collected, or—in the absence of quantitative data—their best judgment. We contacted the insurers identified to obtain information about the extent to which their health plan with the most members covered colorectal cancer screening tests. We posed a series of questions related to insurers’ coverage of four colorectal cancer tests—FOBT, flexible sigmoidoscopy, DCBE, and colonoscopy—for screening purposes. Further, we asked insurers about their health plans’ coverage restrictions, including those related to age, frequency, family history, personal history, and plan authorization. We received responses to our questions from 18 of the 19 small employer insurers we contacted (95 percent), and 14 of the 17 individual insurers we contacted (82 percent).
Large Employer Health Plans To examine coverage of colorectal cancer screening tests in large employer health plans, we randomly selected 50 companies from the 2002 Fortune 500 list. One company was subsequently removed from the sample because it filed for bankruptcy protection after the list was published and no longer had any U.S. employees. Thus, the final sample included 49 companies. We contacted health plan benefits administrators or human resources staff in each of these companies. We made at least three attempts to obtain a response from each company in our sample, including contacting the company’s government affairs or chief executive office to request participation in certain instances. Similar to insurers offering small employer and individual health plans, participating employers answered questions related to their largest plan’s coverage of the four key colorectal cancer tests for screening purposes and restrictions related to this screening test coverage. We received responses from 35, or 71 percent, of the companies we contacted. Three plans offered by large employers reported covering one or more of the four colorectal cancer tests for screening purposes, but also required that a member have a family or personal history of the disease in order to receive coverage for the screening test. Because these requirements were inconsistent with our definition of screening test coverage, we characterized these plans as not covering the relevant tests for screening purposes. FEHBP Plans To identify coverage policies of health plans offered through FEHBP, we reviewed 2004 coverage brochures maintained on the OPM website. When a plan offered multiple benefit options, we counted each option as a separate plan. When the same plan was offered in multiple locations but with the same benefits, we counted it as one plan. Our review of 2004 FEHBP brochures identified 143 distinct benefit plans.
We identified the extent to which each of the four tests was explicitly listed as a covered benefit for screening purposes for each plan. We then discussed our interpretation of the brochure language with OPM representatives. In addition, we contacted representatives of each of the 5 national plans and 8 of the 61 local plans whose brochures did not explicitly indicate the coverage of each of the four tests. The local plans were selected judgmentally from different geographic areas of the country. We discussed with plan officials our interpretation of their brochure language, and revised our analysis based on these discussions. Limitations We relied on self-reported information from officials of the health plans offered to small employers, individuals, and large employers, and did not independently verify their responses. Further, although we achieved relatively high response rates of between 71 and 95 percent for our review of coverage in the small employer, individual, and large employer market segments, we may nonetheless have encountered selection bias. That is, insurers and large employers with more colorectal cancer screening benefits could have been more likely to participate in our survey than those with fewer colorectal cancer screening test benefits. In addition, we surveyed a small number of small employer, individual, and large employer health plans, precluding our ability to generalize the findings beyond these health plans. Nevertheless, our findings illustrate the colorectal cancer screening test benefits of approximately 4 million individuals covered under the small employer, individual, and large employer plans we reviewed, and more than 8 million individuals covered under FEHBP. Appendix II: State Laws Requiring Private Health Insurance Coverage of Colorectal Cancer Screening Tests Table 5 shows state colorectal cancer screening laws in place as of May 2004. 
As indicated, 20 states have laws that require private insurance coverage of colorectal cancer screening tests.

Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Acknowledgments
Susan Anthony, Christine DeMars, Iola D’Souza, Sari B. Shuman, and Behn M. Kelly made key contributions to this report.

Related GAO Products
Private Health Insurance: Federal and State Requirements Affecting Coverage Offered by Small Businesses. GAO-03-1133. Washington, D.C.: September 30, 2003.
Medicare: Most Beneficiaries Receive Some but Not All Recommended Preventive Services. GAO-03-958. Washington, D.C.: September 8, 2003.
Medicare: Use of Preventive Services Is Growing but Varies Widely. GAO-02-777T. Washington, D.C.: May 23, 2002.
Medicare: Beneficiary Use of Clinical Preventive Services. GAO-02-422. Washington, D.C.: April 12, 2002.
Medicare: Few Beneficiaries Use Colorectal Cancer Screening and Diagnostic Services. GAO/T-HEHS-00-68. Washington, D.C.: March 6, 2000.
Colorectal cancer is the second leading cause of cancer deaths in the United States. Its mortality can be reduced through early detection and treatment. Four key tests are used to detect the cancer--fecal occult blood test (FOBT), flexible sigmoidoscopy, double-contrast barium enema (DCBE), and colonoscopy. Private health insurance plans generally cover these tests to diagnose cancer; however, the extent to which plans cover the tests for screening purposes--where no symptoms are evident--is less clear. Congress is considering legislation that would require coverage of the tests for screening purposes among all private health insurance plans. GAO was asked to (1) identify the state laws that require private health insurance coverage of these screening tests; and (2) determine the extent to which the tests are covered among small employer, individual, large employer, and federal employee health plans. GAO summarized state laws that require coverage of the tests. GAO examined test coverage among a sample of the largest 19 small employer and 14 individual plans in 10 states without laws requiring the coverage, and among 35 large employer plans nationally. The findings cannot be generalized beyond these plans. GAO also reviewed brochures for 143 federal employee health plans. Twenty states had laws in place as of May 2004 requiring private insurance coverage of colorectal cancer tests for screening purposes. In 19 of these states, the laws generally applied to insurance sold to small employers and individuals, and required coverage of all four tests--FOBT, flexible sigmoidoscopy, DCBE, and colonoscopy. The law in 1 of the states was more limited in scope, applying to group and managed care plans and not explicitly requiring coverage of each of the four screening tests according to American Cancer Society (ACS) guidelines. Most, but not all, health plans offered by the insurers and employers GAO reviewed covered all four colorectal cancer tests for screening purposes. 
Over four-fifths of the small employer plans (16 of 19) covered all of the tests, whereas 1 plan covered only FOBT and flexible sigmoidoscopy and 2 plans covered only FOBT. Almost three-quarters of the individual plans (10 of 14) covered all of the tests, and the remaining 4 plans covered none of the tests. Approximately two-thirds of the large employer plans (24 of 35) covered all four of the tests. Among the remaining 11 plans, 5 covered only FOBT, 2 covered only flexible sigmoidoscopy, and 4 covered none of the tests. Over half of the plans offered to federal employees covered each of the four tests. Finally, among all plans that covered at least one but fewer than four tests, DCBE and colonoscopy were least likely to be covered. In commenting on a draft of this report, ACS suggested that the report overstated the extent of coverage and did not sufficiently highlight the methodological limitations of the study. In contrast, America's Health Insurance Plans (AHIP) commented that the report overstated the lack of coverage. Moreover, AHIP commented that the report did not address the low rate at which Americans actually receive colorectal cancer screening tests regardless of insurance coverage, suggesting that factors other than health insurance coverage are responsible for low screening rates. Recognizing that the findings are subject to varying interpretations, GAO attempted to report them neutrally. Although the draft report disclosed the methodological limitations of the study, in response to ACS comments, GAO more prominently highlighted certain of the limitations. Finally, whereas the draft report noted the screening utilization rates, assessing the factors responsible for them was beyond the scope of this study.
Background
DHS Processes to Screen, Care for, and Transfer UAC
Interviewing and screening. After Border Patrol, OFO, or ICE apprehends UAC, a CBP or ICE official interviews each child and collects personal information such as the child’s name, country of nationality, and age. DHS officials evaluate each child to determine if he or she meets certain criteria signifying that additional steps may have to be taken to ensure the child is safe from harm—a process known as screening. According to CBP policy, Border Patrol agents and OFO officers are to use CBP Form 93 and Form I-213, Record of Deportable/Inadmissible Alien, among other forms, to document that they collected personal information about each UAC and that each child is screened as required. In addition, TVPRA requires that, except in exceptional circumstances, UAC are transferred to the care and custody of HHS within 72 hours of determining a child is a UAC. However, TVPRA also provides special rules for UAC from Canada and Mexico who are apprehended at a land border or POE. On a case-by-case basis for UAC from Canada and Mexico, DHS may allow the child to withdraw his or her application for admission and return to his or her country of nationality or last habitual residence—referred to as repatriation for purposes of this report—without further removal proceedings if the officers screen the UAC within 48 hours of being apprehended and determine that
1. the UAC is not a victim of a severe form of trafficking in persons,
2. there is no credible evidence that the UAC is at risk of being trafficked,
3. the UAC does not have a fear of returning to his or her country owing to a credible fear of persecution, and
4. the UAC is able to make an independent decision to withdraw the application for admission to the United States and voluntarily return to his or her country of nationality or last habitual residence.
For Mexican and Canadian UAC who do not meet all four of these screening criteria, TVPRA requires DHS to follow the same process established for UAC from other countries and transfer them to HHS within 72 hours of determining they are UAC, absent exceptional circumstances. Figure 1 shows CBP’s screening process for UAC.

Care and custody. While UAC are in DHS custody, DHS is required to meet certain minimum requirements for care and custody, as stipulated by TVPRA, the Flores Agreement, and DHS regulations. DHS must meet the following requirements for care and custody, at a minimum:
- maintain custody of UAC in the least restrictive setting appropriate to the children’s age and special needs, provided that such setting is consistent with the need to protect the juveniles’ well-being and that of others, as well as with any other laws, regulations, or legal requirements;
- provide safe and sanitary facilities for UAC, as well as adequate temperature control and ventilation;
- provide UAC with access to drinking water and food as appropriate, emergency medical care, and toilets and sinks;
- generally separate UAC from unrelated adults and provide adequate specialized training for the care and custody of UAC to DHS personnel who have substantive contact with UAC; and
- transfer UAC to HHS within 72 hours of apprehension, except in exceptional circumstances.

Transfer of UAC. When ORR receives notice that DHS plans to transfer custody of UAC, ORR searches its database of shelters throughout the United States to locate one with space for a child before DHS transfers custody of UAC. When ORR confirms that space at a shelter has been reserved for an unaccompanied alien child, the office is to coordinate with Border Patrol, OFO, and ERO to transport the child to the assigned shelter. ERO provides long-distance travel for UAC within the United States via commercial airlines, charter aircraft, or bus. In some areas, CBP transports UAC to shelters that are within the local commuting area.
For UAC repatriated to Mexico or Canada, CBP is generally responsible for transporting UAC to POEs, where UAC custody is transferred to Mexican or Canadian officials.

DHS Apprehended Over 200,000 UAC during Fiscal Years 2009 through 2014
DHS apprehended more than 5 million adult and juvenile aliens during fiscal years 2009 through 2014, of which approximately 201,700 (4 percent) were UAC. Table 1 shows the number of UAC apprehensions by DHS component during fiscal years 2009 through 2014. Over 90 percent of DHS’s apprehensions of UAC during fiscal years 2009 through 2014 were made by Border Patrol agents along the southwest border of the United States with Mexico. Of the 186,233 UAC whom Border Patrol apprehended, over 75 percent were apprehended in two Border Patrol sectors—about 52 percent in Rio Grande Valley, Texas, and about 25 percent in Tucson, Arizona. OFO apprehended 10,447 UAC during fiscal years 2012 through 2014. A figure in the report maps the locations of UAC apprehensions by CBP during fiscal years 2009 through 2014.

CBP Has Developed Policies and Procedures for Screening UAC, but Has Not Consistently Implemented TVPRA Requirements
CBP Has Developed Policies and Procedures, Including a Standardized Tool, to Help Ensure Screening of UAC as Required
In March 2009, CBP issued a memorandum containing policies and procedures that Border Patrol agents and OFO officers are required to follow when screening UAC regardless of nationality. Consistent with TVPRA requirements, the memorandum states that Border Patrol and OFO are to transfer all UAC from countries other than Canada or Mexico to HHS shelters. In addition, the memorandum establishes separate procedures to screen UAC from Canada and Mexico to determine whether they should be transferred to an HHS shelter or allowed to return to these two countries.
Specifically, agents and officers are required to assess the four criteria set forth in TVPRA for each unaccompanied Canadian or Mexican child when determining whether to allow that child to voluntarily return to these countries (that is, to be repatriated). Further, CBP’s March 2009 memorandum requires that Border Patrol agents and OFO officers use CBP’s Form 93 to document that they conducted the required screening for all UAC against criteria set forth in TVPRA. For example, the Form 93 directs interviewing agents and officers to ask questions to determine whether the children have a fear of returning to their country of nationality or last habitual residence and if the UAC may be victims of trafficking. Form 93 also provides agents and officers with indicators of potential human trafficking. In addition, agents and officers are to use the Form I-213 to document, among other things, key results of the screening process, such as any claims of fear expressed by the UAC. According to our observations and interviews with Border Patrol agents and OFO officers at 13 of the 15 CBP facilities and POEs we visited, agents and officers generally interview UAC and complete these forms at computer terminals in open areas of the facilities. Although CBP agents and officers are to complete a Form 93 for all UAC, the children’s responses are only statutorily required for the purpose of determining whether to permit Canadian and Mexican UAC to voluntarily return, as TVPRA requires DHS to transfer all non-Canadian and non-Mexican UAC to an HHS shelter. Table 2 describes the purpose of these forms and their roles in the screening process. On the basis of the information Border Patrol agents and OFO officers gather during the interviews and, in particular, on the basis of the children’s responses to questions on the Form 93, agents and officers are to determine whether to allow UAC from Canada and Mexico to return to their countries of origin. 
Table 3 shows the number of Canadian and Mexican UAC who were apprehended by CBP from fiscal years 2012 through 2014 and repatriated or transferred to HHS.

CBP Transferred Most Non-Canadian and Non-Mexican UAC to HHS as Required, but OFO Repatriated Some Who Were Apprehended at Airports
Our analysis of UAC apprehension data for fiscal years 2009 through 2014 shows that CBP transferred the majority of non-Canadian and non-Mexican UAC to HHS, as required by TVPRA and CBP’s March 2009 memorandum. Specifically, CBP’s data indicate that Border Patrol transferred 99 percent of such UAC whom agents apprehended from fiscal years 2009 through 2014, and OFO transferred about 93 percent of such UAC apprehended at land POEs from fiscal years 2012 through 2014. Further, officials at 13 of the CBP facilities and land POEs we visited stated that non-Mexican UAC are always transferred to HHS, indicating that the officials were aware of the policy. In addition, for all 5 non-Mexican UAC we observed being interviewed by Border Patrol agents during our visits, we noted that agents determined that these UAC were to be transferred to HHS, as required. Although CBP transferred the majority of non-Canadian and non-Mexican UAC apprehended at land borders and land POEs to HHS, OFO did not transfer all such UAC apprehended at air POEs as required by TVPRA. Specifically, our analysis of CBP data shows that OFO repatriated about 45 percent (459 of 1,037) of such UAC apprehended at air POEs from fiscal years 2012 through 2014. OFO headquarters officials stated that they were aware that all non-Canadian and non-Mexican UAC are to be transferred to HHS, including those apprehended at air POEs, and initially stated that the officers at air POEs had entered incorrect data into OFO’s automated system, such as coding an accompanied child as an unaccompanied child.
We analyzed a nongeneralizable random sample of Form I-213s for 19 cases in which OFO apprehended non-Canadian and non-Mexican UAC at air POEs and repatriated them. In general, we found that the documentation indicated that these were not data entry errors. Rather, the documentation indicated that these UAC who were repatriated, for example, had attended or were seeking to attend school without the correct visa, or were seeking employment without the proper visa. On the basis of our analysis, OFO officials stated that officers who screen UAC at air POEs may not fully understand TVPRA requirements or recall the details of their training on TVPRA because OFO officers at these locations apprehend far fewer UAC than those at land POEs. In addition, OFO officials stated that some of the cases in the database where OFO returned non-Mexicans and non-Canadians (162 of 459) were refusals of admission based on the child’s request to be admitted into the United States under the Visa Waiver Program. According to OFO officials and CBP guidance, UAC may be allowed to return to their home countries in this circumstance, but are first screened to assess the risk of trafficking or credible fear. OFO officials stated that the officers had given thoughtful consideration to the outcomes of the UAC and generally contacted the children’s parents and worked through the appropriate consular office before repatriating them. However, OFO officials acknowledged that TVPRA requires OFO to transfer all non-Canadian and non-Mexican UAC to HHS shelters, regardless of the type of POE. These officials stated that developing face-to-face training for OFO officers at air POEs focused on TVPRA requirements would help improve those officers’ knowledge of the requirements. As of April 2015, OFO officials and attorneys from CBP’s Office of Chief Counsel stated that they had begun discussing developing such training based on our data analysis.
However, they did not provide documentation of specific actions they have taken or plan to take to address the issues we identified. Developing and implementing training for OFO officers at airports who have substantive contact with UAC could better position those officers to comply with TVPRA requirements.

CBP Has Not Consistently Implemented Its Screening Policies for Mexican UAC and Cannot Ensure It Is Meeting TVPRA Requirements
CBP has policies for screening Mexican UAC. However, CBP has not consistently implemented these policies related to assessing (1) UAC’s ability to make independent decisions, (2) UAC’s credible fear of persecution, (3) whether UAC were victims of a severe form of trafficking, and (4) any credible evidence of UAC’s risk of being trafficked upon return to their countries.

Ability to make an independent decision. It is unclear if Border Patrol agents and OFO officers are assessing whether Mexican UAC can make an independent decision, as required by TVPRA, because (1) the Form 93 does not include indicators or questions on how to implement the requirement regarding independent decisions, and (2) Border Patrol agents did not consistently document, as required, the basis for their determinations of independent decision-making ability among Mexican UAC. First, the Form 93 instructs the agent or officer interviewing UAC to determine whether they meet any of the TVPRA criteria and provide signatures attesting to this determination.
However, the form does not provide indicators or suggested questions for agents and officers to use to assess, in accordance with CBP policy, whether UAC from Canada and Mexico are able to make an independent decision to withdraw their application for admission to the United States and return to their countries of origin. CBP’s 2009 memorandum states that agents and officers should generally consider UAC who are 14 years of age or older to be able to make an independent decision about returning to their countries of nationality or last habitual residence and UAC under the age of 14 to be generally unable to make an independent decision. The memorandum also lists a number of exceptions. For example, it states that these age presumptions may be overcome based on factors such as the child’s intelligence, education level, and familiarity with the U.S. immigration process. On the basis of our interviews with Border Patrol agents and OFO officers in Arizona, Texas, and California in July, September, and October 2014, respectively, we found that not all agents and officers were aware that CBP’s 2009 memorandum states that UAC under the age of 14 are presumed generally unable to make an independent decision. Further, Border Patrol agents and OFO officers at 5 of the 15 facilities we visited stated that they repatriate UAC under the age of 14 if the Mexican consulate is able to locate a family member in Mexico, and other Border Patrol officials stated that they seek approval from the Mexican consulate before returning any Mexican UAC. However, it is not the responsibility of the Mexican government to assess whether Mexican UAC who enter the United States merit international protection outside of Mexico.
Rather, TVPRA provides DHS with the authority to make the determination as to whether a Mexican UAC can make an independent decision to withdraw his or her application for admission and return to Mexico when the UAC has met specified screening criteria, and provides no role for the Mexican government with respect to this decision. Further, our analysis of CBP apprehension data indicates that agents and officers may not have understood or followed CBP policy regarding the ability of UAC younger than 14 years old to make an independent decision. Specifically, we found that CBP repatriated about 95 percent of all Mexican UAC whom agents and officers apprehended during fiscal years 2009 through 2014, and 93 percent of Mexican UAC under the age of 14 during this time period. Further, eight of nine Border Patrol sectors along the southwest border repatriated at least 80 percent of Mexican UAC under age 14 from fiscal years 2009 through 2014. In addition, from fiscal years 2012 through 2014, OFO repatriated about 50 percent of Mexican UAC and 65 percent of Canadian UAC under age 14. According to CBP headquarters and Border Patrol officials, the Form 93 is a sufficient tool to conduct screening; in particular, these officials stated that the signatures on the Form 93 confirm that agents and officers considered whether the UAC were able to make an independent decision about whether to return to Canada or Mexico. CBP officials also stated that agents and officers use their training and experience to make independent decision determinations and are capable of making these determinations without a set of specific questions on the Form 93. However, based on our interviews and our review of CBP apprehension data, it is unclear if agents and officers assessed Canadian and Mexican UAC’s ability to make an independent decision before signing the Form 93 because there was no written documentation of this assessment other than the signatures.
In addition, other organizations have also identified limitations to CBP’s implementation of the independent decision-screening requirement. For example, a June 2014 UNHCR report identified concerns that CBP as a whole was not assessing whether Mexican UAC are able to make an independent decision to withdraw their applications for admission to the United States and return to Mexico. Specifically, UNHCR found that there are no questions or instructions built into CBP’s process for CBP officials to assess UAC’s ability to make the decision to return to Mexico. Further, UNHCR reported that, when presented with hypothetical examples of a child who is not able to make an independent decision about returning home and questioned about whether they would handle the processing of such a Mexican child differently, a number of agents responded that the children would be returned to Mexico like the other Mexican UAC. In addition, the nongovernmental organization Appleseed reported in 2011 that its interviews with Border Patrol agents and OFO officers in the field indicated that no uniform guidelines or standards existed on how to ascertain whether a child is capable of making an independent decision to return to Mexico, and that this absence of direction led to inconsistent practices across different regions of the southwest border. Further, Appleseed reported that the repatriated Mexican UAC it interviewed seemed to have little understanding of what might happen to them if they did not agree to return to Mexico. CBP officials responsible for outreach with these organizations stated that while they have had discussions with nongovernmental organizations about the Form 93 and UAC screening, there were no plans, as of April 2015, to revise the form to include additional indicators or questions related to the ability of UAC to make an independent decision.
Standards for Internal Control in the Federal Government provides that agencies should design and implement continuous built-in components of operations—in this case, indicators or questions agents and officers should ask related to independent decision making—to provide reasonable assurance of meeting agency missions, goals, and objectives. Revising the Form 93 to include indicators or questions that agents and officers should ask to better assess a child’s ability to make an independent decision would help ensure agents and officers obtain the necessary information to determine whether UAC can independently make a decision to withdraw their application prior to repatriation, in compliance with TVPRA requirements and CBP policy. Second, although CBP repatriated most Mexican UAC under the age of 14 from fiscal years 2009 through 2014, we found that Border Patrol agents did not consistently document the basis for their decisions, as required. CBP’s 2009 memorandum specifically instructs agents and officers to document the basis for all determinations regarding independent decisions on the Form I-213. Our analysis of a random sample of 180 Border Patrol case files of UAC apprehended in fiscal year 2014 showed that agents did not document the rationales for their conclusions related to UAC’s independent decision-making ability. Specifically, on the basis of our review of Forms I-213 from our sample of cases, which included children of all ages, we estimate that none of the 15,531 forms for Mexican UAC nationwide from fiscal year 2014 included documentation of the agents’ basis for their determinations regarding the UAC’s ability to make an independent decision. In particular, in one case we reviewed, the Border Patrol agent documented that the unaccompanied alien child was “emotionally distraught” and “unable to speak clearly,” but determined that the child should be allowed to return to Mexico. 
Because the agent did not document the reason for deciding that the child should be repatriated, it was unclear whether or how the agent took into account the child’s physical and mental state, which, according to the 2009 memorandum, is one factor to consider when assessing UAC’s ability to make an independent decision. In addition, two of the five Border Patrol agents we observed conducting interviews in the Rio Grande Valley sector told us that the Mexican UAC they interview and repatriate generally have not understood the options available to them that would allow them to remain in the United States while they await removal proceedings. As discussed above, a child’s familiarity with the U.S. immigration process is to be a factor that agents and officers consider when determining the ability of the UAC to make an independent decision. A Border Patrol headquarters official responsible for the UAC program stated that agents typically document the basis for the independent decision assessment only if the agent determines that the unaccompanied alien child is unable to make an independent decision because agents have typically not documented the decision in the past. Further, he stated that this course of action follows the spirit of the 2009 UAC policy memorandum. CBP officials also stated that agents and officers may not have adequately documented screening decisions because they were more focused on working to transfer custody of UAC to HHS as quickly as possible, particularly during the summer of 2014. However, the 2009 memorandum specifically states that Border Patrol agents and OFO officers are to document the basis for all determinations regarding independent decisions. Moreover, our review of Form I-213s from our sample of cases included apprehensions that occurred throughout fiscal year 2014, which accounts for time periods with high and low apprehension rates as well as sectors that did not experience a surge in UAC. 
Ensuring that Border Patrol agents document the basis for their determinations regarding independent decision making would better position CBP to ensure that such determinations are appropriate.

Credible fear of persecution. We found that Border Patrol agents and OFO officers are not making consistent decisions about whether Mexican UAC have a credible fear of persecution if they return to Mexico. According to TVPRA and CBP’s 2009 policy memorandum, CBP is to transfer to HHS all Canadian and Mexican UAC who have a fear of returning to their countries because of a credible fear of persecution. To implement this policy, agents and officers are to ask four credible fear determination questions on the Form 93, which solicit information from the UAC about why they left their home countries, if they have any fear of returning, or if they believe they would be harmed if returned. However, we found that agents’ and officers’ understanding varies as to what types of fear expressed by UAC and what types of responses to these fear questions should warrant transferring the UAC to HHS. In particular, our analysis of Border Patrol case files showed that agents made different screening decisions when presented with similar responses to the credible fear questions. For example, in one case, a child claimed fear because her grandmother had abused her, and the agent transferred the child to HHS. However, in two other cases we reviewed, sisters told the interviewing agent that their aunt had mistreated them and the agent repatriated these UAC. In another case, a child claimed that he was fearful of gangs and transnational criminal organizations in his hometown and the interviewing agent decided to transfer him to HHS. However, one child claimed fear of gangs and another claimed fear of violence in Mexico, and the interviewing agents decided to repatriate both of these UAC. We did not discuss UAC screening at two facilities we visited in Arizona.
At the third remaining facility, CBP officials stated that they had never determined that a Mexican unaccompanied alien child had credible fear. Some agents and officers interviewed by DHS officials stated that they needed to ask more questions than those on the Form 93 to know if fear claimed by UAC warranted protection (that is, being transferred to HHS). In addition, according to a June 2014 UNHCR report, more than half of the 96 CBP officials interviewed by the UNHCR officials in 2012 and 2013 stated that it was not their job to assess a child’s fear of returning to his or her country; instead, UNHCR reported that CBP officials said that they process the case so that the child’s claim can be heard by an appropriate adjudicator. However, some CBP officials told UNHCR officials that their definition of fear is that it must be fear of persecution or harm inflicted directly by the government and that UAC who fear gangs or cartels do not have fear of government persecution and are therefore returned home. Providing agents and officers with guidance on what constitutes a credible fear of persecution would better help them assess and protect Mexican UAC consistent with TVPRA requirements and CBP policy, as well as consistently across the agency.

Victim of trafficking. CBP’s implementation of the TVPRA requirement related to assessing whether UAC are at risk of a severe form of trafficking in persons also has limitations. First, the Form 93 and the 2009 memorandum do not provide instructions on how agents and officers are to apply the questions and indicators on the Form 93 to assess a child’s trafficking risk. Second, Border Patrol agents did not document the rationales for some of their decisions related to whether Mexican UAC were victims of a severe form of trafficking in persons. Regarding CBP’s trafficking guidance, we found inconsistencies in the information Border Patrol agents obtained from UAC for the same trafficking questions.
The Form 93 provides examples of trafficking indicators and 12 questions to assist agents and officers in their assessments of whether UAC from Canada and Mexico have been trafficked. For example, one trafficking question on the Form 93 asks Border Patrol agents and OFO officers to determine if the UAC are engaged in labor. We analyzed a random sample of 180 Form 93s for Mexican UAC apprehended by Border Patrol in fiscal year 2014. On the basis of our analysis, we estimate that Border Patrol agents found 3 percent of these UAC were engaged in labor—a trafficking indicator—because they were acting as smugglers. However, we estimate that in another 4 percent of the apprehensions during this time period, the Border Patrol agents identified the UAC as smugglers but recorded on the Form 93 that they were not engaged in labor. According to Border Patrol agents in south Texas, agents have previously identified smugglers who were trafficking victims, and documenting that UAC are smugglers can be an important step in assessing if CBP should allow the children to return to Mexico. According to our review of the case files, Border Patrol agents did not always document whether Mexican UAC were smugglers. As a result, we do not know whether agents’ inconsistent classification of UAC smugglers as engaged in labor was a pervasive issue. However, given the potential consequences of child trafficking, it is important for agents to consistently apply the Form 93 trafficking questions for all UAC. In addition, we found instances where interviewers did not apply the trafficking indicator from the Form 93 correctly. For example, according to the form, one indicator of trafficking is when a child is isolated from others, and the agent or officer is prompted to ask if the child has been able to freely contact friends or family.
However, we estimate that, in 4 percent of the cases, Border Patrol agents incorrectly documented whether the UAC were able to contact someone while in custody at the Border Patrol station after they had been apprehended (instead of during their journey). According to a Border Patrol headquarters official in charge of the UAC program, to determine if the child is a victim of trafficking, the interviewing agent would need information on whether the child was allowed to contact friends or family before, not after, apprehension. Other groups have also reported that CBP has not provided Border Patrol agents and OFO officers with sufficient guidance on how to implement the Form 93 screening tool with regard to trafficking. For example, Appleseed reported that the form lacks depth or detail sufficient to draw out the information for an agent to evaluate whether a child has been trafficked. In addition, UNHCR reported that the form does not provide the necessary guidance for the interviewing agents and officers in a clear, accurate, and user-friendly manner. For example, UNHCR found that one agent, who was observed conducting screening, did not understand that some trafficking questions applied to the period of time before an unaccompanied alien child's apprehension. UNHCR reported that, with some guidance on the substance of what the agent is screening for, the agent could more effectively screen UAC for protection needs. According to CBP officials, agents and officers are not authorized to make trafficking determinations, as agents refer UAC who may be victims of trafficking to either ICE or U.S. Citizenship and Immigration Services. These officials also stated that the trafficking questions on the Form 93 are intended as good questions to prompt UAC responses, but that agents and officers ultimately use their training and experience to make trafficking decisions.
Although agents and officers must use their discretion, prior to repatriating a UAC from Mexico or Canada, CBP policy and TVPRA require them to assess whether the UAC has been a victim of a severe form of trafficking in persons or is at risk of being trafficked upon return, and we found inconsistencies in how agents applied the trafficking questions designed to conduct that assessment. Standards for Internal Control in the Federal Government provides that effective communications should occur in a broad sense, with information flowing down, across, and up the organization to help ensure the agency meets its objectives. Further, these standards state that management is responsible for developing detailed practices to fit the agency's operations and ensure that the practices are built into, and an integral part of, operations. Developing and implementing guidance on how Border Patrol agents and OFO officers are to implement the TVPRA requirement to transfer to HHS all Canadian and Mexican UAC who are victims of a severe form of trafficking in persons—such as the purpose of each trafficking question on the Form 93 and how to interpret UAC's responses to the questions—would better ensure agents and officers can make consistent and informed screening decisions. Second, Border Patrol agents did not document the rationale for some of their decisions related to whether Mexican UAC were victims of a severe form of trafficking in persons, as required. According to the Form 93, if a trafficking indicator is present during the screening process, the interviewing Border Patrol agent or OFO officer is to ask age-appropriate questions that will help identify the key elements of a trafficking scenario. Moreover, agents and officers are to use the Form I-213 to document the circumstances involving any potential trafficking.
On the basis of the sample of Form 93s that we reviewed, we project that 7,832 of the about 15,500 (50 percent) Mexican UAC whom Border Patrol apprehended in fiscal year 2014 had at least one trafficking indicator present. However, agents determined that these 7,832 UAC were not trafficking victims but did not document any follow-up questions the agents may have asked or UAC responses to those questions to alleviate any trafficking concerns. For example, we estimate that in 45 percent of cases, the Border Patrol agent indicated that the child did not have possession of his or her identification documents but determined the child was not a trafficking victim. In the cases we reviewed, the agent did not document any responses to follow-up questions he or she may have asked, even though the Form 93 instructs the interviewing agent or officer to determine who has control of the documents. According to Border Patrol agents in one sector, not having an identification document should not be an indicator of someone being trafficked. However, the Form 93 states that the lack of possession or control of documents is a trafficking indicator and provides a suggested question specific to that situation. In other cases, Border Patrol agents determined UAC did not appear to be victims or potential victims of trafficking even though agents documented that there were trafficking indicators for these children, such as being isolated from others, engaging in labor, having restricted movements, being threatened with harm, being recruited for one purpose but forced to engage in some other job, or providing coached responses. As a result, it is unclear whether Border Patrol agents correctly applied CBP’s policies. CBP officials stated that they may not have adequately documented screening decisions during the surge of UAC in the summer of 2014 because they were more focused on working to transfer custody of UAC to HHS as quickly as possible. 
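Projections like those above, from a random sample of Form 93s to the full population of apprehensions, follow standard survey estimation. The sketch below is a minimal Python illustration of that arithmetic; the function name and the sample counts used here are hypothetical placeholders, not GAO's actual estimation methodology or data.

```python
import math

def estimate_proportion(successes: int, sample_size: int, population: int, z: float = 1.96):
    """Estimate a population proportion from a simple random sample,
    with a finite-population-corrected 95 percent margin of error."""
    p = successes / sample_size
    fpc = math.sqrt((population - sample_size) / (population - 1))  # finite population correction
    moe = z * math.sqrt(p * (1 - p) / sample_size) * fpc
    return p, moe

# Hypothetical: 6 of 180 sampled forms show an indicator, out of roughly
# 15,500 apprehensions in the population.
p, moe = estimate_proportion(successes=6, sample_size=180, population=15500)
print(f"estimated share: {p:.1%} +/- {moe:.1%}")
```

Because the sample is small relative to the population, the finite population correction barely changes the margin of error here; it matters more when the sample covers a large fraction of the population.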
Also, Border Patrol officials stated that it is not necessary to document responses to follow-up questions because (1) agents have latitude to use their judgment based on training and experience to make trafficking assessments based on the totality of responses to the trafficking questions; (2) agents and supervisors discuss the circumstances of each case, as appropriate; and (3) supervisors review and approve agents' screening decisions. While an agent's judgment is an important aspect of the screening process, it is unclear if agents are following TVPRA and CBP policy requirements to only repatriate Mexican UAC if CBP determines they have not been victims of a severe form of trafficking in persons. Moreover, on the basis of our review of Form 93s from our sample of cases, we estimate that Border Patrol agents did not complete the trafficking section of the Form 93 for 7 percent (1,077) of Mexican UAC they repatriated in fiscal year 2014. Standards for Internal Control in the Federal Government provides that internal controls and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. Given the rate of Mexican UAC with trafficking indicators whom Border Patrol repatriated, asking follow-up questions of UAC when trafficking indicators are present and documenting UAC responses would better position CBP to ensure that UAC who may be victims of a severe form of trafficking in persons are not repatriated. Further, recording the rationale for agents' decisions to repatriate UAC who demonstrate a trafficking indicator would allow CBP to better assess on an agency-wide basis whether agents' decisions were justified and consistent with TVPRA requirements and CBP policy. Credible evidence of risk of being trafficked upon return.
The Form 93 states that the trafficking questions will assist agents and officers in determining if the UAC may be victims of trafficking, but the form does not include indicators or questions for assessing whether there is credible evidence that the UAC are at risk of being trafficked upon return to their countries of origin—a screening requirement under TVPRA and CBP policy. CBP officials stated that the guidelines established in Form 93 outline indicators and suggested questions that could help show that UAC are victims of trafficking whether they are going to be repatriated or were subject to it prior to apprehension. However, none of the 12 trafficking questions prompts the interviewing agents and officers to ask the UAC about, for example, conditions in their homes or countries of origin that could help them assess whether the child would be a potential victim if repatriated. In addition, agents and officers at five facilities we visited stated that they focus on indicators of current trafficking victims—such as signs of abuse or fear of an adult in their group—rather than indicators of potential trafficking should the child return to his or her country of origin. Further, during our observations of five UAC interviews, Border Patrol agents did not ask any questions pertaining to the risk of a child being trafficked upon return to his or her country of origin. CBP officials also stated that before CBP would allow a Mexican unaccompanied alien child to return, Mexican consulate officials interview each child and question him or her regarding possible abuse or trafficking. If any of these factors are present, consulate officials notify CBP. However, Mexican consulate officials in two of the three states we visited stated that they do not influence CBP’s TVPRA screening decisions. 
Moreover, as previously stated, it is not the responsibility of the Mexican government to assess whether Mexican UAC who enter the United States merit international protection outside of Mexico. Other organizations have reported on similar limitations in CBP's Form 93. For example, a June 2014 UNHCR report found that the form did not provide instruction on whether UAC are potential trafficking victims. UNHCR reported that indicators for when a child may be at risk of being trafficked if returned to his or her country may be different from indicators of current trafficking victims, and that CBP agents and officers may need to ask different questions. UNHCR officials reported a lack of awareness among CBP officials about how to identify UAC who might be at risk of trafficking if returned to their countries of nationality. Similarly, Appleseed reported in 2011 that the Form 93 trafficking questions do not probe what Mexican UAC can expect to encounter if they are returned to Mexico. Including indicators and questions for assessing this risk would better ensure agents and officers obtain the necessary information to address all TVPRA screening criteria and protect UAC.

CBP Does Not Have Assurance That Agents and Officers Have Completed Required UAC Training

CBP does not know the extent to which Border Patrol agents and OFO officers, who have substantive contact with UAC, have completed required UAC training. TVPRA requires DHS to provide specialized training to all personnel who have substantive contact with UAC. According to CBP officials, in October 2012, CBP combined two virtual learning courses—one on TVPRA requirements and another on the care of UAC—and made the course mandatory. Since that time, Border Patrol agents and OFO officers have been required to complete this training annually. The course aims to familiarize agents and officers with the provisions of TVPRA, the screening of UAC, and other issues pertaining to the processing and care of UAC.
However, our analysis indicates that CBP does not have reliable data on whether agents and officers completed the training. In particular, CBP officials stated that CBP’s training system tracks the number of agents and officers who have completed the course but it does not reflect the total number of those who should have taken the course. Furthermore, we interviewed CBP training, Border Patrol, and OFO officials regarding how agents and officers document completion of UAC training, and reviewed additional information these officials provided about these data. These officials stated that CBP’s training system does not contain accurate data on which agents and officers are required to take the mandated training or those who did not take it. Further, tracking completion of the UAC training course is a long-standing issue, as the DHS Office of Inspector General (OIG) reported in September 2010 that CBP has not developed a tracking system to verify that personnel responsible for processing and detaining UAC have completed the mandatory annual training. CBP officials stated that it is the responsibility of local supervisors to ensure that all of their agents and officers who have substantive contact with UAC complete required training. According to CBP training officials, some agents and officers may be exempt from the training for a particular year, for example, if they are on a detail assignment overseas or on extended administrative leave. Also, Border Patrol and OFO headquarters officials stated that some supervisors are better than others at monitoring their agents’ and officers’ training requirements and may prioritize operational assignments over required training. For example, Border Patrol officials reported that the increase in apprehensions in fiscal years 2013 and 2014 could have led to below-optimal training completion percentages. 
Similarly, OFO officials stated that some officers were unable to complete the UAC training because operational needs took precedence. However, TVPRA requires all federal personnel who have substantive contact with UAC to receive specialized training, and completion of the annual UAC virtual learning course is a CBP requirement. Further, a March 2008 Border Patrol memorandum states that employees should be given adequate time to complete the course—about 1 hour in length—during their normal tour of duty. In addition, according to best practices in assessing training efforts, agencies should develop tracking and other control mechanisms to ensure that all employees receive appropriate training. Determining which agents and officers are required to complete the annual UAC training, and ensuring that they have done so, as required, would help CBP to meet training requirements under TVPRA and CBP policies and guidance.

CBP and ICE Have Developed Policies That Are Consistent with UAC Care Requirements, but Could Improve Data Collection, Interagency Coordination, and Repatriation Efforts

CBP and ICE Policies Generally Reflect Care Requirements for UAC, and CBP Follows These Policies in Practice

CBP and ICE have policies in place to implement the Flores Agreement and care for UAC, and CBP generally follows these policies. Under the Flores Agreement, DHS is generally required to, among other things, ensure the safety and well-being of the UAC; separate UAC from unrelated adults; adequately supervise UAC to protect them from others; and provide access to food, water, toilets, and sinks. CBP and its components, Border Patrol and OFO, have several policy documents that provide guidance for how agents and officers are to implement the Flores Agreement requirements to care for UAC. These documents are generally consistent with and address Flores Agreement requirements to care for UAC, and in some cases include more rigorous care requirements.
For example, the Flores Agreement requires adequate supervision of UAC, but Border Patrol policy requires direct and constant supervision of UAC. ICE also has several policies that provide guidance on UAC care; these policies address all required elements of care, and in some cases include more rigorous care requirements. At the 15 CBP facilities (11 Border Patrol and 4 OFO) we visited in July, September, and October 2014, we interviewed officials responsible for caring for UAC and observed the extent to which agents and officers at each facility were addressing the following eight required elements of care in Border Patrol and OFO policies: (1) place UAC in the least restrictive setting appropriate for their age; (2) separate UAC from unrelated adults and by gender; (3) directly supervise UAC; (4) keep facilities adequately ventilated and at appropriate temperatures; (5) provide access to working and sanitary toilets and sinks; (6) provide drinking water; (7) provide food regardless of time in custody; and (8) provide access to medical treatment. In addition, at the 11 Border Patrol facilities we visited, we evaluated the implementation of the following UAC care elements specific only to Border Patrol policy: provide clean blankets and mattresses, conduct physical checks of UAC hold rooms at regular intervals, and provide access to telephones. We did not include ICE facilities in this analysis because UAC are rarely held in ICE facilities for more than a short period of time as they await transportation to an ORR shelter, according to ICE officials. According to our observations and interviews, officials at the 15 CBP facilities were generally providing care consistent with seven of the eight elements of care at the time of our visits. Similarly, according to our observations and interviews at the 11 Border Patrol facilities, agents were generally providing care consistent with the three Border Patrol-specific requirements at the time of our visit. Figure 5 shows a typical Border Patrol holding room in a station in south Texas.
The photograph on the left shows a toilet and sink in the rear of the holding cell behind a half wall, and the photograph on the right shows a 5-gallon water jug that provided UAC with access to drinking water. In July 2014, we visited CBP facilities in Arizona, at which time Border Patrol, OFO, ORR, and FEMA officials, among others, were caring for more than 900 UAC at the Nogales Placement Center (NPC). According to CBP officials, DHS had transferred most of these children to Arizona from south Texas—where Border Patrol agents and OFO officers had apprehended them—because CBP holding facilities in south Texas were overflowing with UAC and HHS shelters did not have the capacity at the time to accept all of the UAC whom agents and officers were apprehending each week. Figure 6 shows part of the NPC, including a number of portable toilets. At the NPC, we observed Border Patrol agents and OFO officers helping U.S. government public health volunteers provide basic care to UAC, including helping children as young as 2 and 3 years old eat and bathe. At the time of our visit, Border Patrol officials in Arizona were supplementing food supplied by a FEMA contractor with food purchased separately using Border Patrol funding. We also observed toiletries, toys, and other supplies that Border Patrol agents and OFO officers told us had been purchased with personal funds. In addition, we observed Border Patrol agents and OFO officers playing games with UAC. For the remaining required element of care for Border Patrol and OFO—the requirement to separate UAC from unrelated adults and by gender—officials told us and we observed that mitigating circumstances made it challenging to meet the policy requirements at all times. Specifically, officials at 7 of 15 CBP facilities did not always meet the requirement to separate UAC from unrelated adults and by gender at the time of our visits.
At these 7 facilities, we observed or were told that female and young male UAC would at times be placed in hold rooms with families. Border Patrol and OFO personnel told us that because of limited space in most of these facilities, they made decisions on how to segregate adults and children based on risk level. For example, some agents and officers stated that because of facility layouts, open areas are not a safe solution for lack of space, and determined that female and young male UAC were safer in hold rooms with families than in open areas with no physical barriers between UAC and adult males. The DHS OIG has reported that CBP has generally met care requirements, but identified some isolated problems. Specifically, in 2010, the DHS OIG reported that CBP’s care of UAC met the requirements of the Flores Agreement. Inspections by OIG from July, August, and October 2014 also found that most facilities that OIG officials visited were adhering to care requirements, but there were isolated problems with the quantity of food provided, sanitation, and the temperature of facilities found during the July 2014 inspections. The subsequent inspections in August and October 2014 found that CBP had addressed these isolated problems. However, other organizations have reported on inconsistent compliance with the requirements for UAC care. For example, DHS’s Office for Civil Rights and Civil Liberties conducted an investigation of selected Border Patrol and OFO facilities in south Texas during July 2014—during the height of the increase in UAC apprehensions—and found issues pertaining to sanitation and holding room conditions. Specifically, in October 2014, DHS Office for Civil Rights and Civil Liberties officials told us that, according to preliminary findings, some facilities had unsanitary conditions in holding room restroom areas, and some of the holding rooms lacked bathroom tissue, drinking cups, and readily available food. 
In addition, as of April 2015, a complaint was filed in court concerning CBP's adherence to specific provisions of the Flores Agreement related to care; in particular, plaintiffs have alleged that CBP has not met the requirement to hold UAC in facilities that are safe and sanitary and that are consistent with the concern for the particular vulnerability of minors. Motion to Enforce Settlement of Class Action, Flores v. Reno, No. 85-4544 (C.D. Cal. Feb. 2, 2015).

DHS Does Not Collect Complete or Reliable Data on Care of UAC or Their Time in the Department's Custody

Within DHS, CBP and ICE have not collected complete or reliable data documenting (1) the care provided to UAC while in DHS custody and (2) the length of time UAC are in DHS custody. Documentation of care provided to UAC. CBP does not collect complete or reliable data on actions Border Patrol agents and OFO officers take to care for UAC; therefore, DHS does not have reasonable assurance that it is meeting its care requirements for UAC. Since 2008, Border Patrol and OFO have required agents and officers to document certain care actions for UAC, such as physical checks and meals. However, in 2010, DHS OIG found that Border Patrol and OFO were not documenting care as required and recommended that CBP develop uniform documentation requirements for UAC care. Border Patrol and OFO have taken some steps to develop and implement automated data systems that can record and maintain data on UAC care. In particular, in 2012 and 2013, Border Patrol issued policies requiring all sectors and stations to fully capture information on UAC care in its automated system, which allows agents to record care provided to UAC across 20 different actions that agents might take. In fiscal year 2014, Border Patrol fully implemented the automated system, which also allows agents to document the physical movements of UAC, including when UAC are transferred to another station.
We analyzed available record-level data on care provided to 55,844 UAC apprehended by Border Patrol agents nationwide from January through September 2014, and found that data were not complete or reliable because agents did not routinely or accurately record actions in the system, as required by Border Patrol policy. Our analysis of data showed inconsistent data entry across sectors. For example, data showed the following: Agents documented 14 of the 20 care actions for less than half of the 55,844 UAC (the remaining 6 actions were documented for more than 50 percent of the UAC). Agents at 31 of 92 (34 percent) Border Patrol facilities nationwide did not document meals provided for at least half of the UAC in custody. Agents at one station along the U.S. southwest border did not document meals provided for 9,201 of 13,574 UAC (68 percent) in custody; of the 9,201 UAC that had no meal documented, 72 percent were in custody at that station for 12 hours or more. Agents at 50 of 92 facilities (54 percent) did not document physical checks for at least half of the UAC in custody; those at 19 of 92 facilities did not document physical checks for any UAC in custody. Agents at one station along the southwest U.S. border did not document physical checks for 4,694 of 5,229 (90 percent) UAC in custody; for the 4,694 UAC for whom agents did not document a physical check, 3,663 (78 percent) were in custody at that station for 12 hours or more. There were 94 total facilities nationwide that had custody of at least 1 unaccompanied alien child from January through September 2014; however, 2 of the facilities are not included in this set of analyses because the records from those facilities were not complete. Our analysis also identified records in which the same care action was entered for a child multiple times within a short period, indicating that the multiple entries were errors. Border Patrol officials stated that agents may not have adequately recorded care actions because they were more focused on working to transfer custody of UAC to HHS as quickly as possible.
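A heuristic of the kind described (flagging the same care action recorded for the same child within a short window as a likely duplicate) might be sketched as follows. The record layout, field names, and sample records are illustrative assumptions, not the actual structure of Border Patrol's automated system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical record layout: (child_id, care_action, timestamp)
records = [
    ("A1", "meal", datetime(2014, 7, 1, 12, 0)),
    ("A1", "meal", datetime(2014, 7, 1, 12, 4)),   # 4 minutes later: likely duplicate
    ("A1", "meal", datetime(2014, 7, 1, 18, 30)),  # separate meal, not flagged
    ("B2", "welfare_check", datetime(2014, 7, 1, 12, 2)),
]

def flag_likely_duplicates(records, window=timedelta(minutes=10)):
    """Flag repeated entries of the same care action for the same child
    that occur within `window` of the previous entry."""
    by_key = defaultdict(list)
    for child, action, ts in records:
        by_key[(child, action)].append(ts)
    flagged = []
    for (child, action), times in by_key.items():
        times.sort()
        for prev, cur in zip(times, times[1:]):
            if cur - prev <= window:
                flagged.append((child, action, cur))
    return flagged

dupes = flag_likely_duplicates(records)
# dupes contains only the 12:04 meal entry for child A1
```

A check like this only surfaces candidates; as the officials' explanations suggest, distinguishing user error from system error (or a genuine repeated action) would still require follow-up review.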
Border Patrol officials in headquarters and California also told us that the multiple entries are likely a result of user error caused by lack of training or accidental entry, and officials in California and Texas told us that system error, such as the system freezing or not updating quickly, may also play a role. For example, data showed the following: Sixty-two percent (34,703) of the records we reviewed contained at least 1 likely error, and for each of those case files, the typical case had 12 likely errors. Seventy of 94 facilities in the data had at least one care action recorded multiple times for a single child within a 10-minute period. At 29 of these 70 facilities, at least 25 percent of the actions recorded were likely errors, and at 10 of these 70 facilities, at least half of the actions recorded were likely errors. For example, 56 percent of all care actions that agents in the Rio Grande Valley sector documented consisted of the same action entered more than once within a 10-minute period. In particular, 49 percent of these likely errors were recorded within 2 minutes or less of one another. In 1 case in the Rio Grande Valley sector, a meal for the same child was recorded 11 times within 11 minutes. In fiscal year 2012, OFO implemented an automated database that allows officers to record care actions such as meals and medical care. However, unlike Border Patrol, as of April 2015, OFO did not yet have a policy requiring officers to use the automated database to record care provided to UAC, and OFO headquarters officials stated that officers at most POEs do not use the automated database. For example, at a POE we visited in south Texas, OFO officers told us that they document most custodial actions in the narrative of Form I-213 as opposed to recording care actions in the database. In contrast, at least one large land POE along the southwest U.S.
border has been using the database to document care provided to UAC; however, OFO headquarters and port officials were unable to view data from the database for this POE until September 2014, when OFO upgraded its database to provide this capability. OFO officials said they have discussed developing a data review process and making use of the automated database mandatory, but officials did not have an expected implementation date or documentation of any plans as of February 2015. Standards for Internal Control in the Federal Government states that agencies should document transactions and events as they occur and in an accessible manner, and that management should review documentation to assess performance. Because agents and officers do not routinely document their actions taken to care for UAC, Border Patrol and OFO have limited data available to ensure they are meeting the requirements for care of UAC as set forth in TVPRA, the Flores Agreement, and agency policy. Furthermore, data that Border Patrol and OFO have collected since fiscal year 2014 are not reliable or available. Requiring OFO officers to record UAC care actions in an automated manner and ensuring that Border Patrol agents record UAC care actions in their automated system, as required, would help CBP to assess and better ensure compliance with all care requirements. Documentation of UAC time in DHS custody. Border Patrol generally collects complete and reliable information on the dates and times during which UAC come into and leave its custody, but OFO and ICE do not collect such information. Therefore, DHS cannot accurately determine how long UAC have been in OFO and ICE custody, and if the length of time in custody before transfer to HHS is within the 72-hour limit established by TVPRA.
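The 72-hour check described above is straightforward once both timestamps exist; a minimal sketch follows, assuming hypothetical book-in and book-out fields. A missing book-out time, as with OFO's unused "time out" field, leaves the duration undeterminable.

```python
from datetime import datetime, timedelta

TVPRA_LIMIT = timedelta(hours=72)  # transfer to HHS generally required within 72 hours

def over_72_hours(book_in, book_out):
    """Return True if recorded custody time exceeds 72 hours, False if not,
    and None when the book-out time is missing (duration cannot be determined)."""
    if book_out is None:
        return None
    return (book_out - book_in) > TVPRA_LIMIT

# 73 hours in custody: exceeds the limit
print(over_72_hours(datetime(2014, 7, 1, 8, 0), datetime(2014, 7, 4, 9, 0)))  # True
# No book-out recorded: cannot be evaluated
print(over_72_hours(datetime(2014, 7, 1, 8, 0), None))  # None
```

Returning a distinct "unknown" value for missing book-out times, rather than silently treating the record as compliant, mirrors the report's point that incomplete data prevents DHS from demonstrating compliance at all.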
Border Patrol’s policy instructs agents to record the times that UAC enter custody—or are “booked in”—and leave their custody—or are “booked out.” Border Patrol’s system provides automated date and time stamps for each action that agents enter into the system when UAC are in their custody. Our analysis of fiscal year 2014 data shows that Border Patrol agents used the “transfer” field in over 99 percent of cases to document when UAC left Border Patrol custody. Using this system, agents can also differentiate the type of transfer, including UAC transfers to another Border Patrol facility or a POE, ICE, or HHS shelter. As a result, Border Patrol can produce regular reports to identify the number of UAC who have been in DHS custody greater than 72 hours. According to CBP officials, this information is disseminated to FEMA, DHS senior leadership, and additional stakeholders. OFO’s data system automatically generates a book-in date and time when officers enter a UAC apprehension in the system. In July 2014, OFO officials told us that their system did not have a field to record when UAC left OFO custody. OFO has a “time out” field in its system, but as of April 2015, officers are not required to use it. OFO officials told us they have discussed requiring a book-out date and time; however, they do not have time frames or documentation of any plans for implementation of the updates. An ICE user manual describes how to book in and book out subjects, including UAC, and ICE officials stated that they cover the proper use of these fields during monthly conference calls and annual training. However, ICE does not have a policy that requires officers to record book-in or book-out dates and times for UAC in its custody. Further, ICE headquarters officials told us officers often enter book-out dates and times into the system a day or more after UAC have been transferred to HHS because officers may wait to enter the data until after they travel back to their office. 
Therefore, ICE officials stated that this delayed data entry affects the precision of the book-out times recorded in the system and makes them unreliable. Further, officials in one ICE field office told us that they do not use the book-out field when a UAC leaves ICE custody. In addition, ICE officials told us that they use book-in and book-out records to count time in ICE custody, but they recognize that the limitations in the data make these calculations imprecise. The Flores Agreement requires information on all UAC who were placed in removal proceedings and remained in DHS custody more than 72 hours in the prior 6-month period, among other things, to be submitted to plaintiffs' counsel twice yearly. From fiscal years 2009 through 2013, the reports compiled by ICE and submitted to plaintiffs' counsel by DOJ showed that a total of 1,280 UAC were in DHS custody for more than 72 hours. However, our analysis of the reports revealed data quality issues. For instance, the date of DHS's transfer of the UAC to ORR is missing for 13 percent of the 1,280 cases, so the time in DHS custody for these children was not fully documented in the reports. Further, our analysis identified that the apprehension date for 19 percent of UAC apprehended by Border Patrol was recorded incorrectly by ICE in these reports. ICE ERO officials told us that because ICE's data system does not provide reliable data on time in custody, the reports are compiled manually using weekly reports from ICE field offices—a method that ICE officials told us is also prone to data entry errors. Standards for Internal Control in the Federal Government states that agencies should document transactions and events as they occur and in an accessible manner, and that management should review documentation to assess performance.
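Data-quality problems of the kind found in the twice-yearly reports (missing transfer dates, apprehension dates recorded after the transfer date) can be screened for automatically. The sketch below uses a hypothetical row layout and made-up dates purely for illustration.

```python
from datetime import date

# Hypothetical report rows: (apprehension_date, transfer_to_orr_date); None = missing
rows = [
    (date(2013, 3, 1), date(2013, 3, 5)),
    (date(2013, 4, 10), None),               # transfer date missing
    (date(2013, 6, 9), date(2013, 6, 2)),    # apprehension after transfer: inconsistent
    (date(2013, 5, 2), date(2013, 5, 4)),
]

missing = sum(1 for apprehended, transferred in rows if transferred is None)
inconsistent = sum(1 for apprehended, transferred in rows
                   if transferred is not None and apprehended > transferred)
print(f"missing transfer date: {missing / len(rows):.0%}, "
      f"inconsistent dates: {inconsistent / len(rows):.0%}")
```

Running such checks before reports are submitted would flag the kinds of gaps GAO found after the fact, though it would not fix the underlying manual compilation process.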
Requiring officers to accurately record book-in and book-out times would help ensure that OFO and ICE have the data necessary to determine UAC’s time in custody, and would better equip DHS to provide accurate data in reports to DOJ and ensure that it is complying with requirements to limit UAC’s time in custody to 72 hours, except in exceptional circumstances.

Interagency Process to Refer and Transfer UAC from DHS to HHS Shelters Is Inefficient and Does Not Have Clearly Defined Roles and Responsibilities

To begin the shelter placement process, Border Patrol or OFO refers a UAC via e-mail either to ICE, which sends the referral on to ORR from certain locations, or directly to ORR officials in headquarters who are responsible for identifying available shelter beds. ORR officials check their data system to find available beds. Once ORR has identified and assigned a shelter bed for a child, an ORR official sends an e-mail to CBP or ICE, often with a list of several children who have been placed in shelters. After CBP or ICE officials examine the e-mails to identify which UAC are in their respective custody, they coordinate to transport the child to the shelter. Figure 7 depicts the referral and placement process as of April 2015, and identifies points in the process where DHS and HHS officials told us that potential errors or miscommunications have, at times, affected the efficiency of the referral and placement process.

The UAC referral and placement process relies on e-mails and manual data entry. ORR officials in headquarters, as well as Border Patrol, OFO, and ICE officials in Arizona, Texas, and California, told us that the referral and placement process is inefficient and time and resource intensive. Each agency uses its own data system to manage and track UAC, and because the data systems do not automatically communicate information with one another, the agencies rely on e-mails and duplicative manual data entry when coordinating the placement of UAC in shelters.
In some cases, ORR sent e-mails concerning the shelter placements for multiple UAC to an entire Border Patrol sector, and the POEs within it, making the e-mails time-consuming to search. For example, in September 2014, we observed a team of Border Patrol agents in Texas who had been reassigned from other duties to search through placement e-mails for all UAC in the sector to determine if any UAC in their custody had been placed in a shelter. HHS headquarters officials stated that ORR sends shelter placement information to the recipients indicated by DHS. Table 4 shows that the fiscal year 2014 increase in apprehensions resulted in higher numbers of UAC transferred to HHS. The reliance on e-mail and associated manual data entry also makes the referral and placement process vulnerable to error and possible delay in the transfer of UAC from DHS to HHS. We analyzed e-mail exchanges between ORR and Border Patrol in the Rio Grande Valley sector to place 593 UAC in shelters during a 3-day period in July 2014, during the influx of UAC, and 135 UAC during a 3-day period in November 2014, after the influx had ended. We identified miscommunications between agencies, as well as errors during both time periods, including UAC who had to be redesignated to different shelters after initial placements and UAC who were assigned to multiple shelters simultaneously. For example, in the e-mails we examined, we found that ORR staff recorded one UAC’s personal information incorrectly, which resulted in a Texas shelter holding an empty bed for 14 days while the child had already been placed in a shelter in Pennsylvania. The incorrect entry of this child’s personal information also meant that ORR and DHS had to exchange multiple e-mails to resolve the discrepancy between their records.

DHS components use different methods to refer UAC to HHS.
DHS and ORR have identified the lack of compatible data systems as a challenge in the UAC referral and shelter placement process and, in fiscal year 2014, began exploring ways to automate communications to address inefficiencies. In summer 2014, ORR began allowing CBP and ICE to enter information directly into ORR’s UAC data system when referring UAC for placement in shelters. As of March 2015, ICE was using ORR’s system, while Border Patrol was developing an alternative approach, and OFO was using the process described in figure 5. According to ORR officials, as well as ICE officials in the Rio Grande Valley and San Diego, ICE inputs referral information directly into ORR’s system. However, ICE officials stated that using ORR’s system directly has increased data entry requirements without reducing reliance on e-mail and phone communication. ICE officials also told us that, despite the increased data entry requirements, they are continuing to use ORR’s system. Border Patrol and ORR officials stated that efforts are under way to automate most of the steps in the referral process between these agencies. However, as of May 2015, Border Patrol is not entering referral information directly into ORR’s data system because Border Patrol officials said they do not want agents to have to enter the same data into two separate systems (as ICE is doing) or train all of the agents on how to operate ORR’s system. Border Patrol and ORR have been unable to move forward with a fully automated referral and placement process because DHS has not accredited the security of ORR’s data system. In March 2015, CBP officials stated that DHS is waiting to begin its security accreditation process until HHS accredits ORR’s system. ORR officials told us in January and April 2015 that they are also working to update their data system to meet HHS security requirements.
In the interim, Border Patrol and ORR are developing a temporary automated process that allows Border Patrol’s system to automatically fill out the referral form when agents create a new apprehension record. Agents are to send an encrypted referral form to ORR via e-mail, and ORR is to import the information directly into its system without ORR staff having to manually enter it. Border Patrol officials said that eliminating the duplicate data entry in the referral process will increase efficiency for Border Patrol. According to Border Patrol officials, automating most of the referral process will decrease UAC processing time by 5 minutes per case, which would have saved a total of about 5,000 hours of agents’ time during fiscal year 2014. Border Patrol officials stated that they would like to pilot the alternative system in one sector in May 2015 before fully implementing it in all sectors, but did not have a time frame for when they would have a fully automated way to communicate with ORR’s data system because of the security accreditation requirement. OFO headquarters officials said that they were unaware that their officers could access ORR’s system and are not involved in or planning any efforts to link OFO’s data system with ORR’s system. In May 2015, ORR officials stated that there are no efforts under way with OFO to link the two data systems. OFO headquarters officials said they have not received feedback from the field about the current referral process. However, OFO officials in the Rio Grande Valley told us that the placement process has cost them time and resources. For example, one POE on the U.S. southwest border set up a port-wide e-mail specifically for ORR placements, which all officers on duty check regularly and which officials reported took significant amounts of time. However, we observed that these port officials continued to handle large quantities of e-mails despite having no UAC in custody at the time of our visit in September 2014.
These officials also reported that placement errors had caused them to transport children to shelters that would not accept them, and that they were not always informed of travel itineraries. Roles, responsibilities, and procedures are not documented. The roles and responsibilities of DHS components are not consistent during the referral and placement process, and DHS points of contact for ORR vary across Border Patrol sectors and ICE and OFO areas of operation. For example, the ICE San Diego field office coordinates placements with ORR on behalf of all Border Patrol stations and POEs in its area of operations, except for two stations in the El Centro sector. However, in the Rio Grande Valley and Tucson stations and POEs, CBP handles referral and placement communications with ORR. Border Patrol officials stated that they have a designated UAC coordinator for every sector along the southern border who is responsible for UAC placement and other tasks. However, ORR sends sector-wide placement e-mails on multiple UAC; CBP officials in the Rio Grande Valley, which has the highest volume of UAC apprehensions, told us that Border Patrol agents and OFO officers spend significant amounts of time searching e-mails for critical information. Also, in e-mail exchanges we analyzed, ORR officials sent one e-mail with placements for 7 UAC to the wrong ICE and CBP region. We also found multiple instances in which different ORR officials sent duplicate placement e-mails for the same UAC. The inefficiencies in the placement process for UAC have been a long-standing challenge for DHS and HHS. In 2005, the DHS OIG found that, despite an initial effort to create a memorandum of understanding that would outline roles and responsibilities, the two departments had not delineated their respective organizational functions regarding UAC. 
In particular, the DHS OIG reported that clear roles and responsibilities across DHS components would help provide a point of contact for entities outside of DHS. Moreover, in 2008, after the HHS OIG recommended that ORR develop a memorandum of understanding with DHS on the shelter placement process, ORR stated that it was coordinating with DHS on a joint manual. However, ORR officials told us that, for unknown reasons, the departments never created such a manual. In addition, ORR officials told us that from 2013 to 2014, ORR and DHS were developing a memorandum of understanding to outline agency responsibilities in the UAC process, but efforts were set aside during the increase in UAC apprehensions in fiscal year 2014 and efforts to develop such a document have not been renewed. In 2013, DHS developed an internal concept of operations to be used in the event of another large increase in UAC apprehensions; however, DHS policy officials told us that this document was never approved or implemented. As of January 2015, DHS has been working under an interim concept of operations while a DHS working group develops a land migration contingency plan that will include procedures for potential future surges in UAC apprehensions. However, according to DHS policy officials, the plan will not address procedures and responsibilities for DHS components to refer and transfer UAC to HHS during normal operating conditions. DHS and HHS have participated in interagency efforts to handle and plan for a potential surge in the apprehension of UAC who require transfer to HHS. In June 2014, the DHS Secretary directed FEMA to take the lead in managing interagency coordination for UAC issues as the head of the Unified Coordination Group, which includes CBP, ICE, the Department of Defense, and HHS. FEMA officials told us that, during the influx in 2014, they identified the UAC placement process as a challenge and helped facilitate ORR placement operations in Arizona as a temporary measure.
As of April 2015, FEMA leads the Unified Coordination Group, which has created an emergency response plan for the Rio Grande Valley region that outlines steps to rapidly increase federal capacity if the number of UAC reaches extremely high levels again. However, the plan does not address the interagency referral and placement process that is in place during routine operations, nor does it address operations outside of the Rio Grande Valley. Best practices of high-performing organizations include fostering collaboration both within and across organizational boundaries to achieve results; moreover, federal programs contributing to the same or similar results should collaborate to ensure that program efforts are mutually reinforcing, and should clarify roles and responsibilities for their joint and individual efforts. Further, agencies should work together to establish a shared purpose and shared goals; develop joint strategies or approaches that complement one another and work toward achieving shared goals; and ensure the compatibility of the standards, policies, procedures, and data systems to be used. We have reported on a range of mechanisms agencies can use to implement these practices for collaboration. These include interagency agreements and memorandums of understanding, as well as collaboration technologies such as shared databases and web portals that help facilitate collaboration. DHS and HHS have utilized some of these mechanisms under emergency operations during the large increase in UAC apprehensions in fiscal year 2014. However, DHS and HHS do not have a documented interagency process that clearly defines the roles and responsibilities of each DHS component and ORR for UAC referrals and placements during normal operations. 
Having documented procedures and defined roles and responsibilities could prevent miscommunication and errors in disseminating placement decisions, reduce time spent on the referral and placement process, and help ensure that UAC are transferred from DHS to HHS within 72 hours under normal operations.

DHS-Negotiated Repatriation Arrangements with Mexico Reflect Some but Not All TVPRA Requirements

DHS has entered into local arrangements with Mexican consulates to ensure the safe and humane repatriation of Mexican nationals, including UAC. However, these arrangements do not reflect minimum TVPRA requirements for agreements with Canada and Mexico with respect to the repatriation of UAC. TVPRA requires that State negotiate agreements with contiguous countries for the repatriation of children. These agreements are to be designed to protect children from severe forms of trafficking in persons and must, at minimum, provide that (1) no child shall be returned unless to appropriate officials, including child welfare officials where available; (2) no child shall be returned outside of reasonable business hours; and (3) border personnel of countries who are parties to the agreements are to be trained in the terms of the agreements. Officials from State’s Bureau of Western Hemisphere Affairs and Office of the Legal Adviser told us that the department has not entered into agreements regarding the repatriation of UAC with Mexico or Canada. Instead, DHS has negotiated and signed repatriation arrangements at the national and local levels with Mexico. According to officials, State has not acted in a formal advisory capacity during the negotiation and implementation of these arrangements in the past because they more directly relate to DHS’s operations. A senior official in the Bureau of Western Hemisphere Affairs told us that State’s role is generally to facilitate cooperation between the government of Mexico and DHS, as necessary.
For example, bureau officials participate in a bilateral, interagency repatriation technical working group, which meets every other month to review repatriation practices. A senior official in the Bureau of Western Hemisphere Affairs told us that through this group, they are able to provide advice and input on DHS-negotiated arrangements and their implementation, such as hours during which UAC can be repatriated. The officials told us that they do not review the local repatriation arrangements to determine if they satisfy TVPRA requirements, but stated that it would be possible to provide such input through State’s role in the working group. In 2004, DHS negotiated a memorandum of understanding with Mexico to ensure the safe and humane repatriation of all Mexican nationals, including UAC. This document contains a set of principles and practices that serve as the basis for 30 local repatriation arrangements negotiated and agreed upon by DHS and Mexican consulates throughout the United States from 2006 through 2009, which remain in effect. As these arrangements and the memorandum of understanding represented the primary agreements with Mexican officials for UAC repatriation, we reviewed them to determine whether they reflected minimum requirements for agreements with contiguous countries regarding UAC repatriation. Our analysis of the memorandum of understanding and local repatriation arrangements indicates that they reflect some, but not all, repatriation requirements in TVPRA. Specifically, the memorandum states that UAC should be returned during daylight hours, where possible to appropriate family welfare officials, but it does not identify appropriate officials or contain any provisions for the training of border personnel in the arrangements. 
Further, our analysis of the 30 DHS-negotiated local repatriation arrangements shows that fewer than one-third contained provisions directing that no UAC be returned unless to appropriate employees or officials, for example, by providing titles of Mexican officials to whom Mexican nationals are to be returned; and only one arrangement identified child welfare representatives. In terms of prohibiting the return of UAC outside of reasonable business hours, fewer than half of the arrangements identified the hours during which UAC could be returned to Mexico. Among those that identified the hours, the allowable hours for repatriation varied widely. In most cases, UAC could not be repatriated later than 6 p.m., but according to one arrangement, UAC can be repatriated until 10 p.m., and another permits UAC to be repatriated at any hour. According to officials from DHS’s policy office, reasonable business hours depend on the availability of Mexican government officials, with whom DHS officials in the field coordinate, to accept UAC. Last, none of the local repatriation arrangements addresses the requirement that border personnel be trained in the terms of the repatriation arrangements. As of February 2015, a DHS policy official said that DHS was updating all 30 arrangements, but that the department did not have plans to make any substantive changes to the provisions affecting the return of UAC. DHS policy officials told us that although the local repatriation arrangements were not initially negotiated with the intent to fulfill TVPRA requirements because many were negotiated before TVPRA was enacted, the department considers these arrangements to broadly reflect TVPRA requirements. However, the officials stated that TVPRA repatriation requirements are not directly outlined in all local repatriation arrangements and acknowledged that none of them addresses the training requirement.
The officials told us that they have not explicitly included TVPRA requirements in the arrangements because they did not deem it necessary to include specific U.S. legislative mandates in bilateral arrangements. Further, DHS headquarters officials told us that they defer to officials in the field when reviewing arrangements and do not generally request changes to a provision unless U.S. or Mexican officials identify a problem that needs to be addressed. However, given that DHS’s local repatriation arrangements serve as the primary agreements governing the repatriation of Mexican UAC, revising the arrangements to reflect minimum requirements of TVPRA, in consultation with State, would help better position DHS to ensure that minimum legislative requirements for agreements designed to protect children from severe forms of trafficking in persons are being met.

Total DHS Costs Associated with UAC Are Unknown, and HHS Costs Were Over $2 Billion for Fiscal Years 2009 through 2014

DHS Began Tracking Some Costs Associated with UAC in 2014

Total DHS costs associated with UAC apprehension, custody, and care are unknown for fiscal years 2009 through 2014. Prior to mid-fiscal year 2014, CBP did not collect data on costs specifically associated with UAC. According to Border Patrol and OFO officials, differentiating UAC costs from costs for accompanied children and adults is difficult because, for example, certain duties such as interviewing UAC are considered part of normal operations. Although ICE also did not collect costs specifically associated with UAC prior to fiscal year 2014, ICE estimated its costs for UAC custody, transfer, and repatriation were approximately $41 million for fiscal years 2009 through 2013. In May 2014, following the DHS Secretary’s memorandum establishing a contingency plan for the large increase in apprehensions of UAC occurring at that time, CBP and ICE implemented project codes to begin tracking some UAC costs.
Costs for these agencies from February through September 2014 totaled approximately $97 million—$67 million (69 percent) for CBP and about $30 million (31 percent) for ICE. Of CBP’s UAC project code costs, about 44 percent was for contracted services; about 23 percent was for construction, rent, and utilities for its UAC processing centers; and about 22 percent was for personnel travel, salaries, overtime, and benefits. Of ICE’s UAC project code costs, about 76 percent was to transport UAC and 18 percent was for ICE salaries, benefits, and overtime. Table 5 shows UAC costs charged to DHS project codes from February through September 2014. To provide additional insight into CBP’s UAC costs, we analyzed project code costs for each CBP office. Specifically, from March through September 2014, CBP charged $67 million to the project codes—about $46 million (69 percent) for Border Patrol, about $16 million (24 percent) for the Office of Administration, about $3.5 million (5 percent) for OFO, and about $1.5 million (2 percent) for other CBP offices. Of Border Patrol’s UAC project code costs, about 70 percent was for contracted services, including food, medical, sanitation, and decontamination, as well as FEMA flight assistance with UAC transport during the large UAC increase, and about 26 percent was for personnel travel, salaries, overtime, and benefits. Of CBP’s Office of Administration’s UAC costs, about 92 percent was for construction, rent, and utilities for UAC processing facilities and about 8 percent was for contracted services, such as temporary air conditioning, waste removal, and custodial services. Of OFO’s UAC costs, about 76 percent was for personnel travel, salaries, overtime, and benefits and about 18 percent was for equipment, supplies, and materials. Table 6 shows additional details of the costs charged to UAC project codes for CBP’s Border Patrol, Office of Administration, OFO, and other offices from March through September 2014. 
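The office-level figures above can be checked for internal consistency. The snippet below is our own illustrative calculation, not part of the report: it sums the reported CBP office costs for March through September 2014 and converts them to percentage shares, reproducing the breakdown cited in the text.

```python
# CBP UAC project code costs by office, March-September 2014, in millions of
# dollars (approximate figures as reported in the text above).
cbp_costs_millions = {
    "Border Patrol": 46.0,
    "Office of Administration": 16.0,
    "OFO": 3.5,
    "Other CBP offices": 1.5,
}

# Sum the office costs and express each as a rounded percentage of the total.
total = sum(cbp_costs_millions.values())
shares = {office: round(100 * cost / total)
          for office, cost in cbp_costs_millions.items()}

print(total)   # 67.0 -- matches the $67 million charged to the project codes
print(shares)  # Border Patrol 69%, Administration 24%, OFO 5%, other 2%
```

The rounded shares (69, 24, 5, and 2 percent) agree with the percentages reported in the paragraph above.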
The project codes are generally used to capture UAC costs that CBP and ICE officials can easily distinguish from costs for accompanied children and adults; however, such UAC costs vary by type across DHS components. For example, Border Patrol tracked costs for UAC custody, care, and transportation such as food, medical services, showers, and other contracts during the large UAC increase, as well as personnel salaries, benefits, and travel expenses for those assigned to provide UAC care and custody. Border Patrol also implemented a second project code to track costs for law enforcement activities to interdict or disrupt the flow of UAC into the United States. However, according to Border Patrol officials, they do not track UAC costs for general activities (such as overhead) because these costs cannot be easily differentiated from costs for other detainees. For OFO, the project code captures costs incurred specifically for UAC, including personnel overtime, procurements, and other direct costs, such as temporary duty travel for personnel to assist with the UAC increase. In general, according to OFO officials, their project code does not capture UAC operational costs, such as employee salaries and overhead, because OFO cannot easily distinguish these costs from costs for adults and accompanied children. For ICE, the project code captures costs for UAC care, transport, and detention as well as personnel, such as salaries and travel to escort UAC. According to CBP and ICE officials, CBP and ICE personnel are required to use the project codes when UAC costs are easily distinguished from costs for other detainees. However, OFO allows its program managers to decide whether it is practical to use the UAC project code. Therefore, DHS costs charged to the project codes represent the minimum UAC costs to the department from February through September 2014. 
HHS Total UAC Costs and Average Daily Cost per Bed in Basic Shelters Have Increased since Fiscal Year 2009

HHS ORR’s costs for its UAC program totaled over $2 billion from fiscal years 2009 through 2014, with about $910 million (45 percent) of the costs incurred during fiscal year 2014. ORR’s UAC program costs consist of three categories: shelter costs, services to UAC, and administrative costs. Shelter costs include care and custody of UAC and account for about 84 percent of total program costs for fiscal years 2009 through 2014. Shelter costs vary by shelter type and location, and UAC length of stay. Services to UAC include medical care, legal services, background checks, and home assessment/post-release services provided by HHS or HHS-contracted organizations, and account for about 12 percent of overall program costs for fiscal years 2009 through 2014. Administrative costs include ORR UAC program staff salaries and benefits, as well as travel and supplies, and account for about 4 percent of overall program costs for fiscal years 2009 through 2014. ORR’s UAC costs increased about 600 percent from fiscal year 2009 to fiscal year 2014, primarily because of the increase in the number of UAC apprehended and transferred to ORR shelters. As shown in table 7, total costs steadily increased during fiscal years 2009 through 2013, with a sharper increase of about 142 percent from fiscal year 2013 to fiscal year 2014. In fiscal years 2012 and 2014, ORR incurred a total of over $260 million in additional costs for temporary shelter beds to accommodate the unexpected increases in UAC. These temporary beds were obtained through additional shelter grants and memorandums of agreement with Department of Defense facilities. Table 8 shows the total costs for UAC in temporary beds during fiscal years 2012 and 2014.

Conclusions

Every year, DHS apprehends tens of thousands of UAC, some of whom may be vulnerable to trafficking or other forms of abuse.
TVPRA requires that DHS establish policies and programs to ensure that UAC in the United States are protected from traffickers and others seeking to victimize or engage such children in harmful activity. Developing and implementing training for OFO officers at airports, who have substantive contact with UAC, could better position those officers to comply with TVPRA requirements. DHS is also responsible for screening UAC and either safely repatriating them or referring them to an HHS shelter. Revising the Form 93 to include indicators or questions for all TVPRA screening criteria that Border Patrol agents and OFO officers are to assess before repatriating a child, and developing written guidance on how they are to implement the trafficking and credible fear criteria, would better ensure that agents and officers have the necessary information to determine outcomes for UAC that are consistent and in accordance with TVPRA requirements and CBP policy. In addition, ensuring that Border Patrol agents consistently document the rationales for their decisions regarding the independent decision and trafficking criteria would allow CBP management to assess on an agency-wide basis whether these decisions, which account for case-by-case factors, were justified and consistent with TVPRA and CBP policy. Determining which agents and officers are required to complete the annual UAC training and ensuring that they have done so, as required, would help CBP to meet training requirements under TVPRA and CBP policies and guidance. In addition to screening responsibilities, Border Patrol agents and OFO officers are also required under TVPRA, the Flores Agreement, and CBP policy to protect and care for UAC while they are in DHS custody. 
In doing so, requiring and ensuring that agents and officers routinely record care actions provided to UAC in an automated manner, and accurately record the length of time the children spend in DHS custody, would better enable DHS managers to ensure they are caring for UAC in accordance with the law, including the 72-hour limit for a child to be in DHS custody once determined to be a UAC, except in exceptional circumstances. Furthermore, developing a documented interagency referral and transfer process with defined roles and responsibilities, as well as procedures to disseminate placement decisions, for each agency involved could better enable agencies to find shelters for UAC in an efficient and effective manner, and with minimal errors. Additionally, as DHS revises the local repatriation agreements it has with Mexico that govern the repatriation of UAC, ensuring that the revised agreements reflect all provisions of TVPRA concerning the repatriation process, in consultation with State, could better ensure that Mexican children are repatriated safely.

Recommendations for Executive Action

To better ensure that DHS complies with TVPRA requirements for training, screening, and transferring UAC to HHS, we recommend that the Secretary of Homeland Security direct the Commissioner of U.S.
Customs and Border Protection to take the following six actions: develop and implement TVPRA training for OFO officers at airports who have substantive contact with UAC; revise the Form 93 to include indicators or questions that agents and officers should ask UAC to better assess (1) a child’s ability to make an independent decision to withdraw his or her application for admission to the United States and (2) credible evidence of the child’s risk of being trafficked if returned to his or her country of nationality or last habitual residence; provide guidance to Border Patrol agents and OFO officers that clarifies how they are to implement the TVPRA requirement to transfer to HHS all Mexican UAC who have a fear of returning to Mexico owing to a credible fear of persecution; develop and implement guidance on how Border Patrol agents and OFO officers are to implement the TVPRA requirement to transfer to HHS all Canadian and Mexican UAC who are victims of a severe form of trafficking in persons; ensure that Border Patrol agents document the basis for their decisions when assessing screening criteria related to (1) an unaccompanied alien child’s ability to make an independent decision to withdraw his or her application for admission to the United States, and (2) whether UAC are victims of a severe form of trafficking in persons; and determine which agents and officers who have substantive contact with UAC are required to complete the annual UAC training, and ensure that they do so, as required. To help ensure that CBP has complete and reliable data needed to ensure compliance with care requirements under the Flores Agreement and CBP policies, we recommend that the Commissioner of U.S. Customs and Border Protection take the following two actions: require that OFO officers record care provided to UAC in an automated manner, and ensure that Border Patrol agents record care provided to UAC in Border Patrol’s automated system, as required.
To help ensure that DHS has complete and reliable data needed to ensure compliance with the UAC time-in-custody requirement under TVPRA and for required reports on UAC time in custody under the Flores Agreement, we recommend that the Secretary of Homeland Security take the following two actions: require OFO officers to record data in their automated system when UAC leave OFO custody in order to track the length of time UAC are in OFO custody, and require ICE officers to record accurate and reliable data in their automated system when UAC leave ICE custody in order to track the length of time UAC are in ICE custody. To increase the efficiency and improve the accuracy of the interagency UAC referral and placement process, we recommend that the Secretaries of Homeland Security and Health and Human Services jointly develop and implement a documented interagency process with clearly defined roles and responsibilities, as well as procedures to disseminate placement decisions, for all agencies involved in the referral and placement of UAC in HHS shelters. To ensure that minimum legislative requirements to protect UAC from severe forms of trafficking in persons are in repatriation agreements with Mexico and are met, we recommend that the Secretary of Homeland Security, in coordination with the Secretary of State, ensure that TVPRA requirements for these agreements are reflected in local repatriation arrangements as DHS renegotiates these arrangements with Mexico.

Agency Comments and Our Evaluation

We provided a draft of this report to DHS, HHS, DOJ, and State for their review and comment. DHS and HHS provided formal, written comments, which are reproduced in full in appendixes V and VI, respectively. DHS and HHS also provided technical comments on our draft report, which we incorporated as appropriate. DOJ and State did not have formal comments on our draft report; DOJ provided technical comments, which we incorporated as appropriate.
DHS concurred with our 12 recommendations and described actions underway or planned to address them. In particular, DHS indicated, among other things, that CBP will develop training on identifying and screening UAC for OFO officers at airports who have substantive contact with UAC, issue guidance to field personnel emphasizing TVPRA transfer procedures for Mexican UAC who have fear of returning to Mexico owing to a credible fear of persecution, and issue guidance clarifying TVPRA transfer procedures for UAC who are nationals or habitual residents of Canada or Mexico who are victims of a severe form of trafficking in persons. In commenting on our draft report, DHS also stated that CBP has established a working group to review and revise elements of the TVPRA-related questions on the Form 93 to better assess a child’s ability to make decisions and the risk of trafficking if the child is returned, as appropriate. DHS also stated that CBP will explore adding a mechanism in its automated system to document an unaccompanied alien child’s ability to make an independent decision to withdraw his or her application for admission to the United States. DHS indicated, among other things, that OFO will work with CBP’s Office of Information and Technology to make technological changes to its automated system to enable it to record appropriate care actions for all UAC and record when UAC enter into, and are transferred from, OFO’s custody. DHS also indicated that ICE plans to develop guidance, including time frames for data entry, on proper UAC book-in and book-out procedures for recording UAC’s time in ICE custody in its automated system. 
Regarding our recommendation that DHS and HHS jointly develop and implement a documented interagency process for the placement and referral of UAC in HHS shelters, DHS indicated that DHS’s Office of Policy will convene a meeting with appropriate agency officials to initiate a plan for close coordination, and that staff will work toward an institutionalized coordinating framework. Regarding our recommendation that DHS, in coordination with State, ensure that TVPRA requirements are reflected in agreements with the government of Mexico, DHS indicated that the departments have begun negotiations with Mexico on options for including in the arrangements additional and specific references to TVPRA requirements. DHS also noted, and we acknowledge, that the extent to which proposed changes to the agreements are accepted will be dependent on the binational negotiations with Mexico. These and other actions that DHS indicated are planned or under way should help address the intent of our recommendations if implemented effectively. HHS concurred with our recommendation that DHS and HHS jointly develop and implement a documented interagency process for the placement and referral of UAC in HHS shelters. HHS stated that the department will fully support efforts to document the interagency process used in the UAC referral and placement process. We are sending copies of this report to interested congressional committees, the Secretaries of Homeland Security, Health and Human Services, and State, as well as the Attorney General of the United States. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. 
Appendix I: Eligibility of Unaccompanied Alien Children for Federal Public Benefits
With certain exceptions, unaccompanied alien children (UAC) are generally not eligible to receive federal public benefits because these children lack lawful immigration status in the United States and are not considered qualified aliens. According to Department of Health and Human Services (HHS) Office of Refugee Resettlement (ORR) officials, UAC cannot receive federal benefits because they do not meet the definition of a qualified alien as defined in the Personal Responsibility and Work Opportunity Reconciliation Act of 1996. ORR officials stated that, as a result, UAC in HHS custody are not eligible for the Temporary Assistance for Needy Families program, the Supplemental Nutrition Assistance Program, Supplemental Security Income, Medicaid, or other federal benefit programs. However, under specific statutory exceptions applicable to aliens generally, UAC may be eligible for certain federal public benefits regardless of alienage, even though they do not meet the definition of "qualified alien." The exceptions are as follows: emergency medical assistance; short-term noncash in-kind emergency disaster relief; public health assistance for immunizations and treatment of communicable diseases; programs such as soup kitchens that deliver in-kind services at the community level; and the U.S. Department of Agriculture's school meals programs. ORR officials stated that after UAC transition out of ORR custody and are placed with sponsors, eligibility for federal benefits would generally depend on whether or not the sponsors pursue immigration status on behalf of the UAC. If ORR is unable to locate sponsors for UAC, the UAC remain in ORR shelters until they turn 18 years of age and are then treated as adult aliens.
Appendix II: Objectives, Scope, and Methodology
Our objectives were to determine (1) the extent to which the Department of Homeland Security (DHS) has developed and implemented policies and procedures to ensure that all unaccompanied alien children (UAC) are screened as required; (2) the extent to which DHS and the Department of State (State) have developed and implemented policies and procedures to ensure that UAC are cared for as required while in DHS custody and during repatriation; and (3) the costs associated with apprehending, transporting, and caring for UAC in DHS and Department of Health and Human Services (HHS) custody during fiscal years 2009 through 2014. To determine the extent to which DHS developed and implemented policies and procedures to ensure that all UAC are screened as required, we reviewed U.S. Customs and Border Protection (CBP) policies, procedures, and training to screen UAC, including the March 2009 memorandum Implementation of the William Wilberforce Trafficking Victims Protection Reauthorization Act (TVPRA); the CBP Form 93 UAC Screening Addendum—the primary tool Border Patrol agents and Office of Field Operations (OFO) officers use to make trafficking and credible fear determinations and document screening assessments—and CBP's human trafficking and UAC virtual learning course. Our scope did not include the screening of UAC apprehended by U.S. Immigration and Customs Enforcement (ICE) because ICE does not conduct TVPRA screenings. We obtained available fiscal year 2011 through fiscal year 2014 Border Patrol and OFO data on the percentage of agents and officers who completed the annual UAC training. On the basis of interviews with CBP officials and written responses these officials provided to explain how they document completion of UAC training, we determined that the data were not sufficiently reliable to report training completion rates. We assessed CBP training efforts against TVPRA training requirements and training best practices.
We also analyzed fiscal year 2009 through 2014 Border Patrol and fiscal year 2012 through 2014 OFO UAC apprehension data—the most recent years for which complete UAC data were available—to determine the outcome of the screening process for UAC from Canada and Mexico, as well as UAC from other countries. In addition, we obtained and analyzed Border Patrol data on UAC who claimed fear of return to their country for fiscal year 2014, the most recent year for which data were available. Further, we analyzed Border Patrol's, OFO's, and ICE's fiscal years 2009 through 2014 apprehension data to determine whether DHS transferred UAC to an HHS shelter or repatriated them, as well as demographic trends, such as the age, gender, and country of origin of UAC. We assessed the reliability of the data by (1) reviewing related documentation, such as data fields, the database schema, and database training materials; (2) interviewing CBP officials responsible for ensuring data quality; (3) reviewing the data for missing data or obvious errors; (4) comparing selected data fields with information from UAC case files and HHS data; and (5) tracing certain data fields to source documents. During our assessment, we found some inconsistencies in the disposition, or outcome, data field when conducting internal checks of the data. We rounded this information to the nearest hundred for reporting purposes. We found Border Patrol and OFO's apprehension data to be sufficiently reliable for the purposes of determining whether CBP transferred UAC to an HHS shelter or repatriated them to their countries of nationality or last habitual residence. We also found the DHS apprehension data to be sufficiently reliable for the purpose of reporting UAC demographic trends, such as age, gender, and country of nationality.
Further, we reviewed 21 OFO and 20 Border Patrol nongeneralizable, randomly selected cases of non-Canadian and non-Mexican UAC apprehended at land borders and ports of entry (POE) during fiscal year 2014 that CBP databases indicated were repatriated, to determine if the disposition for each child was correct. Because neither Border Patrol nor OFO stores complete information on screening decisions in its database in an aggregate manner, we analyzed fiscal year 2014 case files of Mexican UAC whom Border Patrol apprehended. We selected this population because Mexican UAC apprehended by Border Patrol account for 90 percent of the Canadian and Mexican UAC whom CBP apprehended that year. Specifically, we drew a stratified random sample of 180 Mexican UAC from a study population of 15,531 Mexican UAC who were recorded in Border Patrol's database as having been apprehended in fiscal year 2014. We selected these 180 UAC with probabilities proportionate to the number of Mexican UAC from two strata defined by the outcome of the UAC screening process—repatriation to Mexico or transfer to an HHS shelter. With this random sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any unaccompanied alien child. The sample size for the stratum of UAC that Border Patrol repatriated to Mexico was 97 out of the 14,931 that the Border Patrol database showed were repatriated in fiscal year 2014. The sample size for the stratum of UAC that Border Patrol transferred to HHS was 84 of the 600 Mexican UAC whom Border Patrol's database showed were transferred to HHS that year. In reviewing the case file information for these UAC, we found that the outcome listed for one child did not match the outcome information in the CBP database, and we excluded that case. As a result, the sample for the stratum of UAC that Border Patrol transferred to HHS decreased from 84 to 83 cases.
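The stratified sample design described above can be sketched in code. The stratum and population counts are taken from this appendix, but the selection routine, variable names, and seed are purely illustrative and are not GAO's actual sampling procedure:

```python
import random

# Strata from the report: Mexican UAC recorded in Border Patrol's
# database for fiscal year 2014, grouped by screening outcome.
STRATA = {
    "repatriated_to_mexico": {"population": 14_931, "sample_size": 97},
    "transferred_to_hhs": {"population": 600, "sample_size": 84},
}

def draw_stratified_sample(strata, seed=0):
    """Draw a simple random sample without replacement within each stratum.

    Returns a dict mapping stratum name to a sorted list of selected
    0-based indices (stand-ins for actual case identifiers). Because
    selection is random within each stratum, every member of the study
    population has a known, nonzero probability of inclusion.
    """
    rng = random.Random(seed)
    return {
        name: sorted(rng.sample(range(s["population"]), s["sample_size"]))
        for name, s in strata.items()
    }

sample = draw_stratified_sample(STRATA)
assert len(sample["repatriated_to_mexico"]) == 97
assert len(sample["transferred_to_hhs"]) == 84
```

Sampling within each stratum separately guarantees both outcomes are represented even though transfers to HHS made up only about 4 percent of the study population.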
We analyzed case file information, including CBP Form 93s, to determine if CBP screening complies with relevant TVPRA requirements and CBP policies and procedures. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates from the case file review have margins of error at the 95 percent confidence level of plus or minus 10 percentage points or less. All numerical estimates other than percentages in this report are presented along with their margins of error at the 95 percent confidence level. Further, we visited 11 Border Patrol facilities and 4 land POEs, for a total of 15 CBP facilities, in three regions—Arizona (July 2014), south Texas (September 2014), and southern California (October 2014)—to, among other things, observe DHS screening operations and interview Border Patrol agents, OFO officers, and Mexican consular officials regarding implementation of UAC screening policies and procedures. We selected Arizona and south Texas because they have historically had the most UAC apprehensions by Border Patrol, while southern California has had the most UAC encounters by OFO. The locations were also chosen for geographic variability. During our visit to south Texas, we observed the screening of 8 UAC by Border Patrol agents. The results from our visits to these three regions cannot be generalized; however, the visits provided us with first-hand observations of UAC screening practices and insights regarding how Border Patrol agents and OFO officers implement screening policies and procedures.
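As a rough illustration of the stated precision, the margin of error for a sample proportion can be approximated with the normal approximation and a finite population correction. This is a simplified sketch, not the estimator GAO actually used for the stratified design, and the inputs below are illustrative:

```python
import math

def margin_of_error_95(p_hat, n, population):
    """Approximate 95 percent margin of error for a sample proportion,
    using the normal approximation with a finite population correction.
    """
    z = 1.96  # critical value for a two-sided 95 percent confidence interval
    fpc = math.sqrt((population - n) / (population - 1))
    standard_error = math.sqrt(p_hat * (1 - p_hat) / n) * fpc
    return z * standard_error

# Illustrative only: a proportion of 0.5 (the worst case for precision)
# estimated from the repatriated stratum of 97 cases out of 14,931.
moe = margin_of_error_95(0.5, 97, 14_931)
assert moe < 0.10  # consistent with "plus or minus 10 percentage points or less"
```

The worst-case proportion of 0.5 maximizes the margin of error, so any other estimated percentage from the same sample would have a tighter bound.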
In addition, we interviewed headquarters Border Patrol, OFO, and Office of Training and Development officials to discuss CBP's UAC screening policies, procedures, and training. We compared CBP's UAC screening efforts discussed during these visits and interviews, and the screening information obtained from our analyses of CBP information, with TVPRA screening requirements and CBP policy, as well as standards in Standards for Internal Control in the Federal Government. In addition, we reviewed studies on the screening of Mexican UAC completed by the nongovernmental organization Appleseed in 2011 and the United Nations High Commissioner for Refugees (UNHCR) in June 2014. Our analysis included reviewing the reports' methodologies and discussing the reports with both organizations, as well as with CBP. Further, we reviewed a September 2010 DHS Office of Inspector General (OIG) report on the treatment of UAC in CBP custody, which discussed CBP's UAC training efforts. Our analysis included reviewing the report's methodology. We also met with DHS Office for Civil Rights and Civil Liberties officials who conducted visits to south Texas CBP facilities in July 2014 to investigate civil rights complaints submitted by UAC, and discussed their methodology and findings regarding the screening of UAC. As a result of our review and analysis, we determined that the conclusions in these studies and their results were valid and reasonable for use in our report. To determine the extent to which DHS and State developed and implemented policies and procedures to ensure that UAC are cared for as required while in DHS custody and during repatriation, we reviewed CBP and ICE policies and procedures on how to care for UAC, which included the March 2009 memorandum Implementation of the TVPRA, Border Patrol's June 2008 Hold Rooms and Short Term Custody memorandum, and the August 2008 OFO Directive, Secure Detention, Transport, and Escort Procedures.
We also evaluated ICE policies, such as ICE Directive 11087.1, Operations of Holding Facilities, September 2014. We compared these against TVPRA and Flores Agreement requirements related to care. Further, we reviewed DHS UAC training documents, including CBP's human trafficking virtual learning course and ICE's UAC and Field Office Juvenile Coordination training. In addition, we reviewed September 2005 and September 2010 DHS Office of Inspector General reports on DHS's responsibilities for juveniles and the treatment of UAC in CBP custody, respectively, which discussed DHS's and CBP's care of UAC. During our site visits to Arizona, southern California, and south Texas, we observed the care of UAC at 15 CBP facilities and discussed care of UAC with officials at each facility. The results from our visits are specific to the care observed at facilities in these three regions at a specific point in time and cannot be generalized; however, the visits provided us with first-hand observations of the care provided to UAC and the conditions of facilities in which UAC are held during various operating conditions, including surge and normal operations. To ensure that we observed and discussed care systematically at each facility, we developed a care checklist with the eight elements of care required by Border Patrol and OFO policy, as well as three additional elements required only by Border Patrol policy. We evaluated these elements of care at each facility we visited based on the totality of our observations and interviews. For example, we observed and discussed with relevant officials the presence or absence of elements such as access to drinking water, food, toilets, and sinks. On the basis of our observations and interviews, we determined that CBP had implemented its care policies at these facilities if at least 80 percent of the facilities we visited were generally providing care consistent with policy requirements at the time of our visits.
However, to assess CBP implementation of the requirement that UAC be “held in the least restrictive setting appropriate for their age and special needs,” we evaluated our observations and interviews to determine whether there was (1) no less restrictive environment available, and if there was a less restrictive environment, why the child was not placed in the least restrictive setting and (2) evidence that agents and officers considered requisite factors, including the child’s age, special needs, and particular vulnerability as a minor. However, we did not discuss every element at each facility we visited in Arizona because we developed the checklist after the visit to Arizona. We analyzed several data sets of information on DHS’s care of UAC. OFO and ICE do not collect data on the care of UAC in their custody in an accessible format, so we analyzed only Border Patrol data on the care of UAC in Border Patrol custody. We examined the Border Patrol policy on documentation of care, OBP 50/10.9-C Use of the e3 Juvenile Detention Module, April 2012, which requires agents to document all care of UAC in their automated system, the e3 Detention Module. Border Patrol officials told us they could not provide us with complete care data prior to fiscal year 2014 because they did not fully implement their data system until then. We analyzed data on the care of 55,905 UAC in Border Patrol custody from January through September 2014. To assess the reliability of these data, we interviewed Border Patrol officials in headquarters and in Arizona, Texas, and California, who are responsible for managing or entering the data, and examined the data for completeness and potential errors. 
Because our initial analysis of the data showed potential errors, and because Border Patrol officials told us that they were unsure about the consistency with which agents in the field used Border Patrol's automated system, we determined that the Border Patrol custody data were not sufficiently reliable for the purposes of determining if Border Patrol agents cared for UAC as required. However, we determined that the data were sufficiently reliable for the purposes of determining how consistently agents used the automated system to document care of UAC and the frequency and type of potential errors, if any, in the data. To determine how consistently agents entered care information into Border Patrol's automated system, we analyzed the completeness of the custody data by cross-checking the data with Border Patrol apprehension data from January through September 2014. We measured if, and how often, meals and welfare checks were entered for each child in custody. We used meals and welfare checks because both actions are required by policy for all UAC in custody and must be performed at frequent enough intervals that agents should perform each action at least once for every child in custody, according to policy. We also analyzed how frequently agents used each care action that is available in the system. In addition, we interviewed Border Patrol officials in Arizona, Texas, and California on their use of the automated system. In our initial data analysis, we found that agents were entering the same care action multiple times in a short time period. Therefore, in our full data analysis, we measured the frequency of these potential errors by measuring how often a care action was entered more than once within a short time period. When agents entered an action more than once, the entries were not recorded simultaneously, so we could not identify these actions as exact duplicates and, therefore, as definite errors.
However, these actions, such as meals, phone calls, or showers, were unlikely to have taken place more than once in a short time period. Therefore, we measured the frequency with which actions were entered within three time intervals: 10 minutes, 5 minutes, and 2 minutes. We selected these intervals because, while it is possible that an action could, on occasion, be performed more than once within a 10-minute period, it is unlikely that any of the actions would be performed more than once within a 2-minute interval. Thus, the 2-minute interval most strongly suggests error. However, because the volume of likely errors made it prohibitive for us to verify if each likely error was in fact an error, our analysis identifies those actions that we determined to be likely errors. We also interviewed Border Patrol officials in Texas, California, and headquarters to understand why these likely errors may be occurring. We compared these care efforts against the Flores Agreement, DHS policies, and standards in Standards for Internal Control in the Federal Government. With regard to time in custody, Border Patrol provided us with the dates and times that UAC were booked in to Border Patrol custody for UAC in custody from January 2014 through September 2014, which also contained information on book outs and transfers. OFO provided us with the dates and times that UAC were booked in and ICE provided book in dates, but neither DHS component was able to provide the dates and times that UAC were booked out. Additionally, we examined documentation about system requirements regarding time in custody for all components, and interviewed Border Patrol, OFO, and ICE officials about the reliability of the time-in-custody fields in their systems. On the basis of our analysis of Border Patrol data, and interviews with Border Patrol officials, we found Border Patrol time-in-custody data to be reliable. 
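The interval analysis described above could be implemented along the following lines. The record format, field names, and sample entries here are hypothetical, not Border Patrol's actual e3 data layout:

```python
from datetime import datetime, timedelta

def count_likely_duplicates(entries, window_minutes):
    """Count care-action entries repeated for the same child and action
    within a given time window (flagged as likely data-entry errors).

    entries: list of (child_id, action, timestamp) tuples.
    """
    window = timedelta(minutes=window_minutes)
    # Group timestamps by (child, action), then compare consecutive entries.
    groups = {}
    for child_id, action, ts in entries:
        groups.setdefault((child_id, action), []).append(ts)
    likely = 0
    for times in groups.values():
        times.sort()
        for earlier, later in zip(times, times[1:]):
            if later - earlier <= window:
                likely += 1
    return likely

# Hypothetical records: two meals logged 3 minutes apart for one child.
entries = [
    ("A1", "meal", datetime(2014, 7, 1, 12, 0)),
    ("A1", "meal", datetime(2014, 7, 1, 12, 3)),
    ("A1", "welfare_check", datetime(2014, 7, 1, 13, 0)),
]
assert count_likely_duplicates(entries, 10) == 1
assert count_likely_duplicates(entries, 2) == 0
```

Running the same check at 10-, 5-, and 2-minute windows, as the report describes, yields progressively stricter counts, with the 2-minute window most strongly suggesting error.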
On the basis of interviews with OFO and ICE officials, we found the OFO and ICE time-in-custody data to be unreliable. As a result, we could not report the time UAC spent in DHS custody. We also reviewed reports listing the UAC who were in DHS custody longer than the 72 hours permitted by statute, which ICE compiles and submits to the Department of Justice every 6 months for submission to plaintiffs' counsel as required by the Flores v. Reno Stipulated Settlement Agreement. Our analysis of the reports from 2009 to 2013 showed missing fields, dates that indicated children were in custody for less than 72 hours, and apprehension dates that occurred after the transfer to HHS's Office of Refugee Resettlement (ORR). Additionally, for UAC apprehended by Border Patrol, we compared the dates of apprehension documented in the reports with Border Patrol's apprehension data and found that many of the apprehension dates were entered incorrectly by ICE in the reports. We also interviewed ICE officials in headquarters and the field about the reports, and officials at the Department of Justice who receive the reports. On the basis of our analysis and interviews, we found the reports unreliable for determining the number of UAC who were in DHS custody for longer than 72 hours. We compared these efforts against the TVPRA requirement for transferring UAC and standards in Standards for Internal Control in the Federal Government. To determine the effectiveness of the process to transfer UAC from DHS to HHS, we examined e-mails between ORR intake and Border Patrol in the Rio Grande Valley sector to place UAC in shelters. We analyzed 111 e-mails to place 593 UAC from July 6 to 8, 2014, and 65 e-mails to place 135 UAC from November 17 to 19, 2014.
We chose these dates so we could analyze communications during the surge of UAC (in July) and after the surge (in November), and we chose the Rio Grande Valley sector because (1) over 70 percent of all UAC apprehended by Border Patrol in fiscal year 2014 were apprehended in that sector, and (2) Rio Grande Valley officials had an in-box e-mail account set up for ORR placement e-mails that allowed the officials to easily extract them. We analyzed the e-mails for cancellations, redesignations, and multiple placements and, on the basis of the rationales in the e-mails, identified and categorized the reasons for cancellations, redesignations, and multiple placements. Our conclusions are not generalizable but provide important insight into the placement process. We observed and discussed the interagency transfer process in Arizona, Texas, and California, and interviewed DHS Policy, Federal Emergency Management Agency (FEMA), Border Patrol, OFO, ICE, and ORR officials in headquarters about the interagency process. In addition, we reviewed September 2005 and September 2010 DHS OIG reports on DHS's responsibilities for juveniles and the treatment of UAC in CBP custody, respectively, which discussed roles and responsibilities for UAC within DHS and the interagency process to transfer UAC to HHS. Our analysis included reviewing the reports' methodologies. We also reviewed a 2008 HHS OIG report on ORR's efforts that discussed interagency coordination with DHS for UAC and reviewed the report's methodology. Additionally, we analyzed plans developed by DHS to respond to a UAC influx, which included provisions for interagency coordination, such as an interim 2015 concept of operations developed for internal use by DHS. We also reviewed an emergency response plan created by the Unified Coordination Group, which includes both DHS and HHS.
We compared these efforts at interagency coordination, and information contained in the IG reports about the prior efforts to coordinate, against best practices and mechanisms for interagency collaboration. We also visited two HHS shelters in Arizona and spoke with grantees and shelter employees at these sites to better understand the transfer process. To understand the repatriation process, we interviewed Border Patrol, OFO, and Mexican consular officials in Arizona, Texas, and California, as well as DHS policy and ICE officials in Washington and officials from the Department of State's Bureau of Western Hemisphere Affairs. We also analyzed all 30 DHS-negotiated local repatriation arrangements and the 2004 memorandum of understanding between DHS and Mexico regarding the repatriation of Mexican nationals. We compared provisions for UAC in the arrangements with TVPRA requirements for the content of agreements with contiguous countries regarding the repatriation of UAC. Where the arrangements did not include provisions specific to UAC but did include provisions that would reasonably apply to UAC (for example, provisions that applied to all aliens being repatriated), we evaluated the more general provisions against these TVPRA requirements. To identify costs associated with apprehending, transporting, and caring for UAC in DHS custody, we obtained and analyzed financial data from CBP and ICE for fiscal year 2014. Because CBP and ICE did not collect data on UAC costs prior to fiscal year 2014, we analyzed available financial data collected using UAC project codes from February through September 2014 to determine DHS's UAC costs. To determine the extent to which UAC costs were identified, we reviewed policies and procedures for tracking UAC costs and interviewed CBP and ICE officials. Additionally, we analyzed UAC transportation cost estimates developed by ICE to determine the estimated UAC transportation costs from fiscal years 2009 through 2013.
These estimates were derived by ICE using fiscal year 2013 average costs and applied throughout fiscal years 2009 through 2013, adjusting for average annual inflation. We assessed the reliability of the cost data by (1) reviewing related documentation, such as financial statement audits and prior GAO work; (2) interviewing CBP and ICE officials responsible for ensuring cost data quality; and (3) looking for missing cost items or obvious errors. We determined that CBP’s and ICE’s financial project code costs were sufficiently reliable for the purposes of determining a minimum UAC cost for apprehending, transporting and caring for UAC while in DHS custody. We also determined that ICE’s transport cost data were sufficiently reliable for the purposes of determining an estimate of ICE’s UAC transport costs for fiscal years 2009 through 2013. To identify costs associated with UAC in HHS custody, we obtained and analyzed financial summary data for HHS ORR’s UAC program. Specifically, we analyzed UAC program cost data reported by ORR as of April 23, 2015—the most recent data available—to determine the total shelter, administrative, and UAC services costs. These data include summary information on end-of-year obligations by cost category compiled by ORR for fiscal years 2009 through 2014. We assessed the reliability of the data by (1) reviewing related documentation, such as prior GAO work; (2) comparing data against data in published sources, such as ORR’s Annual Report to Congress; and (3) interviewing ORR officials knowledgeable about the data. We asked the officials about the reliability of their data—including questions about the purpose for which the data were collected, the source of the data, and how the data were compiled. We determined that ORR’s financial summary data for its UAC program were sufficiently reliable for the purposes of reporting total program costs based on end-of-year obligations. 
To determine the average cost per bed for basic shelters, we analyzed cost data for fiscal years 2009 through 2014 for UAC shelters by type of shelter, which was compiled by ORR and reported as of April 23, 2015— the most recent data available. We also analyzed the average monthly funded bed capacity for basic shelters reported by ORR as of April 7, 2015—the most recent data available. Since bed capacity fluctuates throughout the year, we averaged the average monthly funded bed capacity data to determine the average annual number of beds. We then calculated the average annual cost per bed for basic shelters by dividing the shelter costs by the average annual number of beds for basic shelters. For the average daily costs, we divided the average annual cost per bed by 365 days. According to ORR officials, actual cost per bed varies based on shelter type, location, actual number of beds, and actual number of days for which beds were contracted. We assessed the reliability of the data by (1) reviewing related documentation, such as prior GAO work; (2) comparing data against data in published sources, such as ORR’s Annual Report to Congress; and (3) interviewing ORR officials who were knowledgeable about the data. We asked them data reliability questions, including questions about the purpose for which the data were collected, the source of the data, and how the data were compiled. We determined that ORR’s shelter costs and average monthly funded bed capacity data for basic shelters were sufficiently reliable for the purposes of reporting the average cost per bed for basic shelters. On the basis of information provided by HHS, we are unable to report on the average cost per bed for secure and therapeutic shelters because the annual cost and bed capacity data were not comparable across these shelter types. As a result, we focused on average cost per bed in basic shelters. 
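The cost-per-bed arithmetic described above reduces to a few divisions, sketched below. The dollar amounts and bed counts in this example are placeholders, not ORR's actual cost or capacity data:

```python
def average_cost_per_bed(annual_shelter_cost, monthly_funded_beds):
    """Compute average annual and daily cost per basic-shelter bed.

    Bed capacity fluctuates during the year, so the monthly funded
    capacities are averaged to get an average annual number of beds.
    """
    avg_beds = sum(monthly_funded_beds) / len(monthly_funded_beds)
    annual_per_bed = annual_shelter_cost / avg_beds
    daily_per_bed = annual_per_bed / 365
    return annual_per_bed, daily_per_bed

# Placeholder inputs: $7.3 million in annual basic-shelter costs and
# funded capacity holding steady at 200 beds each month.
annual, daily = average_cost_per_bed(7_300_000, [200] * 12)
assert round(annual) == 36_500
assert round(daily) == 100
```

As the report notes, actual costs per bed vary with shelter type, location, actual bed counts, and the number of days for which beds were contracted, so this average is only a summary figure.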
In addition to analyzing UAC costs for DHS and HHS, we analyzed grant funding documentation for 5 of HHS ORR’s 34 UAC shelter grantees for fiscal year 2013. We selected the 5 grantees to reflect a variety of shelters based on the number of beds, dollar amount of the grant, type of shelter, and geographic location. We analyzed the federal financial report (White House Office of Management and Budget Standard Form 425) filed at the end of fiscal year 2013 for each grant to determine the total amount authorized and expended for each grantee. Further, we analyzed each grantee’s funding application (White House Office of Management and Budget Standard Forms 424 and 424A) to determine the budgeted costs for each cost category. In some cases, a grantee may have had multiple grants or multiple funding requests within a single grant, which we included in our analysis. Additionally, we obtained data, reported by ORR as of April 7, 2015—the most recent data available—for the total number of beds by shelter type for fiscal year 2013 for each of the 5 grantees. We assessed the reliability of the data by comparing the data with shelter types identified in source documentation and interviewing ORR officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of determining the maximum funded bed capacity for each sampled grantee. We conducted this performance audit from May 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix III: Summary Statistics for Unaccompanied Alien Children Apprehended by the Department of Homeland Security during Fiscal Years 2009 through 2014

Within the Department of Homeland Security (DHS), U.S. Customs and Border Protection's (CBP) U.S. Border Patrol and Office of Field Operations (OFO), and U.S. Immigration and Customs Enforcement (ICE) apprehend, process, detain, and temporarily care for unaccompanied alien children (UAC)—individuals less than 18 years old with no lawful immigration status and no parent or legal guardian in the United States available to provide care and physical custody. Border Patrol apprehends UAC at U.S. borders between ports of entry (POE), and OFO apprehends UAC at POEs. ICE apprehends UAC within the United States at locations other than borders or POEs. Border Patrol accounted for 186,233 (about 90 percent) of DHS's UAC apprehensions during fiscal years 2009 through 2014, and Border Patrol apprehended over 75 percent of these UAC in two sectors—about 52 percent in Rio Grande Valley, Texas, and about 25 percent in Tucson, Arizona. Figure 8 shows the number of UAC apprehensions by Border Patrol on a year-by-year basis in its Rio Grande Valley and Tucson sectors and all other sectors for fiscal years 2009 through 2014. Figure 8 also illustrates that the Rio Grande Valley's share of all Border Patrol apprehensions continually increased and accounted for most of the increase in Border Patrol apprehensions in the past 6 years. OFO apprehended fewer UAC than Border Patrol during fiscal years 2012 through 2014, and about one-third of OFO's apprehensions occurred in its San Diego, California, field office, which includes the San Ysidro and Otay Mesa POEs. As shown in figure 9, the number of UAC apprehensions in the Tucson, Arizona, and Laredo, Texas, field offices continually increased each year from fiscal year 2012 through 2014.
As shown in table 10, most of the increase in DHS apprehensions of UAC from fiscal years 2009 through 2014 came from three Central American countries: El Salvador, Guatemala, and Honduras. UAC from Mexico continued to account for a significant number of apprehensions—between 12,000 and 19,000 a year—during this 6-year period. However, starting in fiscal year 2013, the total number of UAC from these three Central American countries surpassed the number of UAC from Mexico and, in fiscal year 2014, far surpassed it. During fiscal years 2009 through 2014, UAC from Guatemala, Honduras, and El Salvador whom DHS apprehended were generally younger than UAC from Mexico. Specifically, as shown in figure 10, over 25 percent of UAC from Honduras and El Salvador, and 12 percent of UAC from Guatemala, were younger than 14 years old, compared with 8 percent of UAC from Mexico. Further, the number of younger UAC whom DHS apprehended has increased both in absolute terms and as a percentage of total apprehensions since fiscal year 2009. For example, figure 11 shows that the percentage of apprehended UAC under the age of 14 increased from 11 percent in fiscal year 2009 to 23 percent in fiscal year 2014. In addition, the composition of male and female UAC changed during fiscal years 2012 through 2014. Specifically, as shown in figure 12, the percentage of male UAC decreased from 82 percent in fiscal year 2012 to 70 percent in fiscal year 2014, while the percentage of female UAC apprehended by DHS increased from 18 percent in fiscal year 2012 to 30 percent in fiscal year 2014.
Appendix IV: Costs for Unaccompanied Alien Children at Five Shelter Grantees

In addition to analyzing costs associated with apprehending and caring for unaccompanied alien children (UAC) for the Department of Homeland Security (DHS) and the Department of Health and Human Services (HHS), we analyzed financial records for 5 of HHS's Office of Refugee Resettlement's (ORR) 34 UAC shelter grantees for fiscal year 2013. We selected the 5 grantees to reflect a variety of shelters based on the number of beds, dollar amount of the grant, type of shelter, and geographic location. Types of shelter costs include administrative costs; shelter personnel salaries, benefits, training, and travel; operational expenses, such as building maintenance and utilities; and services and supplies provided to UAC, such as food, clothing, first aid, and education. Additionally, some grantees incur costs associated with foster care, such as foster parent reimbursements and training. Table 11 shows budgeted costs by cost category for each of the 5 sampled grantees as well as consolidated actual costs based on each grantee's year-end financial status reports. As shown in the table, total budgeted and actual costs varied among the 5 grantees from about $3 million to about $86 million for fiscal year 2013. Labor costs (personnel salaries and fringe benefits) accounted for 58 to 66 percent of each grantee's overall budgeted costs, and supplies such as clothing, household items, and educational materials accounted for 2 to 8 percent of overall budgeted costs. During fiscal year 2013, three "mega grantees" operated shelters in multiple states and had more than one type of shelter, whereas other grantees operated only one type of shelter. As shown in table 12, the number of beds per grantee in our sample varied from 30 to over 1,400. Shelter locations in our sample also varied, from the larger "mega grantees" concentrated in the Southwest and Illinois to smaller grantees in New York and Virginia.
Appendix V: Comments from the Department of Homeland Security

Appendix VI: Comments from the Department of Health and Human Services

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kathryn Bernet (Assistant Director); Tracy Abdo; Katherine Blair; Chuck Bausell, Jr.; Frances Cook; Joseph E. Dewechter; Michele Fejfar; Eric Hauswirth; Paul Hobart; Susan Hsu; Catherine Hurley; Connor Kincaid; Sasan J. Najmi; and Janet Temko-Blinder made key contributions to this report.
From fiscal years 2009 through 2014, DHS apprehended more than 200,000 UAC, and the number of UAC apprehended in fiscal year 2014 (about 74,000) was more than four times the number apprehended in fiscal year 2011 (about 17,000). On the journey to the United States, many UAC have traveled thousands of miles under dangerous conditions. The Violence Against Women Reauthorization Act of 2013 included a provision for GAO to, among other things, review how DHS cares for UAC. This report examines, among other things, the extent to which DHS has developed policies and procedures to (1) screen all UAC as required and (2) care for all UAC as required. GAO reviewed TVPRA and other legal requirements, DHS policies for screening and caring for UAC, fiscal year 2009 through 2014 apprehension data on UAC, and 2014 Border Patrol UAC care data. GAO also randomly sampled and analyzed case files of Mexican UAC whom Border Patrol apprehended in fiscal year 2014. GAO interviewed DHS and HHS officials in Washington, D.C., and at Border Patrol and OFO facilities in Arizona, California, and Texas, selected on the basis of UAC apprehension data.

Within the Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP) has issued policies and procedures to evaluate, or screen, unaccompanied alien children (UAC)—those under 18 years old with no lawful immigration status and no parent or legal guardian in the United States available to provide care and physical custody—as required by the Trafficking Victims Protection Reauthorization Act of 2008 (TVPRA). However, CBP's Border Patrol agents and Office of Field Operations (OFO) officers who screen UAC have not consistently applied the required screening criteria or documented the rationales for decisions resulting from screening.
Specifically, under TVPRA, DHS is to transfer UAC to the Department of Health and Human Services (HHS), but may allow UAC from Canada and Mexico to return to their home countries, that is, to be repatriated, if DHS determines that UAC (1) are not victims of a severe form of trafficking in persons, (2) are not at risk of trafficking upon return, (3) do not have a fear of returning due to a credible fear of persecution, and (4) are able to make an independent decision about returning. GAO found that agents made inconsistent screening decisions, had varying levels of awareness about how they were to assess certain screening criteria, and did not consistently document the rationales for their decisions. For example, CBP policy states that UAC under age 14 are presumed generally unable to make an independent decision, but GAO's analysis of CBP data and a random sample of case files from fiscal year 2014 found that CBP repatriated about 93 percent of Mexican UAC under age 14 from fiscal years 2009 through 2014 without documenting the basis for decisions. Providing guidance on how CBP agents and officers are to assess UAC against the screening criteria could better position CBP to meet legal screening requirements, and ensuring that agents document the rationales for decisions would better position CBP to review the appropriateness of these decisions. DHS has policies in place to implement UAC care requirements, such as providing meals, and GAO's observations and interviews at 15 CBP facilities indicate that CBP generally provided care consistent with these policies at the time of GAO's visits. However, DHS does not collect complete and reliable data on care provided to UAC or the length of time UAC are in DHS custody.
GAO analyzed available data on care provided to nearly 56,000 UAC apprehended by Border Patrol in fiscal year 2014 and found that agents documented 14 of 20 possible care actions for fewer than half of the UAC (the remaining 6 actions were documented for more than 50 percent of the UAC). Also, OFO has a database to record UAC care, but officers at most ports of entry do not use it. Developing and implementing processes to help ensure agents and officers record UAC care actions would provide greater assurance that DHS is meeting its care and custody requirements. Further, the interagency process to refer and transfer UAC from DHS to HHS is inefficient and vulnerable to errors because it relies on e-mails and manual data entry, and documented standard procedures, including defined roles and responsibilities, do not exist. DHS and HHS have experienced errors, such as assigning a child to two shelters at once and holding an empty bed for 14 days at a shelter after HHS officials had placed the child elsewhere. Jointly developing a documented interagency process with defined roles and responsibilities could better position DHS and HHS to have a more efficient and effective process to refer, transfer, and place UAC in shelters.
Background

Three general types of Internet pharmacies sell prescription drugs directly to consumers. First, some Internet pharmacies operate much like traditional drugstores, selling a wide range of prescription drugs and requiring consumers to submit a prescription from their physicians before their orders are filled. In some instances, these Internet pharmacies are affiliated with traditional chain drug stores. Second, other Internet pharmacies may sell a more limited range of drugs, often specializing in certain lifestyle medications, such as those that treat sexual dysfunction or assist in weight control. These Internet pharmacies typically require consumers to fill out an online medical history questionnaire in place of a traditional examination by a physician, and issue a prescription after a physician affiliated with the pharmacy reviews the questionnaire. Third, some Internet pharmacies dispense drugs without a prescription. In the United States, the practice of pharmacy is regulated by state boards of pharmacy, which establish and enforce standards intended to protect the public. State boards of pharmacy also license pharmacists and pharmacies. To legally dispense a prescription drug, a licensed pharmacist working in a licensed pharmacy must be presented with a valid prescription from a licensed health care professional. The requirement that drugs be prescribed and dispensed by licensed professionals helps ensure patients receive the proper dose, take the medication correctly, and are informed about warnings, side effects, and other important information about the drug. Under the Federal Food, Drug, and Cosmetic Act (FDCA), as amended, FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported drugs. To do so, FDA establishes standards for the safety, effectiveness, and manufacture of drugs that must be met before they are approved for the U.S. market.
To gain approval, a drug manufacturer must demonstrate that a drug is safe and effective, and that the manufacturing methods and controls that will be used in the specific facility where it will be manufactured meet FDA standards. The same drug manufactured in another facility not approved by FDA—such as a foreign-made version of an approved drug—may not be sold legally in the United States. Drugs are subject to other statutory and regulatory standards relating to purity, labeling, manufacturing, and packaging. Failure to meet these standards could result in a drug being considered adulterated or misbranded and therefore illegal for sale, which could result in FDA enforcement action. The FDCA requires that drugs be dispensed with labels that include the name of the prescriber, directions for use, and cautionary statements, among other things. A drug is considered misbranded if its labeling or container is misleading, or if the label fails to include required information. Prescription drugs dispensed without a prescription are also considered misbranded. In addition, if a drug is susceptible to deterioration and must, for example, be maintained in a temperature-controlled environment, it must be packaged and labeled in accordance with regulations and manufacturer standards. Drugs must also be handled to prevent adulteration, which may occur, for example, if held under unsanitary conditions leading to possible contamination. FDA-approved drugs manufactured in foreign countries, including those sold over the Internet, are subject to the same requirements as domestic drugs. Further, imported drugs may be denied entry into the United States if they "appear" to be unapproved, adulterated, or misbranded, among other things. While the importation of such drugs may be illegal, FDA has allowed individuals to bring small quantities of certain drugs into the United States for personal use under certain circumstances. Internet pharmacies pose challenges for regulators.
State boards of pharmacy in many states have reported difficulty identifying Internet pharmacies located outside of their borders and have limited ability and authority to investigate and act against pharmacies that do not comply with state pharmacy laws when they are identified. In 2000, nearly half of the state boards had identified consumer complaints against Internet pharmacies or reported problems with Internet pharmacies not complying with state pharmacy laws. Additionally, state medical boards have reported receiving complaints about physicians prescribing drugs over the Internet without performing an examination of the patient. Federal agencies have taken steps to stop the illegal sales of prescription drugs and other substances by Internet pharmacies. For example, FDA has taken enforcement actions against Internet pharmacies; the Department of Justice has prosecuted Internet pharmacies and physicians for dispensing medications without a valid prescription; and DEA has investigated Internet pharmacies for illegal distribution of controlled substances.

Most of the Targeted Prescription Drugs Were Purchased from Multiple Internet Pharmacies Without Providing a Prescription

We were able to obtain the majority of prescription drugs we targeted for purchase from a wide variety of domestic and foreign Internet pharmacies without providing a prescription. Five U.S. and all 18 Canadian pharmacies from which we obtained drug samples required a patient-provided prescription, whereas the remaining 24 U.S. and all 21 other foreign pharmacies from which we obtained samples either provided a prescription based on an online medical questionnaire or had no prescription requirement. Although we obtained samples of most of the drugs we targeted for purchase, some drugs, such as those with special safety restrictions and narcotics, were available from fewer sources or were more difficult to obtain.
Samples of 11 of 13 Targeted Drugs Obtained from Internet Pharmacies

We obtained 1 or more samples of 11 of the 13 drugs we targeted, both with and without a patient-provided prescription. In total, we placed 90 orders—each with a different Internet pharmacy in the United States, Canada, and other foreign countries—and received 68 samples. Drug samples we received from other foreign pharmacies came from Argentina, Costa Rica, Fiji, India, Mexico, Pakistan, the Philippines, Spain, Thailand, and Turkey. Most of the drugs—45 of 68—were obtained without a patient-provided prescription. These included drugs for which physician supervision is of particular importance due to the possibility of severe side effects, such as Accutane, or the high potential for abuse and addiction, such as the narcotic painkiller hydrocodone. (See table 2.) Although most of the samples we received were obtained without a patient-provided prescription, prescription requirements varied. Five U.S. and all 18 Canadian pharmacies from which we obtained drug samples required the patient to provide a prescription. The remaining 24 U.S. pharmacies generally provided a prescription based on a general medical questionnaire filled out online by the patient. Questionnaires requested information on the patient's physical characteristics, medical history, and condition for which drugs were being purchased. Several pharmacy Web sites indicated that a U.S.-licensed physician reviews the completed questionnaire and issues a prescription. The other foreign Internet pharmacies we ordered from generally had no prescription requirements, and many did not seek information regarding the patient's medical history or condition. The process for obtaining a drug from many of these pharmacies involved only selecting the desired medication and submitting the necessary billing and shipping information. (See table 3.)
The Availability and Ease of Purchase Varied by Drug

While we obtained samples of most of the drugs we targeted for purchase on the Internet, certain drugs were more widely available and easier to purchase than others. The top-selling drugs Celebrex (a pain reliever), Lipitor (a cholesterol-lowering drug), Viagra (a medication for male sexual dysfunction), and Zoloft (an antidepressant) were available from multiple pharmacies. We placed 10 orders for each of these four drugs with little difficulty. Other drugs were available from fewer sources or were more difficult to obtain. Some of our orders for drugs with special safety restrictions were more closely scrutinized. For example, one order we placed for Accutane was declined by a U.S. pharmacy. Accutane is an acne medication that may cause birth defects and serious mental disturbances leading to suicide among some users. The pharmacy indicated that it declined our order because the physician was not included on a national registry of qualified prescribers. Similarly, one U.S. and one Canadian Internet pharmacy declined our order for Clozaril. According to its manufacturer, patients taking Clozaril, an antipsychotic medication, must have ongoing blood tests to monitor for the development of a fatal blood disorder that can occur during treatment. The U.S. pharmacy that declined our order indicated that Clozaril should not have been offered for sale on its Web site, and the Canadian pharmacy indicated that more stringent prescription requirements prevented it from dispensing the drug to patients outside of Canada. Narcotic pain medications—OxyContin, Percocet, and Vicodin—were also less readily available. Despite extensive searching of Internet pharmacy sites, we found few that sold these drugs without a prescription. Other factors also hindered our ability to purchase these drugs. For example, some pharmacies that advertised the narcotics did not actually sell them.
Rather, they attempted to substitute a different, often less potent and nonnarcotic drug once the order was placed. In addition, several pharmacies that offered narcotics required payment by methods outside the scope of our investigation, such as checks, bank transfers, or "e-gold" exchanges. We were able to place orders for the generic version of Vicodin at several U.S. pharmacies; however, some of these pharmacies required not only an online medical questionnaire, but also a telephone consultation with a pharmacy-designated physician in order to obtain a prescription. Finally, we were able to place only one order for a drug purporting to be OxyContin, and only after locating the source by paying a membership fee and joining an Internet pharmacy drug club, which referred us to the site.

Most Problems Identified among Drug Samples Received from Other Foreign Internet Pharmacies

We identified several problems associated with the handling, FDA-approval status, and authenticity of the 21 drug samples we received from other foreign Internet pharmacies. None included required pharmacy labels that provided patient instructions for use, and few provided warning information. Thirteen were shipped improperly, were packaged unconventionally, or arrived damaged. Manufacturers reported that most of the samples they reviewed at our request from other foreign pharmacies were not approved by FDA for the United States—although most had a comparable chemical composition to the product we ordered—and 4 were either counterfeit products or otherwise not comparable to the product we ordered. While most of the samples received from Canadian Internet pharmacies were unapproved for the U.S. market, they had a comparable chemical composition, and the samples from U.S. and Canadian pharmacies otherwise exhibited few problems. Table 4 summarizes the problems we identified among the 68 samples we received.
All Drug Samples Received from Other Foreign Pharmacies Exhibited Problems Associated with Their Handling

None of the 21 prescription drug samples we received from other foreign Internet pharmacies included a dispensing pharmacy label that provided patient instructions for use, and only 6 of the samples came with warning information. Lack of instructions and warnings on these drugs leaves consumers who take them at risk for potentially dangerous drug interactions or side effects from incorrect or inappropriate use. For example, we received 2 samples purporting to be Viagra, a drug used to treat male sexual dysfunction, without any warnings or instructions for use. (See fig. 1.) According to its manufacturer, this drug should not be prescribed for individuals who are currently taking certain heart medications, as it can lower blood pressure to dangerous levels. Additionally, 2 samples of Roaccutan, a foreign version of Accutane, arrived without any instructions in English. (See fig. 2.) As noted, possible side effects of this drug include birth defects and severe mental disturbances. Compounding the concerns regarding the lack of warnings and patient instructions for use, none of the other foreign pharmacies ensured patients were under the care of a physician by requiring that a prescription be submitted before the order was filled. We observed other evidence of improper handling among 13 of the 21 drug samples we received from other foreign Internet pharmacies. For example, three samples of Humulin N were not shipped in accordance with manufacturer handling specifications. Despite the requirement that this drug be stored under temperature-controlled and insulated conditions, the samples we received were shipped in envelopes without insulation. (See fig. 3.) Similarly, 6 samples of other drugs were shipped in unconventional packaging, in some instances with the apparent intention of concealing the actual contents of the package.
For example, the sample purporting to be OxyContin was shipped in a plastic compact disc case wrapped in brown packing tape—no other labels or instructions were included, and a sample of Crixivan was shipped inside a sealed aluminum can enclosed in a box labeled "Gold Dye and Stain Remover Wax." (See fig. 4.) Additionally, 5 samples we received were damaged and included tablets that arrived in punctured blister packs, potentially exposing pills to damaging light or moisture. (See fig. 5.) One drug manufacturer noted that damaged packaging may also compromise the validity of drug expiration dates.

Most Drug Samples Received from Other Foreign Pharmacies Were Unapproved, Four Were Not Authentic

Among the 21 drug samples from other foreign pharmacies, manufacturers determined that 19 were not approved for the U.S. market for various reasons, including that the labeling or the facilities in which they were manufactured had not been approved by FDA. For example, the manufacturer of one drug noted that 2 samples we received of that drug were packaged under an alternate name used for the Mexican market. The manufacturer of another drug found that 3 samples we received of that drug were manufactured at a facility unapproved to produce drugs for the U.S. market. In all but 4 instances, however, manufacturers determined that the chemical composition of the samples we received from other foreign Internet pharmacies was comparable to the chemical composition of the drugs we had ordered. Two samples of one drug were found by the manufacturer to be counterfeit and contained a different chemical composition than the drug we had ordered. In both instances the manufacturer reported that the samples contained less of the active ingredient, and the safety and efficacy of the samples could not be determined. Manufacturers also found 2 additional samples to have a significantly different chemical composition than that of the product we had ordered.

Drugs Received from Canadian and U.S. Internet Pharmacies Exhibited Fewer Problems

All 47 of the prescription drug samples we received from Canadian and U.S. Internet pharmacies included labels from the dispensing pharmacy that generally provided patient instructions for use, and 87 percent of these samples (41 of 47) included warning information. Furthermore, all samples were shipped in accordance with special handling requirements, where applicable, and arrived undamaged. Manufacturers reported that 16 of the 18 samples from Canadian Internet pharmacies were unapproved for sale in the United States, citing, for example, unapproved labeling and packaging. However, the samples were all found to be comparable in chemical composition to the products we ordered. Finally, the manufacturer found that 1 sample of a moisture-sensitive medication from a U.S. pharmacy had been inappropriately removed from the sealed manufacturer container and dispensed in a pharmacy bottle.

Some Internet Pharmacies Were Not Reliable in Their Business Practices

We observed questionable characteristics and business practices of some of the Internet pharmacies from which we received drugs. Most, but not all, involved other foreign pharmacies. These included pharmacies that accepted payment but did not provide the drugs ordered, shipments of drugs with questionable return addresses, pharmacies that obscured details about the drugs sold, and pharmacies that were under investigation by regulatory agencies. We ultimately did not receive six of the orders we placed and paid for, suggesting the potential fraudulent nature of some Internet pharmacies or entities representing themselves as such. The six orders were for Clozaril, Humulin N, and Vicodin, and cost over $700 in total. Five of these orders were placed with non-Canadian foreign pharmacies and one was placed with a pharmacy whose location we could not determine. We followed up with each pharmacy in late April and early May of 2004 to determine the status.
Three indicated they would reship the product, but as of June 10, 2004, we had not received the shipments. Three others did not respond to our inquiry. We determined that at least eight of the samples we received from other foreign Internet pharmacies listed return addresses that raise questions about the entities that provided them. For example, the return address provided on a sample of Lipitor was a shopping mall in Buenos Aires, Argentina. Authorities assisting us in locating this address found it impossible to identify which, if any, of the many retail stores mailed the package. The return address for a sample of Celebrex was found to be a business in Cozumel, Mexico, but representatives of that business informed authorities that it had no connection to an Internet pharmacy operation. Finally, the return addresses on samples of Humulin N and Zoloft were found to be private residences in Lahore, Pakistan. Certain practices of Internet pharmacies may make it difficult for consumers to know exactly what they are buying. Some non-Canadian foreign Internet pharmacies appeared to offer U.S. versions of brand name drugs on their Web sites, but attempted to substitute an alternative drug during the order process. In other cases, foreign pharmacies substituted alternative drugs after the order was placed. For example, one Internet pharmacy advertised brand name Accutane, which we ordered. The sample we received was actually a generic version of the drug made by an overseas manufacturer. About 21 percent of the Internet pharmacies from which we received drugs (14 of 68) were under investigation by regulatory agencies. The reasons for the investigations by DEA and FDA include allegations of selling controlled substances without a prescription; selling adulterated, misbranded, or counterfeit drugs; selling prescription drugs where no doctor-patient relationship exists; smuggling; and mail fraud.
The pharmacies under investigation were concentrated among the U.S. pharmacies that did not require a patient-provided prescription (9) and other foreign (4) pharmacies. One Canadian pharmacy was also included among those under investigation.

Concluding Observations

Consumers can readily obtain many prescription drugs over the Internet without providing a prescription—particularly from certain U.S. and foreign Internet pharmacies outside of Canada. Drugs available include those for which patients should be monitored for side effects or where the potential for abuse is high. For these types of drugs in particular, a prescription and physician supervision can help ensure patient safety. In addition to the lack of prescription requirements, some Internet pharmacies can pose other safety risks for consumers. Many foreign Internet pharmacies outside of Canada dispensed drugs without instructions for patient use, rarely provided warning information, and in four instances provided drugs that were not the authentic products we ordered. Consumers who purchase drugs from foreign Internet pharmacies that are outside of the U.S. regulatory framework may also receive drugs that are unapproved by FDA and manufactured in facilities that the agency has not inspected. Other risks consumers may face were highlighted by the other foreign Internet pharmacies that fraudulently billed us, provided drugs we did not order, and provided false or questionable return addresses. It is notable that we identified these numerous problems despite the relatively small number of drugs we purchased, consistent with problems recently identified by state and federal regulatory agencies.

Agency and External Comments

In commenting on a draft of this report, FDA generally agreed with our findings and conclusions and made suggestions to clarify or expand upon its contents (see app. II).
FDA commented that, while the draft report noted Internet pharmacy Web sites purported or appeared to be from various countries, the draft did not demonstrate that the drug samples we received were actually sent from those countries, such as by discussing return addresses and postmarks on the samples. FDA suggested we indicate the methods we used to determine the samples’ origins. We modified the report to indicate that we determined the location of the Internet pharmacy Web sites from which we received drug samples based on information contained in the pharmacy Web sites and the return addresses and postmarks on the packages we received. FDA also commented that our finding that certain unapproved drugs were chemically equivalent to the brand name products we ordered was misleading. FDA noted that chemical equivalence testing may not always determine whether a drug is comparable in all respects to the FDA-approved drug and therefore fully therapeutically equivalent. We relied on manufacturers to determine whether the drug samples we received were comparable to their own FDA-approved brand name version of the drug, and manufacturers conducted a range of tests to make this determination. Nevertheless, we modified the final report to note the potential limitations of chemical equivalence testing. FDA also made several observations about the practices of Internet pharmacies and provided technical comments, which we incorporated where appropriate. We also provided a draft of this report to DEA for technical comments and to ensure information we reported did not compromise its ongoing investigations. The agency responded that it had no comments. Finally, we provided segments of the draft report to the manufacturer of each drug sample we received. Each manufacturer reviewed the segments of the draft report relating to its own product(s), and provided technical comments, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce this report’s contents, we plan no further distribution until 30 days after its issue date. At that time, we will send copies to the Acting Commissioner of FDA, the Administrator of DEA, and others upon request. In addition, this report will be available at no charge at the GAO Web site at http://www.gao.gov. Please call Marcia Crosse at (202) 512-7119 or Robert Cramer at (202) 512-7455 if you have any questions. Another contact and other major contributors are listed in appendix I.

Comments from the Food and Drug Administration

GAO Contact and Staff Acknowledgments

Major contributors to this report were Margaret Smith, Corey Houchins-Witt, Andrew O’Connell, Ramon Rodriguez, Julian Klazkin, Helen Desaulniers, Robert Copeland, and Harold Lewis.
As the demand for and the cost of prescription drugs rise, many consumers have turned to the Internet to purchase drugs. However, the global nature of the Internet can hinder state and federal efforts to identify and regulate Internet pharmacies to help assure the safety and efficacy of products sold. Recent reports of unapproved and counterfeit drugs sold over the Internet have raised further concerns. GAO was asked to examine (1) the extent to which certain drugs can be purchased over the Internet without a prescription; (2) whether the drugs are handled properly, approved by the Food and Drug Administration (FDA), and authentic; and (3) the extent to which Internet pharmacies are reliable in their business practices. GAO attempted to purchase up to 10 samples of 13 different drugs, each from a different pharmacy Web site, including sites in the United States, Canada, and other foreign countries. GAO determined whether the samples contained a pharmacy label with patient instructions for use and warnings on the labels or the packaging and forwarded the samples to their manufacturers to determine whether they were approved by FDA and authentic. GAO also confirmed the locations of several Internet pharmacies and identified those under investigation by regulatory agencies. GAO obtained most of the prescription drugs it targeted from a variety of Internet pharmacy Web sites without providing a prescription. GAO obtained 68 samples of 11 different drugs--each from a different pharmacy Web site in the United States, Canada, or other foreign countries, including Argentina, Costa Rica, Fiji, India, Mexico, Pakistan, Philippines, Spain, Thailand, and Turkey. Five U.S. and all 18 Canadian pharmacy sites from which GAO received samples required a patient-provided prescription, whereas the remaining 24 U.S. and all 21 foreign pharmacy sites outside of Canada provided a prescription based on their own medical questionnaire or had no prescription requirement. 
Among the drugs GAO obtained without a prescription were those with special safety restrictions and highly addictive narcotic painkillers. GAO identified several problems associated with the handling, FDA approval status, and authenticity of the 21 samples received from Internet pharmacies located in foreign countries outside of Canada. Fewer problems were identified among pharmacies in Canada and the United States. None of the foreign pharmacies outside of Canada included the required dispensing pharmacy labels that provided instructions for use, few included warning information, and 13 displayed other problems associated with the handling of the drugs. For example, 3 samples of a drug that should be shipped in a temperature-controlled environment arrived in envelopes without insulation. Manufacturer testing revealed that most of these drug samples were unapproved for the U.S. market; however, manufacturers found the chemical composition of all but 4 was comparable to the product GAO ordered. Four samples were determined to be counterfeit products or otherwise not comparable to the product GAO ordered. Similar to the samples received from other foreign pharmacies, manufacturers found most of those from Canada to be unapproved for the U.S. market; however, manufacturers determined that the chemical composition of all drug samples obtained from Canada was comparable to the product GAO ordered. Some Internet pharmacies were not reliable in their business practices. Most instances identified involved pharmacies outside of the United States and Canada. GAO did not receive six orders for which it had paid. In addition, GAO found questionable entities located at the return addresses on the packaging of several samples, such as private residences.
Finally, 14 of the 68 pharmacy Web sites from which GAO obtained samples were found to be under investigation by regulatory agencies for reasons including selling counterfeit drugs and providing prescription drugs where no valid doctor-patient relationship exists. Nine of these were U.S. sites, 1 was a Canadian site, and 4 were other foreign Internet pharmacy sites. In commenting on a draft of this report, FDA generally agreed with its findings and conclusions.
Background

VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring they receive medical care, benefits, social support, and lasting memorials. The information technology programs that I will be discussing today are primary concerns of two of VA’s major components: the Veterans Health Administration, which manages one of the largest health care systems in the United States, with 157 hospitals nationwide, and the Veterans Benefits Administration, which provides benefits and services to veterans and their dependents that include compensation and pension, education, loan guaranty, and insurance.

VA and DOD Have Been Working on Electronic Medical Records Since 1998

In 1998, following a presidential call for VA and DOD to start developing a “comprehensive, life-long medical record for each service member,” the two departments began a joint course of action aimed at achieving the capability to share patient health information for active duty military personnel and veterans. Their first initiative, undertaken in that year, was known as the Government Computer-Based Patient Record (GCPR) project; the goal of this project was an electronic interface that would allow physicians and other authorized users at VA and DOD health facilities to access data from any of the other agency’s health information systems. The interface was expected to compile requested patient information in a virtual record that could be displayed on a user’s computer screen. In our reviews of the GCPR project, we determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. In April 2001 and in June 2002, we made recommendations to help strengthen the management and oversight of the project.
In 2001, we recommended that the participating agencies (1) designate a lead entity with final decision-making authority and establish a clear line of authority for the GCPR project and (2) create comprehensive and coordinated plans that included an agreed-upon mission and clear goals, objectives, and performance measures, to ensure that the agencies could share comprehensive, meaningful, accurate, and secure patient health care data. In 2002, we recommended that the participating agencies revise the original goals and objectives of the project to align with their current strategy, commit the executive support necessary to adequately manage the project, and ensure that it followed sound project management principles. VA and DOD took specific measures in response to our recommendations for enhancing overall management and accountability of the project. By July 2002, VA and DOD had revised their strategy and had made progress toward being able to electronically share patient health data. The two departments had refocused the project and named it the Federal Health Information Exchange (FHIE) program and, consistent with our prior recommendation, had finalized a memorandum of agreement designating VA as the lead entity for implementing the program. This agreement also established FHIE as a joint activity that would allow the transfer from DOD to VA of health care information in two phases: ● The first phase, completed in mid-July 2002, enabled the one-way transfer of data from DOD’s existing health information system (the Composite Health Care System or CHCS) to a separate database that VA clinicians could access. ● A second phase, finalized in March 2004, completed VA’s and DOD’s efforts to add to the base of patient health information available to VA clinicians via this one-way sharing capability. 
According to the December 2004 VA/DOD Joint Executive Council Annual Report, FHIE was fully operational, and providers at all VA medical centers and clinics nationwide had access to data on separated service members. According to the report, the FHIE data repository at that time contained historical clinical health data on 2.3 million unique patients from 1989 on, and the repository made a significant contribution to the delivery and continuity of care and adjudication of disability claims of separated service members as they transitioned to veteran status. The departments reported total GCPR/FHIE costs of about $85 million through fiscal year 2003. In addition, officials stated that in December 2004, the departments began to plan for using the FHIE framework to transfer pre- and postdeployment health assessment data from DOD to VA. According to these officials, transferring of this information began in July 2005, and VA has now received about 1.3 million of these records on more than 560,000 separated service members. However, not all DOD medical information is captured in CHCS. For example, according to DOD officials, as of September 2005, 1.7 million patient stay records were stored in the Clinical Information System (a commercial product customized for DOD). In addition, many Air Force facilities use a system called the Integrated Clinical Database for their medical information. The revised DOD/VA strategy also envisioned achieving a longer term, two-way exchange of health information between DOD and VA, which may also address systems outside of CHCS. Known as HealthePeople (Federal), this initiative is premised on the departments’ development of a common health information architecture comprising standardized data, communications, security, and high-performance health information systems. 
The joint effort is expected to result in the secured sharing of health information between the new systems that each department is currently developing and beginning to implement—DOD’s AHLTA and VA’s HealtheVet VistA. DOD began developing AHLTA in 1997; at that time it was known as CHCS II, and in November 2005 DOD renamed it the Armed Forces Health Longitudinal Technology Application (AHLTA). DOD has begun deploying the system, including the component for the planned electronic interface—its Clinical Data Repository—and it expects to complete deployment of all of its major system capabilities by 2011. (When we reported in June 2004, this deployment was expected in September 2008.) DOD expects to spend about $783 million for the system through fiscal year 2006. VA has likewise reported spending on the initiatives that comprise HealtheVet VistA through fiscal year 2005. Under the HealthePeople (Federal) initiative, VA and DOD envision that, on entering military service, a health record for the service member would be created and stored in DOD’s Clinical Data Repository. The record would be updated as the service member receives medical care. When the individual separated from active duty and, if eligible, sought medical care at a VA facility, VA would then create a medical record for the individual, which would be stored in its Health Data Repository. On viewing the medical record, the VA clinician would be alerted and provided with access to the individual’s clinical information residing in DOD’s repository. In the same manner, when a veteran sought medical care at a military treatment facility, the attending DOD clinician would be alerted and provided with access to the health information in VA’s repository. According to the departments, this planned approach would make virtual medical records displaying all available patient health information from the two repositories accessible to both departments’ clinicians.
Developing the two repositories, populating them with data, and linking them through the CHDR interface would be important steps toward the two departments’ long-term goals as envisioned in HealthePeople (Federal). Achieving these goals would then depend on completing the development and deployment of the associated health information systems—HealtheVet VistA and AHLTA. In a review of the CHDR program in June 2004, we reported that the efforts of DOD and VA in this area demonstrated a number of management weaknesses. Among these were the lack of a well-defined architecture for describing the interface for a common health information exchange; an established project management lead entity and structure to guide the investment in the interface and its implementation; and a project management plan defining the technical and managerial processes necessary to satisfy project requirements. With these critical components missing, VA and DOD increased the risk that they would not achieve their goals. Accordingly, we recommended that the departments ● develop an architecture for the electronic interface between their health systems that includes system requirements, design specifications, and software descriptions; ● select a lead entity with final decision-making authority for the interface and establish a project management structure to provide day-to-day guidance of and accountability for their investments in and implementation of the interface capability; and ● create and implement a comprehensive and coordinated project management plan for the electronic interface that defines the technical and managerial processes necessary to satisfy project requirements and includes (1) the authority and responsibility of each organizational unit; (2) a work breakdown structure for all of the tasks to be performed in developing, testing, and implementing the software, along with schedules associated with the tasks; and (3) a security policy.
In September 2005, we testified that VA and DOD had made progress in the electronic sharing of patient health data in their near-term demonstration projects. We noted that with regard to their long-term goals, the departments had improved the management of the CHDR program, but that this program continued to face significant challenges—in particular, developing a project management plan of sufficient specificity to be an effective guide for the program. Besides pursuing their long-term goals for future systems through the HealthePeople (Federal) strategy, the departments are working on two demonstration projects that focus on exchanging information between existing systems: (1) Bidirectional Health Information Exchange, a project to exchange health information on shared patients, and (2) Laboratory Data Sharing Interface, an application used to transfer laboratory work orders and results. These demonstration projects were planned in response to provisions of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, which mandated that VA and DOD conduct demonstration projects that included medical information and information technology systems to be used as a test for evaluating the feasibility, advantages, and disadvantages of measures and programs designed to improve the sharing and coordination of health care and health care resources between the departments. Figure 1 is a time line showing initiation points for the VA and DOD efforts discussed here, including strategies, major programs, and the recent demonstration projects.

Work on VETSNET Dates to 1986

The VETSNET effort grew out of an initiative begun by the Veterans Benefits Administration (VBA) in 1986 to replace its outdated Benefits Delivery Network (BDN).
The Benefits Delivery Network, parts of which were developed in the 1960s, contains over 3 million veterans benefits records, including compensation and pension, education, and vocational rehabilitation and employment. Originally, the plan was to modernize all of these systems and in so doing provide a rich source for answering questions about veterans’ benefits and enable faster processing of benefits. As envisioned in the 1980s, the modernization would produce a faster, more flexible, higher capacity system that would be both an information system and a payment system. In 1996, after experiencing numerous false starts and spending approximately $300 million on the overall modernization of BDN, VBA revised its strategy and narrowed its focus to modernizing the compensation and pension payment system. At that time, we undertook an assessment of the department’s software development capability and determined that it was immature. In our assessment, we specifically examined the VETSNET effort and concluded that VBA could not reliably develop and maintain high-quality software on any major project within existing cost and schedule constraints. VBA showed significant weaknesses in requirements management, software project planning, and software subcontract management, with no identifiable strengths. We also testified that (1) VBA did not follow sound systems development practices on VETSNET, such as validation and verification of systems requirements; (2) it employed for the project a new systems development methodology and software development language not previously used; and (3) it did not develop the cost-benefit information necessary to track progress or assess return on investment (for example, total software to be developed and cost estimates). As a result, we concluded that VBA’s modernization efforts had inherent risks. Between 1996 and 2002 we reported several more times on VETSNET, highlighting concerns in several areas. 
(See attachment 1 for a description of the conclusions and findings of our products on this topic.) In these products, we made several recommendations aimed at improving VA’s software development capabilities, including that the department take steps to achieve greater maturity in its software development processes and that it delay any major investment in software development (beyond that needed to sustain critical day-to-day operations) until it had done so. In addition, we made recommendations aimed specifically at VETSNET development, including that VBA assess and validate users’ requirements for the new system; complete testing of the system’s functional business capability, as well as end-to-end testing to ensure that payments are made accurately; and establish an integrated project plan to guide its transition from the old to the new system. Although VBA took various actions in response to these recommendations, we continued to identify the department’s weak software development capability as a significant factor contributing to VBA’s persistent problems in developing and implementing the system—the same condition that we identified in 1996. We also reported that VBA continued to work on VETSNET without an integrated project plan. As a result, the development of VETSNET continued to suffer from problems in several areas, including project management, requirements development, and testing.

VA and DOD Are Working to Share Medical Information

VA and DOD have made progress in sharing patient health data by implementing applications developed under two demonstration projects that focus on the exchange of electronic medical information. The first—the Bidirectional Health Information Exchange—has been implemented at 16 VA/DOD locations, and the second—Laboratory Data Sharing Interface—has been implemented at 6 VA/DOD locations.

Bidirectional Health Information Exchange.
According to a VA/DOD annual report and program officials, Bidirectional Health Information Exchange (BHIE) is an interim step in the departments’ overall strategy to create a two-way exchange of electronic medical records. BHIE builds on the architecture and framework of FHIE, the application used to transfer health data on separated service members from DOD to VA. As discussed earlier, FHIE provides an interface between VA’s and DOD’s existing health information systems that allows one-way transfers only, which do not occur in real time: VA clinicians do not have access to transferred information until about 6 weeks after separation. In contrast, BHIE focuses on the two-way, near-real-time exchange of information (text only) on shared patients (such as those at sites jointly occupied by VA and DOD facilities). This application exchanges data between VA’s VistA system and DOD’s CHCS system (and AHLTA where implemented). As of September 2005, the departments reported having spent $2.6 million on BHIE. The primary benefit of BHIE is near-real-time access to patient medical information for both VA and DOD, which is not available through FHIE. During a site visit to a VA and DOD location in Puget Sound in 2005, we viewed a demonstration of this capability and were told by a VA clinician that the near-real-time access to medical information was very beneficial in treating shared patients. As of June 2006, BHIE was deployed at VA and DOD facilities at 16 sites, where the exchange of demographic, outpatient pharmacy, radiology, laboratory, and allergy data (text only) has been achieved. In addition, according to officials, over 120 outpatient military clinics associated with these sites also have access to this information through BHIE. According to VA and DOD, BHIE will be implemented at two more sites in July 2006. Table 1 presents a schedule for implementation of BHIE; the sites listed are all DOD sites with nearby VA facilities. 
Additionally, because DOD stores electronic medical information in systems other than CHCS (such as the Clinical Information System and the Integrated Clinical Database), work is currently under way to enable BHIE to exchange information with those systems. Currently, one site is testing the use of BHIE as an interface allowing both departments’ staff to view discharge summaries stored in the Clinical Information System. DOD and VA plan to perform a side-by-side comparison to ensure that this capability maintains data quality. When they are satisfied, the capability will be provided to those DOD locations that currently use the Clinical Information System and have BHIE implemented. Doing so will permit all VA sites access to the information in the Clinical Information System on shared patients at DOD sites running BHIE. In addition, at the VA/DOD site in El Paso, a prototype is being designed for exchanging radiological images using the BHIE/FHIE infrastructure. If the prototype is successful, this capability will be extended to the rest of the sites.

Laboratory Data Sharing Interface.

The Laboratory Data Sharing Interface (LDSI) initiative enables the two departments to share laboratory resources. Through LDSI, a VA provider can use VA’s health information system to write an order for laboratory tests, and that order is electronically transferred to DOD, which performs the test. The results of the laboratory tests are electronically transferred back to VA and included in the patient’s medical record. Similarly, a DOD provider can choose to use a VA lab for testing and receive the results electronically. Once LDSI is fully implemented at a facility, the only nonautomated action in performing laboratory tests is the transport of the specimens. Among the benefits of LDSI are increased speed in receiving laboratory results and decreased errors from manual entry of orders.
However, according to the LDSI project manager in San Antonio, a primary benefit of the project will be the time saved by eliminating the need to rekey orders at processing labs to input the information into the laboratories’ systems. Additionally, the San Antonio VA facility will no longer have to contract out some of its laboratory work to private companies, but can instead use the DOD laboratory. As of September 2005, the departments reported having spent about $3.3 million on LDSI. An early version of what is now LDSI was originally tested and implemented at a joint VA and DOD medical facility in Hawaii in May 2003. The demonstration project built on this application and enhanced it; the resulting application was tested in San Antonio and El Paso. It has now been deployed to six sites. According to the departments, a plan to export LDSI to two additional locations has been approved. Table 2 shows the locations at which it has been or is to be implemented.

VA and DOD Are Taking Action to Achieve a Virtual Medical Record, but Much Work Remains

Besides the near-term initiatives just discussed, VA and DOD continue their efforts on the longer term goal: to achieve a virtual medical record based on the two-way exchange of computable data between the health information systems that each is currently developing. The cornerstone for this exchange is CHDR, the planned electronic interface between the data repositories for the new systems. The departments have taken important actions on the CHDR initiative. As we testified in September 2005, they successfully completed Phase I of CHDR in September 2004 by demonstrating the two-way exchange of pharmacy information with a prototype in a controlled laboratory environment. According to department officials, the pharmacy prototype provided invaluable insight into each other’s data repository systems, architecture, and the work that is necessary to support the exchange of computable information.
These officials stated that lessons learned from the development of the prototype were documented and being applied to Phase II of CHDR, the production phase, which is to implement the two-way exchange of patient health records between the departments’ data repositories. Further, the same DOD and VA teams that developed the prototype were developing the production version. In addition, the departments developed an architecture for the CHDR electronic interface, as we recommended in June 2004. The architecture for CHDR includes major elements required in a complete architecture. For example, it defines system requirements and allows these to be traced to the functional requirements, it includes the design and control specifications for the interface design, and it includes design descriptions for the software. Also in response to our recommendations, the departments established project accountability and implemented a joint project management structure. Specifically, the Health Executive Council was established as the lead entity for the project. The joint project management structure consists of a Program Manager from VA and a Deputy Program Manager from DOD to provide day-to-day guidance for this initiative. Additionally, the Health Executive Council established the DOD/VA Information Management/Information Technology Working Group and the DOD/VA Health Architecture Interagency Group, to provide programmatic oversight and to facilitate interagency collaboration on sharing initiatives between DOD and VA. To build on these actions and successfully carry out the CHDR initiative, however, the departments still have a number of challenges to overcome. The success of CHDR will depend on the departments’ instituting a highly disciplined approach to the project’s management. 
Industry best practices and information technology project management principles stress the importance of accountability and sound planning for any project, particularly an interagency effort of the magnitude and complexity of this one. Accordingly, in 2004 we recommended that the departments develop a clearly defined project management plan that describes the technical and managerial processes necessary to satisfy project requirements and includes (1) the authority and responsibility of each organizational unit; (2) a work breakdown structure for all of the tasks to be performed in developing, testing, and implementing the software, along with schedules associated with the tasks; and (3) a security policy. As of September 2005, the departments had an interagency project management plan that provided the program management principles and procedures to be followed by the project. However, this plan did not specify the authority and responsibility of organizational units for particular tasks; the work breakdown structure was at a high level and lacked detail on specific tasks and time frames; and security policy was still being drafted. No more recent plan has yet been provided. Without a plan of sufficient detail, VA and DOD increase the risk that the CHDR project will not deliver the planned capabilities in the time and at the cost expected. In addition, officials did not meet a previously established milestone: by October 2005, the departments had planned to be able to exchange outpatient pharmacy data, laboratory results, allergy information, and patient demographic information on a limited basis. However, according to officials, the work required to implement standards for pharmacy and medication allergy data was more complex than originally anticipated and would result in a delay. The new target date for the limited exchange of medication allergy, outpatient pharmacy, and patient demographic data has been postponed from February to June 2006. 
Currently, the departments report that they are close to finishing the development of a pilot to perform this data exchange at their joint facility in El Paso. They expect to be able to begin the pilot by the end of this month, which will allow them to share outpatient pharmacy and medication allergy information that can support drug-drug interaction checking and drug-allergy alerts. If the pilot is successful, it will enable for the first time the exchange of computable information between the departments’ two data repositories. Finally, the health information currently in the data repositories has various limitations. ● Although DOD’s Clinical Data Repository includes data in the categories that were to be exchanged at the missed milestone described above (outpatient pharmacy data, laboratory results, allergy information, and patient demographic information), these data are not yet complete. First, the information in the Clinical Data Repository is limited to those locations that have implemented the first increment of AHLTA, DOD’s new health information system. As of June 15, 2006, according to DOD officials, 115 of 138 medical treatment facilities worldwide have implemented this increment, and officials expect that the remaining facilities will receive the increment by the end of this year. Second, at present, health information in systems other than CHCS (such as the Clinical Information System and the Integrated Clinical Database) is not yet being captured in the Clinical Data Repository. However, work is currently under way to allow BHIE to exchange information with those systems. ● The information in VA’s Health Data Repository is also limited: although all VA medical records are currently electronic, VA has to convert these into the interoperable format appropriate for the Health Data Repository.
So far, the data in the Health Data Repository consist of patient demographics, vital signs records, allergy data, and outpatient pharmacy data for the 6 million veterans who have electronic medical records in VA’s current system, VistA (this system contains all the department’s medical records in electronic form). VA officials told us that they are currently converting lab results data. VA Has Been Severely Challenged by VETSNET Project Since its inception, the VETSNET program has been plagued by problems. In 2002, we offered a number of recommendations regarding the ongoing compensation and pension (C&P) replacement program. We testified that VBA should assess and validate users’ requirements for the new system and complete testing of the system’s functional business capability, including end-to-end testing. We also recommended that VA appoint a project manager, thoroughly analyze its current initiative, and develop a number of plans, including a revised C&P replacement strategy and an integrated project plan. We also noted that VBA had much work to do before it could fully implement the VETSNET C&P system by its target date (at that time) of 2005, and thus it would have to ensure that the aging Benefits Delivery Network (BDN) would be available to continue accurately processing benefits payments until a new system could be deployed. Accordingly, we recommended that VBA develop action plans to move from the current to the replacement system and to ensure the availability of BDN to provide the more than 3.5 million payments made to veterans each month. VA concurred with our recommendations and took several actions to address them. For example, it appointed a full-time project manager. Also, the project team reported that, to ensure that business needs were met, certification of users’ requirements for the system’s applications had been completed. In addition, VA reported that a revised strategy for the replacement system was completed.
This revised strategy included the business case, described the methodology used to identify system development alternatives, displayed the cost/benefit analysis results of the viable alternatives that could be used to develop the system, and provided a description of the recommended development plan. Based on this strategy, the Secretary of Veterans Affairs, the Assistant Secretary for Information and Technology, the Under Secretary for Benefits, and the Deputy Chief Information Officer for Benefits approved continuation of the VETSNET development in September 2002. Further, to ensure that the Benefits Delivery Network would be able to continue accurately processing benefits payments until the new system was deployed, VBA purchased additional BDN hardware, hired 11 new staff members to support BDN operations, successfully tested a contingency plan in the event of disruption of the system, and provided retention bonuses to staff familiar with BDN operations. However, VBA did not develop an integrated project plan for VETSNET, which is a basic requirement of sound project management. In addition, it did not develop an action plan for transitioning from the current to the replacement system. Thus, although the actions taken addressed some of our specific concerns, they were not sufficient to establish the program on a sound footing. In 2005, the VA CIO became concerned by continuing problems with VETSNET: the project continued to postpone target dates, and costs continued to increase (VA indicated that by 2005 these costs exceeded $69 million). Accordingly, he arranged to contract for an independent assessment of the department’s options for the VETSNET project, including an evaluation of whether the program should be terminated. This assessment, conducted by the Carnegie Mellon Software Engineering Institute (SEI), concluded that the program faced many risks arising from management, organizational, and program issues, but no technical barriers that could not be overcome.
According to SEI, terminating the program would not solve the underlying management and organizational problems, which would continue to hamper any new or revised effort. SEI recommended that the department not terminate the program but take an aggressive approach to dealing with the issues SEI described while continuing to work on the program at a reduced pace. According to SEI, this approach would allow VA to make necessary improvements to its system and software engineering and program management capabilities while making gradual progress on the system. SEI also discussed specific concerns about the system’s management and the organization’s capabilities, presenting areas that required focus regardless of the particular course that VA chose for the system. For example: ● Setting realistic deadlines. SEI commented that there was no credible evidence that VETSNET would be complete by the target date, which at the time of the SEI review was December 2006. Because this deadline was unrealistic, VBA needed to plan and budget for supporting BDN so that its ability to pay veterans benefits would not be disrupted. ● Establishing an effective requirements process. ● Implementing effective program measurements in order to assess progress. ● Establishing sound program management. According to SEI, different organizational components had independent schedules and priorities, which caused confusion and deprived the department of a program perspective. These observations are consistent with our long-standing concerns regarding fundamental deficiencies in VBA’s management of the project. In the wake of the SEI assessment and recommendations, VA is in the process of creating, with contract help, an integrated master plan that is to cover the C&P replacement project. Because this plan is in process, no cost or schedule milestones have yet been finalized. According to VA, the integrated master plan is to be completed by the end of August 2006. 
VA officials told us that they intend to complete this plan before beginning to plan for modernizing the systems for paying education benefits or for paying vocational rehabilitation and employment benefits. Plans for making the transition to VETSNET and ending VBA’s dependence on BDN are also on hold. Thus, VA still lacks an integrated project plan or a plan to move from the current to the replacement system. Until it has an integrated project plan and schedule incorporating all the critical areas of the system development effort, VBA will lack the means of determining what needs to be done and when, and of measuring progress. Without plans to move from the current to the replacement system, VBA will lack assurance that it can continue to pay beneficiaries accurately and on time through the transition period. In summary, developing an electronic interface that will enable VA and DOD to exchange computable patient medical records is a highly complex undertaking that could lead to substantial benefits: improving the quality of health care and disability claims processing for the nation’s service members and veterans. VA and DOD have made progress in the electronic sharing of patient health data in their limited, near-term demonstration projects, and have taken an important step toward their long-term goals by improving the management of the CHDR program. However, the departments face considerable work and significant challenges before they can achieve these long-term goals. While the departments have made progress in developing a project management plan, it is not yet complete. Having a project management plan of sufficient specificity to guide the program, including establishing accountability and addressing security, would help the departments avoid further delays in their schedule and ensure that they produce a capability that meets their expectations.
VA has also been working to modernize the delivery of benefits through its development of VETSNET, but the pace of progress has been discouraging. Much work remains in accomplishing the original comprehensive goal of modernizing the aging system that VBA currently depends on to pay veterans benefits. Until VBA develops an integrated project plan that addresses the long-standing management weaknesses that we and others have identified, it will be uncertain when and at what cost VETSNET will be delivered. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. Contacts and Acknowledgments For information about this testimony, please contact Linda D. Koontz, Director, Information Management Issues, at (202) 512-6240 or at koontzl@gao.gov. Other individuals making key contributions to this testimony include Barbara S. Collier, Martin Katz, Barbara S. Oliver, Eric L. Trout, Robert Williams Jr., and Charles Youman. Attachment 1. Past GAO Products Highlighting VETSNET Concerns We previously performed several reviews addressing VETSNET and made numerous recommendations aimed at strengthening the program and VA’s software development and management capabilities. The table summarizes the results of these reviews. GAO Products Highlighting Concerns with VETSNET Project to Replace Compensation and Pension (C&P) Payment System VETSNET had inherent risks in that (1) it did not follow sound systems development practices, such as validation and verification of systems requirements; (2) it employed a new systems development methodology and software development language not previously used; and (3) VBA did not develop the cost-benefit information necessary to track progress or assess return on investment (for example, total software to be developed and cost estimates).
VBA’s software development capability was immature and it could not reliably develop and maintain high-quality software on any major project within existing cost and schedule constraints, placing its software development projects at significant risk. VBA showed significant weaknesses in requirements management, software project planning, and software subcontract management, with no identifiable strengths. VETSNET experienced schedule delays and missed deadlines because (1) it employed a new software development language not previously used by the development team, one that was inconsistent with the agency’s other systems development efforts; (2) the department’s software development capability was immature and it had lost critical systems control and quality assurance personnel; and (3) VBA lacked a complete systems architecture; for example, neither a security architecture nor performance characteristics had been defined for the project. VBA’s software development capability remained ad hoc and chaotic, subjecting the agency to continuing risk of cost overruns, poor quality software, and schedule delays in software development. $11 million had reportedly been spent on VETSNET C&P; neither the May 1998 completion date nor the revised completion date of December 1998 was met. Contributing factors included lack of an integrated architecture defining the business processes, information flows and relationships, business requirements, and data descriptions, and VBA’s immature software development capability. VBA’s software development capability remained ad hoc and chaotic. The VETSNET implementation approach lacked key elements, including a strategy for data conversion and an integrated project plan and schedule incorporating all critical systems development areas. Further, data exchange issues had not been fully addressed. The project’s viability was still a concern.
It continued to lack an integrated project plan and schedule addressing all critical systems development areas, to be used as a means of determining what needs to be done and when. A pilot test of 10 original claims that did not require significant development work may not have been sufficient to demonstrate that the product was capable of working as intended in an organizationwide operational setting. VBA still had fundamental tasks to accomplish before it could successfully complete development and implementation. It still had to assess and validate users’ requirements for the new system to ensure that business needs were met. It needed to complete testing of the system’s functional business capability, as well as end-to-end testing to ensure that payments would be made accurately. Finally, it needed to establish an integrated project plan to guide its transition from the old to the new system. VA still needed to address long-standing concerns regarding development and implementation. VA needed to appoint a project manager, undertake a complete analysis of the initiative, and develop plans, including a revised C&P replacement system strategy and an integrated project plan. It also needed to develop and implement action plans to move VBA from the current to the replacement system and to ensure that the Benefits Delivery Network would be able to continue accurately processing benefits payments until the new system was deployed. Much work remained before VBA could fully implement the VETSNET C&P system, and complete implementation was not expected until 2005. This meant that VBA had to continue relying on its aging Benefits Delivery Network to provide the more than 3.5 million payments that VA had to make to veterans each month. In late March, a VETSNET executive board and a project control board were established to provide decision support and oversee implementation, and VBA expected to hire a full-time project manager by the end of September. 
VBA also began revalidating functional business requirements for the new system, with completion planned by January 2003, and it identified actions needed to transition VBA from the current to the replacement system. VBA also hired a contractor and tasked the contractor with conducting functional, integration, and linkage testing, as well as software quality assurance for each release of the system applications. Despite these actions, completing implementation of the new system could take several years. All but one of the software applications for the new system still needed to be fully deployed or developed. Specifically, a rating board automation tool (RBA 2000) was deployed, although VBA did not plan to require all its regional offices to use it until July 2003. In addition, two others had not been completely deployed: one of these (Share, used to establish a new claim) was in use by only 6 of the 57 regional offices. The other (Modern Award Processing–Development, used to develop information on claims) was in pilot testing at two regional offices—Salt Lake and Little Rock—but was not expected to be implemented at the other 55 regional offices until October 2003. The remaining three software applications (Award Processing, Finance and Accounting System, and Correspondence) were still in development.
The Department of Veterans Affairs (VA) is engaged in an ongoing effort to share electronic medical information with the Department of Defense (DOD), which is important in helping to ensure high-quality health care for active duty military personnel and veterans. Also important, in the face of current military responses to national and foreign crises, is ensuring effective and efficient delivery of veterans' benefits, which is the focus of VA's development of the Veterans Service Network (VETSNET), a modernized system to support benefits payment processes. GAO is testifying on (1) VA's efforts to exchange medical information with DOD, including both near-term initiatives involving existing systems and the longer term program to exchange data between the departments' new health information systems, and (2) VA's ongoing project to develop VETSNET. To develop this testimony, GAO relied on its previous work and followed up on agency actions to respond to GAO recommendations. VA and DOD are implementing near-term demonstration projects that exchange limited electronic medical information between their existing systems, and they are making progress in their longer term effort to share information between the new health information systems that each is developing. Two demonstration projects have been implemented at selected sites: (1) a project to achieve the two-way exchange of health information on patients who receive care from both departments and (2) an application to electronically transfer laboratory work orders and results. According to VA and DOD, these projects have enabled lower costs and improved service to patients by saving time and avoiding errors. In their longer term effort, VA and DOD have made progress, in response to earlier GAO recommendations, by designating a lead entity with final decision-making authority and establishing a project management structure. 
However, VA and DOD have not yet developed a clearly defined project management plan that gives a detailed description of the technical and managerial processes necessary to satisfy project requirements, as GAO previously recommended. Moreover, the departments have experienced delays in their efforts to begin exchanging patient health data; they have not yet fully populated the repositories that will store the data for their future health systems. As a result, much work remains to be done before the departments achieve their ultimate goal of sharing virtual medical records. VA has also been working to modernize the delivery of benefits through its development of VETSNET, but the pace of progress has been discouraging. Originally initiated in 1986, this program was prompted by the need to modernize VA's Benefits Delivery Network--parts of which are now 40-year-old technology--on which the department relies to make benefits payments, including compensation and pension, education, and vocational rehabilitation and employment. In 1996, after experiencing numerous false starts and spending approximately $300 million, VBA revised its strategy and narrowed its focus to modernizing the compensation and pension system. In earlier reviews, GAO has made numerous recommendations to improve the program's management, including the development of an integrated project plan. In response to GAO's recommendations as well as those of an independent evaluator, VA is now developing an integrated master plan for the compensation and pension system, which it intends to complete in August. Until VA addresses the managerial and program weaknesses that have hampered the program, it is uncertain when VA will be able to end its reliance on its aging benefits technology.
Background Military servicemembers, federal workers, and industry personnel must generally obtain security clearances to gain access to classified information. The three clearance levels are top secret, secret, and confidential. The level of classification denotes the degree of protection required for the information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security: “exceptionally grave damage” for top secret information, “serious damage” for secret information, and “damage” for confidential information. DOD’s Office of the Under Secretary of Defense for Intelligence has responsibility for determining eligibility for clearances for servicemembers, DOD civilian employees, and industry personnel performing work for DOD and 23 other federal agencies, as well as employees in the federal legislative branch. That responsibility includes obtaining background investigations, primarily through OPM. Within DOD, government employees use the information in OPM-provided investigative reports to determine clearance eligibility of clearance subjects. DOD’s program maintains approximately 2.5 million clearances. Although our high-risk designation covers only DOD’s program, our reports have documented clearance-related problems affecting other agencies such as the Department of Homeland Security (DHS). For example, our October 2007 report on state and local information fusion centers cited two clearance-related challenges: (1) the length of time needed for state and local officials to receive clearances from the Federal Bureau of Investigation (FBI) and DHS, and (2) the reluctance of some federal agencies—particularly DHS and FBI—to accept clearances issued by other agencies (i.e., clearance reciprocity).
Similarly, our April 2007 testimony on maritime security and selected aspects of the Security and Accountability for Every Port Act of 2006 (SAFE Port Act) identified the challenge of obtaining clearances so that port security stakeholders could share information through area committees or interagency operational centers. The SAFE Port Act includes a specific provision requiring the Secretary of Homeland Security to sponsor and expedite individuals participating in interagency operational centers in gaining or maintaining their security clearances. Recent events affecting clearance programs across the federal government include the passage of the Intelligence Reform and Terrorism Prevention Act (IRTPA) of 2004 and the issuance of the June 2005 Executive Order 13381, “Strengthening Processes Relating to Determining Eligibility for Access to Classified National Security Information.” IRTPA included milestones for reducing the time to complete clearances, general specifications for a database on security clearances, and requirements for reciprocity of clearances. The executive order stated, among other things, that OMB was to ensure the effective implementation of policy regarding appropriately uniform, centralized, efficient, effective, timely, and reciprocal agency functions relating to determining eligibility for access to classified national security information. Since 2005, OMB’s Deputy Director for Management has taken several actions to improve the security clearance process, including establishing an interagency working group to improve the reciprocal acceptance of clearances issued by other agencies and taking a lead role in preparing a November 2005 strategic plan to improve personnel security clearance processes governmentwide. Four Key Factors Should Be Considered in Efforts to Reform the Security Clearance Process In our prior work, we identified four key factors that should be considered to reform the security clearance process. 
These include (1) ensuring a strong requirements-determination process, (2) building quality in all clearance processes, (3) developing additional metrics to provide a fuller picture of clearance processes, and (4) including long-term funding requirements of security clearance reform. Ensuring a Strong Requirements-Determination Process Can Help Manage Clearance Workloads and Costs As we testified in February 2008, ensuring a strong requirements-determination process can help the government manage the workloads and costs associated with the security clearance process. Requirements determination in the clearance process begins with establishing whether a position requires a clearance, and if so, at what level. We have previously stated that any reform process should address whether the numbers and levels of clearances are appropriate, since this initial stage in the clearance process can affect workloads and costs in other clearance stages. While having a large number of cleared personnel can give the military services, agencies, and industry a great deal of flexibility when assigning personnel, having unnecessary requirements for security clearances increases the investigative and adjudicative workloads that are required to provide the clearances and flexibility, and further taxes a clearance process that already experiences delays in determining clearance eligibility. A change in the level of clearances being requested also increases the investigative and adjudicative workloads. For example, an increase in the proportion of investigations at the top secret level increases workloads and costs because top secret clearances must be renewed twice as often as secret clearances (i.e., every 5 years versus every 10 years). In August 2006, OPM estimated that approximately 60 total staff hours are needed for each investigation for an initial top secret clearance and 6 total staff hours are needed for each investigation to support a secret or confidential clearance.
The doubling of the frequency along with the increased effort to investigate and adjudicate each top secret reinvestigation adds costs and workload for the government. Cost. For fiscal year 2008, OPM’s standard billing rate is $3,711 for an investigation for an initial top secret clearance; $2,509 for an investigation to renew a top secret clearance, and $202 for an investigation for a secret clearance. The cost of obtaining and maintaining a top secret clearance for 10 years is approximately 30 times greater than the cost of obtaining and maintaining a secret clearance for the same period. For example, an individual getting a top secret clearance for the first time and keeping the clearance for 10 years would cost the government a total of $6,220 in current year dollars ($3,711 for the initial investigation and $2,509 for the reinvestigation after the first 5 years). In contrast, an individual receiving a secret clearance and maintaining it for 10 years would result in a total cost to the government of $202 ($202 for the initial clearance that is good for 10 years). Time/Workload. The workload is also affected by the scope of coverage in the various types of investigations. Much of the information for a secret clearance is gathered through electronic files. However, the investigation for a top secret clearance requires the information needed for the secret clearance as well as data gathered through time-consuming tasks such as interviews with the subject of the investigation request, references in the workplace, and neighbors. The investigative workload for a top secret clearance increases about 20-fold compared to the workload for a secret clearance, since (1) the average investigative report for a top secret clearance takes about 10 times as many investigative staff hours as the average investigative report for a secret clearance, and (2) the top secret clearance must be renewed twice as often as the secret. 
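The cost and workload comparisons above reduce to simple arithmetic. The following sketch is illustrative only; it uses the fiscal year 2008 OPM billing rates and the August 2006 staff-hour estimates quoted in the text, and the variable names are our own.

```python
# Illustrative arithmetic for the clearance cost and workload comparison.
# Dollar figures are OPM's fiscal year 2008 standard billing rates cited above.
TOP_SECRET_INITIAL = 3_711   # initial top secret investigation
TOP_SECRET_RENEWAL = 2_509   # top secret reinvestigation (required every 5 years)
SECRET_INITIAL = 202         # secret investigation (good for 10 years)

# 10-year cost: one initial top secret investigation plus one reinvestigation,
# versus a single secret investigation.
top_secret_10yr = TOP_SECRET_INITIAL + TOP_SECRET_RENEWAL   # $6,220
secret_10yr = SECRET_INITIAL                                # $202
print(top_secret_10yr / secret_10yr)   # roughly 30 times greater

# Investigative workload over 10 years: about 60 staff hours per top secret
# investigation versus 6 for a secret one, renewed twice as often.
staff_hour_ratio = 60 / 6    # 10 times the staff hours per investigation
renewal_ratio = 2            # every 5 years versus every 10 years
print(staff_hour_ratio * renewal_ratio)   # about a 20-fold increase
```

Run as shown, the figures reproduce the roughly 30-to-1 cost ratio and the approximately 20-fold investigative workload increase described above.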
Additionally, the adjudicative workload increases about 4-fold. In 2007, DOD officials estimated that it took about twice as long to review an investigative report for a top secret clearance, which would need to be done twice as often as for a secret clearance. We are not suggesting that the numbers and levels of clearances are or are not appropriate—only that any unnecessary requirements in this initial phase use government resources that can be utilized for other purposes, such as building additional quality into other clearance processes or decreasing delays in clearance processing. Unless reforms ensure a strong requirements-determination process is present, workload and costs may be higher than necessary. Building Quality in All Processes Could Promote Positive Outcomes Such as Greater Clearance Reciprocity We have emphasized—since the late 1990s—a need to build more quality and quality monitoring throughout the clearance process to promote positive outcomes such as greater clearance reciprocity. In our November 2005 testimony on the previous governmentwide strategic plan to improve the clearance process, we noted that the plan devoted little attention to monitoring and improving the quality of the personnel security clearance process, and that limited attention and reporting about quality continues. In addition, when OMB issued its February 2007 annual report on security clearances, it documented quality with a single metric in one of the six phases of the security clearance process (i.e., requirements setting, application submission, investigation, adjudication, appeal, and clearance updating). OMB stated that overall, less than 1 percent of all completed investigations are returned to OPM from the adjudicating agencies for quality deficiencies. 
When OMB issued its February 2008 annual report on security clearances, it did not discuss the percentage of completed investigations that are returned to OPM or the development or existence of any other metric measuring the level of quality in security clearance processes or products. We have also reported that it is problematic to equate the quality of investigations with the percentage of investigations that are returned by requesting agencies due to incomplete case files. For example, in October 1999 and again in our November 2005 evaluation of the governmentwide strategic plan, we stated that the number of investigations returned for rework is not by itself a valid indicator of quality because adjudication officials said they were reluctant to return incomplete investigations as they anticipated this would lead to further delays. As part of our September 2006 report, we examined a different aspect of quality—the completeness of documentation in investigative and adjudicative reports. We found that OPM provided some incomplete investigative reports to DOD adjudicators, which the adjudicators then used to determine top secret clearance eligibility. In addition, DOD adjudicators granted clearance eligibility without requesting additional information for any of the incomplete investigative reports and did not document that they considered some adjudicative guidelines when adverse information was present in some reports. In our September 2006 report, we recommended that regardless of whether the metric on investigations returned for rework continues to be used, OMB’s Deputy Director for Management should require OPM and DOD to develop and report metrics on investigative and adjudicative completeness and other measures of quality. In his comments to our report, OMB’s Deputy Director for Management did not take exception to this recommendation. 
We are currently reviewing the timeliness and quality of DOD personnel security clearances in ongoing work and plan to review any actions taken by OMB with regard to this recommendation. In September 2006, we also reported that while eliminating delays in clearance processes is an important goal, the government cannot afford to achieve that goal at the expense of quality. We additionally reported that the lack of full reciprocity of clearances is an outgrowth of agencies’ concerns that other agencies may have granted clearances based on inadequate investigations and adjudications. An interagency working group, the Security Clearance Oversight Steering Committee, noted that agencies are reluctant to be accountable for poor quality investigations or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies need to have confidence in the quality of the clearance process. Without full documentation of investigative actions, information obtained, and adjudicative decisions, agencies could continue to require duplicative investigations and adjudications. It will be important for any reform process to incorporate both quality and quality monitoring and reporting throughout the clearance process. In their absence, reciprocity concerns will continue to exist and Congress will not have sufficient information to perform its oversight function. Government Clearance Metrics Emphasize Timeliness Measurement, but Additional Metrics Could Provide a Fuller Picture of Clearance Processes As we testified in February 2008, reform efforts should also consider metrics beyond timeliness to evaluate the clearance processes and procedures and to provide a more complete picture of the performance of a reformed clearance process. Prior GAO reports as well as inspector general reports identify a wide variety of methods and metrics that program evaluators have used to examine clearance processes and programs. 
For example, our 1999 report on security clearance investigations used multiple methods to examine numerous issues that included documentation missing from investigative reports; investigator training (courses, course content, and number of trainees); investigators’ perceptions about the process; customer perceptions about the investigations; and internal controls to protect against fraud, waste, abuse, and mismanagement. Much of the recent quantitative information provided on clearances has dealt with how much time it takes for the end-to-end processing of clearances (and related measures such as the numbers of various types of investigative and adjudicative reports generated); however, there is less quantitative information on other aspects of the clearance process such as the metrics listed above. In February 2008, we noted that including these additional metrics could add value in monitoring clearance processes and provide a more complete picture of the performance of a reformed clearance process. In our November 2005 testimony, we noted that a previous government plan to improve the clearance process placed an emphasis on monitoring the timeliness of clearances governmentwide, but that plan detailed few of the other elements that a comprehensive strategic plan might contain. An underlying factor that places emphasis on timeliness is the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA). Among other things, IRTPA established specific timeliness guidelines to be phased in over 5 years. The act states that, in the initial period that ends in 2009, each authorized adjudicative agency shall make a determination on at least 80 percent of all applications for personnel security clearances within an average of 120 days after the receipt of the application for a security clearance by an authorized investigative agency. 
This 120-day period includes no more than 90 days to complete the investigative phase of the clearance review and a period of not longer than 30 days to complete the adjudicative phase of the clearance review. By December 17, 2009, the act will require that adjudicative agencies make a determination on at least 90 percent of all applications for a security clearance within an average of 60 days after the date of receipt of the application, including no more than 40 days for the investigation and 20 days for the adjudication. Moreover, IRTPA also includes a requirement for a designated agency (currently OMB) to provide information on, among other things, the timeliness of security clearance determinations in annual reports to Congress through 2011, as OMB did most recently in February 2008. While timeliness is important, other metrics are also needed to evaluate a reformed clearance process. Long-Term Funding Requirements Information Could Enable More-Informed Congressional Oversight of Security Clearance Reform In February 2008, we recommended that the Joint Reform Team also provide Congress with long-term funding requirements as it develops plans to reform the security clearance process. We have previously reported that DOD has not provided Congress with long-term funding needs for industry personnel security clearances. In February 2008, we reported that in its August 2007 report to Congress, DOD provided funding requirements information that described its immediate needs for its industry personnel security program, but it did not include information about the program’s long-term funding needs. Specifically, DOD’s August 2007 required report on clearances for industry personnel provided less than 2 years of data on funding requirements. In its report, DOD identified its immediate needs by submitting an annualized projected cost of $178.2 million for fiscal year 2007 and a projected funding need of approximately $300 million for fiscal year 2008. 
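The IRTPA phase limits described above can be summarized in a small lookup. This is a simplified per-case illustration: the act's actual requirements apply to averages over at least 80 percent (initially) or 90 percent (after December 17, 2009) of applications, and the period labels here are our own shorthand, not the act's terms.

```python
# Simplified illustration of the IRTPA timeliness targets summarized above.
# Checks a single clearance against the phase limits; the act's actual
# requirements apply to averages across most applications, not each case.
IRTPA_TARGETS = {
    "through 2009": {"investigation": 90, "adjudication": 30, "total": 120},
    "after 2009": {"investigation": 40, "adjudication": 20, "total": 60},
}

def within_targets(investigation_days, adjudication_days, period):
    """Return True if both phases and the total fall within the period's limits."""
    t = IRTPA_TARGETS[period]
    return (investigation_days <= t["investigation"]
            and adjudication_days <= t["adjudication"]
            and investigation_days + adjudication_days <= t["total"])
```

For example, a clearance that took 85 days to investigate and 25 days to adjudicate would fall within the initial targets but not the tighter post-2009 targets.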
However, the report did not include information on (1) the funding requirements for fiscal year 2009 and beyond even though the survey used to develop the funding requirements asked contractors about their clearance needs through 2010 and (2) the tens of millions of dollars that the Defense Security Service Director testified before Congress in May 2007 were necessary to maintain the infrastructure supporting the industry personnel security clearance program. As noted in our February 2008 report, limiting or excluding funding information in security clearance reports for Congress and the executive branch reduces the utility of those reports in developing and overseeing budgets for reform. In addition, the long-term funding requirements to implement changes to security clearance processes are also needed to enable the executive branch to compare and prioritize alternative proposals for reforming the clearance processes, especially as the nation’s fiscal imbalances constrain federal funding. Without information on long-term funding requirements, neither Congress nor the executive branch will have sufficient information to perform their budget oversight and development functions. Conclusions We are encouraged that the Joint Reform Team issued an initial plan to develop a reformed federal government security clearance process. As the Joint Reform Team develops its reform initiatives, we encourage the team to consider the four factors highlighted in my statement today. Because much remains to be done before a new system can be designed and implemented, we look forward to evaluating the Joint Reform Team’s efforts to assist Congress in its oversight. Chairman Akaka and members of the subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. Contact and Acknowledgments For further information regarding this testimony, please contact me at (202) 512-3604 or farrellb@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are David E. Moser, Assistant Director; Renee S. Brown; Shvetal Khanna; James P. Klein; Caryn Kuebler; Ron La Due Lake; Gregory Marchand; and Brian D. Pegram. Related GAO Products Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007. Defense Business Transformation: A Full-time Chief Management Officer with a Term Appointment Is Needed at DOD to Maintain Continuity of Effort and Achieve Sustainable Success. GAO-08-132T. Washington, D.C.: October 16, 2007. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. Managing Sensitive Information: DOD Can More Effectively Reduce the Risk of Classification Errors. GAO-06-706. Washington, D.C.: June 30, 2006. 
DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. Questions for the Record Related to DOD’s Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005. Questions for the Record Related to DOD’s Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005. Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. DOD’s High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission’s Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. 
DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004. DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 1974, GAO has examined personnel security clearance processes and acquired a historical view of key factors to consider in reform efforts. GAO placed the Department of Defense's (DOD) personnel security clearance program, which represents 80 percent of federal government clearances, on its high-risk list in 2005 due to long-standing problems. These problems include incomplete investigative reports from the Office of Personnel Management (OPM), the agency primarily responsible for providing clearance investigation services; the granting of some clearances by DOD adjudicators even when required data were missing from the investigative reports used to make such determinations; and delays in completing clearance processing. Delays can lead to a heightened risk of disclosure of classified information, additional costs and delays in completing related contracts, and problems retaining qualified personnel. DOD has reported on these continuing delays. However, there has been recent high-level governmentwide attention to improving the process, including establishing a team to develop a reformed federal government security clearance process. This statement addresses four key factors that should be considered in personnel security clearance reforms. This statement draws on GAO's past work, which included reviews of clearance-related documents and interviews of senior officials at DOD and OPM. Efforts to reform personnel security clearance processes should consider, among other things, the following four key factors: (1) a strong requirements determination process, (2) quality in all clearance processes, (3) metrics to provide a fuller picture of clearance processes, and (4) long-term funding requirements of security clearance reform. 
In February 2008, GAO noted that a sound requirements process is important because requesting a clearance for a position in which it will not be needed, or in which a lower-level clearance would be sufficient, will increase both costs and investigative workload unnecessarily. For example, the cost of obtaining and maintaining a top secret clearance for 10 years is approximately 30 times greater than the cost of obtaining and maintaining a secret clearance for the same period. Also, changing a position's clearance level from secret to top secret increases the investigative workload for that position about 20-fold. Building quality throughout the clearance process could promote positive outcomes, including more reciprocity governmentwide. However, agencies have paid little attention to this factor despite GAO's 2006 recommendation to place more emphasis on quality. For example, the Office of Management and Budget's (OMB) February 2007 report on security clearances documented quality with a single metric in only one of the six phases of the process. Further, OMB did not discuss the development or existence of any metric measuring the level of quality in security clearance processes or products in its February 2008 report. Concerns about the quality of investigative and adjudicative work underlie the continued reluctance of agencies to accept clearances issued by other agencies; thus, government resources may be used to conduct duplicative investigations and adjudications. Federal agencies' efforts to monitor clearance processes emphasize timeliness, but additional metrics should be developed to provide a fuller picture of the performance of the clearance process. GAO has highlighted a variety of metrics in its reports (e.g., completeness of investigative reports, staff's and customers' perceptions of the process, and the adequacy of internal controls), all of which could add value in monitoring clearance processes. 
The emphasis on timeliness is due in part to the Intelligence Reform and Terrorism Prevention Act of 2004, which provides guidelines for the speed of completing clearances and requires annual reporting of that information to Congress. Providing Congress with the long-term funding requirements to implement changes to security clearance processes could enable more-informed congressional oversight. Reform efforts should identify long-term funding requirements to implement proposed changes, so that decision makers can compare and prioritize alternative reform proposals in times of fiscal constraint. The absence of long-term funding requirements to implement reforms would limit the ability of decision makers in the executive and legislative branches to carry out their budgetary development and oversight functions.
Background In the wake of the 1998 bombings at the U.S. embassies in Nairobi, Kenya, and Dar es Salaam, Tanzania, State has received increased funding for the construction of new, secure facilities overseas. Funding from fiscal year 1999 to 2004 totaled about $3.4 billion. In addition, Congress passed the Secure Embassy Construction and Counterterrorism Act of 1999. The act established a number of security requirements for diplomatic facilities overseas, one of which was that all U.S. government personnel (except those under the command of an area military commander) at any new U.S. diplomatic facility abroad must be located at the same site. State identified facilities at about 185 posts that would need to be replaced to meet the security standards. To help manage this large-scale construction program, OBO developed the Long-Range Overseas Buildings Plan, first published in July 2001 and recently updated in March 2004. The plan is updated annually, adding new projects as scheduled projects’ construction contracts are awarded. The plan prioritizes posts based on security and operational considerations, including input from State’s regional bureaus and the Bureau of Diplomatic Security. The most recent version of the plan prioritizes 77 proposed security capital and regular capital projects from fiscal years 2004 through 2009, including 18 separate USAID annex buildings. Until the late 1990s, the majority of USAID missions were not colocated with embassies but existed in separate commercial or freestanding buildings. Most of these facilities were rented; several were built with host country trust funds, and a small number were constructed with funds appropriated by the Foreign Operations Appropriations Acts. Since the 1999 colocation requirement, State and USAID have not been fully successful in obtaining funding for construction of separate USAID annex buildings at locations where State was building a new embassy compound. 
In its fiscal year 2001 report on Commerce, Justice, and State funding, the House Committee on Appropriations wrote that it did not approve the use of the funds for the USAID annexes because appropriations requirements of USAID fall under the jurisdiction of the Foreign Operations, Export Financing, and Related Programs Subcommittee. In an effort to overcome funding problems, USAID requested that the Foreign Operations subcommittee fund a new account for fiscal year 2003, the Capital Investment Fund, to fund information technology enhancements and construction of colocated USAID facilities. Although the fund has been established, USAID has not obtained full funding to construct all of its buildings. In its report on the fiscal year 2003 Foreign Operations appropriation, the House Committee on Appropriations noted that buildings and space for all other government agencies overseas were appropriated through State’s account for overseas construction, and stated that therefore the committee had not funded all requests for USAID buildings on new embassy compounds. State’s fiscal year 2005 budget request, which has been approved by the House and is pending approval in the Senate, includes the construction of four USAID buildings anticipated to be funded from contributions through the Capital Security Cost-Sharing Program. State Has Built Compounds in Stages State has built new embassy compound facilities in separate stages to accommodate the lack of USAID funding, according to State and USAID officials. Only one of three new embassy compounds completed to date includes the planned annex for USAID. In addition, contracts have been awarded or construction is under way on several more compounds that do not include, but will eventually have, a separate annex for USAID. Under OBO’s current 6-year building plan, nonconcurrent construction will continue through at least fiscal year 2009. 
State and USAID Attribute Nonconcurrent Construction to Lack of Funds State initiated the Security Capital Construction Program to replace its most vulnerable posts. Under this program, OBO is constructing replacement facilities on embassy and consulate compounds that will contain the main office building, or chancery, all support buildings, and a separate annex building for USAID, where necessary. According to OBO, it has always preferred to construct all components of a new embassy compound concurrently, and its 2002 long-range plan included projects in which the USAID building would be built concurrently with the rest of the compound. It was only after USAID did not receive funding for its annexes in fiscal year 2001 that OBO began to move to a nonconcurrent approach to construction, according to OBO officials. Since 1999, OBO has completed construction of new embassy compounds in Dar es Salaam, Nairobi, and Kampala, Uganda, which were planned to include a separate facility for USAID. So far, OBO has completed the annex for USAID only at the compound in Dar es Salaam. Initially, OBO awarded a construction contract that did not include the USAID annex, but USAID received $15 million in additional operating expense funds through the regular appropriation process to pay for new construction. In addition, $2.5 million from program funds was used with $25 million obtained from the Security Supplemental Account for security upgrades. The funding became available in time for OBO to modify the original construction contract and complete the USAID annex at the same time as the rest of the compound. For Nairobi, OBO awarded a construction contract for a new embassy compound in September 1999 that did not include the USAID facility because there were no funds for this USAID annex. Subsequently, OBO and the contractor negotiated to include the USAID annex as a modification to the original contract, but sufficient funding did not become available in time. 
Construction of the chancery building was completed in 2003. USAID received funding for its annex in fiscal year 2003; construction began in June 2004 and is scheduled to end in June 2006, 3 years after the compound was completed and became operational (see fig. 1). In the meantime, USAID is leasing space at a cost of about $300,000 per year on the campus of a nongovernmental research facility. A construction contract for the new embassy compound in Kampala was awarded in 1999 and construction was completed in fiscal year 2002, but USAID did not receive funding for its annex until fiscal year 2004. OBO expects to award a construction contract for the USAID annex sometime in 2004, according to an OBO official. USAID plans to remain in its interim location outside the new compound—an office converted from a residence in Kampala and leased for $144,000 per year—until its new facility is built in about 2006. In addition, contracts have been awarded or construction is under way on the following seven compounds that do not include a separate building for USAID because the agency lacked funding for the construction: Yerevan, Armenia; Phnom Penh, Cambodia; Tbilisi, Georgia; Conakry, Guinea; Bamako, Mali; Kingston, Jamaica; and Abuja, Nigeria. At most of these posts, construction of the USAID facility will start between 2 and 4 years after OBO awarded the contract for the compound. For example, OBO awarded construction contracts for the new embassy compounds in Phnom Penh and Conakry in fiscal year 2002 and plans to solicit bids to construct the USAID annexes on these compounds during fiscal year 2004. In Yerevan, the U.S. Ambassador and OBO devised an alternative to waiting for funds to build a separate facility for USAID: OBO is adding a floor to a warehouse building under construction on the compound to house USAID. 
OBO can add a floor to the building for less money than it would cost to build a separate annex, although USAID will have less space, according to State and USAID officials. Table 1 shows the contract award dates for selected new embassy compounds and the award dates for the corresponding USAID annex. The Secretary of State has had to issue waivers of the colocation requirement for some of these locations to permit USAID to remain outside the compound pending construction of a facility on the compound. Nonconcurrent Construction Will Continue for Years under Current Plan In addition to the projects previously discussed, OBO’s current 6-year building plan includes 18 new embassy compounds that will include a separate facility for USAID, 9 of which are slated to be built nonconcurrently. For the remainder of fiscal year 2004, OBO will award construction contracts for 3 new embassy compounds without including the USAID annex: in Managua, Nicaragua; Kathmandu, Nepal; and Accra, Ghana. The current schedule also calls for awarding embassy construction contracts in 5 locations in fiscal year 2006 but not awarding the contracts for USAID annexes until fiscal year 2007; and awarding 1 embassy project in fiscal year 2008 but not awarding the USAID contract until at least fiscal year 2009. Table 2 lists new embassy compound construction projects through fiscal year 2009. Concurrent Construction Could Decrease Costs, Improve Security Nonconcurrent construction increases the overall cost to the government and raises concerns about security. We also found that, in some cases, constructing a separate annex building could cost more than building a larger chancery to accommodate USAID. OBO’s own analysis of 9 projects shows that the current schedule of nonconcurrent construction will add more than $35 million in costs. 
In addition, extrapolating from OBO’s data, we have projected an overall cost increase of as much as $78 million if all 18 future USAID annexes follow the historical pattern of nonconcurrent construction. This estimate does not include security enhancements and other costs that USAID will incur while its staff are in interim facilities pending completion of the USAID annex. This estimate also does not include potential cost savings from merging USAID space into chancery buildings in the future. Finally, nonconcurrent construction has security implications for USAID employees left behind in interim facilities and for the other U.S. government employees moved to more secure compounds. State and USAID Officials and Contractors Agree That Nonconcurrent Construction Is the More Costly Approach All government officials and private construction contractors with whom we spoke agreed that the practice of nonconcurrent construction significantly adds to the overall expense of building USAID office space. Building nonconcurrently can result in a second expensive mobilization of contractor staff and equipment, additional work to procure building materials, and added construction management oversight. According to contractors experienced in building embassy compounds overseas, such remobilization and duplication of support activities can add 20 percent to 25 percent to what a concurrent construction contract would have cost, depending on the location. They said that when the U.S. government does not receive funds for the USAID annex until after the contractor has finished building the embassy, there is no chance of maximizing economies of scale that result from contractor staff and equipment already on site. 
In Nairobi, for example, according to one of the contractors we met with, OBO could have built the USAID annex for $19 million using the same contractor building the embassy chancery, but OBO awarded a contract for almost $30 million to a different builder since funding was not available until 4 years later. OBO officials have also said that in addition to contractor mobilization costs, nonconcurrent expenses must include OBO’s added supervision and site security costs for a second project. For instance, in Nairobi, OBO’s project supervision and construction security cost estimates for the USAID annex rose from $683,000 for concurrent building to $2.9 million for nonconcurrent building. Nonconcurrent construction also increases security enhancement expenses. Until secure office space is built, USAID must either remain in or move to interim facilities. The interim site may require significant security upgrades to obtain some minimum level of protection for staff, including the leasing of surrounding property to create setbacks from roads, as well as the addition of perimeter fencing and installation of anti-ram barriers. Security, supervision, and maintenance personnel costs would continue to accrue until the new USAID facility was completed. For instance, in Kampala, Uganda, USAID will likely spend $3.2 million for operational and security expenses from the time the new chancery opened in 2002 until the annex in the new embassy compound is finished in 2006. Concurrent Construction of Future USAID Annexes Could Save Millions of Dollars OBO estimates that future USAID annex projects now scheduled for nonconcurrent construction will increase costs to taxpayers by $35 million. Extrapolating from these data, we project that annex construction costs could rise by $68 million to $78 million if all 18 future annexes were delayed. 
These expenses do not include the $27 million to $30 million cost increase that, based on OBO’s data, we inferred was generated by nonconcurrent construction of annexes for embassy compounds awarded before 2004. There are no opportunities to significantly reduce this expense because these compounds are already built or under construction without USAID annexes. OBO estimates that building the 9 USAID annexes now scheduled for nonconcurrent construction over the next 5 budget years will cost $43 million (or $35 million at present value) more than building them at the same time as their compounds. From these and other OBO cost estimates, we further extrapolated an average cost increase of about 24 to 31 percent per USAID annex for nonconcurrent construction. Using OBO’s cost data, we calculated that the potential cost increase of nonconcurrent construction for all future annex projects would range from $88 million to $101 million over the next five budget years (or $68 million to $78 million at present value). Figure 2 compares concurrent and nonconcurrent construction costs for 18 projects to be built after fiscal year 2004. Further, our estimates do not include the costs of additional operations and lease expenses or increased security and personnel costs, for which we did not have comprehensive data but which could be substantial. For example, OBO has estimated that in one European location, USAID will have to pay an additional $690,000 for rent because of a 1-year delay in awarding the construction contract. In two African locations, USAID officials estimate they will have to pay an additional $5.5 million for 3 or more years of rent and continuing security expenses until their new facilities are built.
Including USAID Space within a Larger Chancery Rather Than Constructing a Separate USAID Annex May Decrease Costs

Depending on a number of factors, building a separate annex for USAID can cost more than designing additional space for USAID within the chancery. Two of the contractors we met with stated that constructing one building could be more cost effective than constructing two. OBO has estimated that in two African locations where new embassy construction is scheduled for 2006, building a separate USAID annex would cost at least $6.5 million per site more than housing USAID in the chancery, assuming nonconcurrent construction. OBO officials have said that, except for very large USAID missions, there may be little reason to build a separate USAID annex other than the ease of allocating construction costs to USAID. Further, OBO has recently re-evaluated the office space parameters for new overseas missions, significantly reducing the sizes of proposed USAID annexes. Assuming these revised space allocations are adequate, an additional five to eight proposed USAID annexes would be similar in size (2,500 gross square meters or less) to those at the two African locations OBO analyzed. The cost differentials between building a separate annex and locating USAID within the chancery in those locations could be similar if other key factors, such as building configuration and site conditions, were also comparable. However, according to USAID, factors other than cost should be considered when determining whether to build a separate annex or include space for USAID in the chancery. Such factors include geographic location, the type of work USAID is engaged in, and the security profile of the country. Moreover, a separate unclassified USAID annex may allow greater access for local staff and visitors.
Nonconcurrent Construction Poses Security Risks

In addition to cost considerations, nonconcurrent construction of the USAID annexes raises a number of security concerns, according to State Diplomatic Security and embassy officials as well as USAID security officials. For example, some officials expressed concern about the safety of USAID employees who remain in interim facilities after other U.S. government personnel have moved to the new embassy compounds. State is building the compounds to provide safe, secure facilities because U.S. facilities and personnel have faced continued threats from terrorist and other attacks since the Kenya and Tanzania embassy bombings. For example, from 1998 through 2002, there were 30 terrorist attacks against overseas posts, personnel, and diplomatic residences. During that same period, overseas posts were forced to evacuate personnel or suspend operations 83 times in response to direct threats or unstable security situations in the host country. Terrorists continue to look for targets, according to the security officials, and an interim USAID facility might be perceived to be a “softer” target than a new, more secure embassy, thus making USAID employees more vulnerable to attack. For example, figure 3 shows a new embassy compound main gate with an anti-vehicle delta barrier, anti-ram perimeter wall, and blast-resistant guardhouse containing bomb detection equipment, compared with an interim USAID facility entrance with temporary barriers that are removed after work hours. State, USAID, and embassy officials described a number of actions taken to mitigate the risks for USAID employees who are not colocated in new embassy compounds. For example, a post may construct special jersey barriers and fences, dig trenches, close streets adjacent to a USAID facility to create a setback around the building during the day, and lease properties adjacent to its facilities to create a buffer.
A post may also use contract guard services, deploy surveillance detection teams and mobile response teams, and use the services of local police. Despite actions to mitigate the security risks of nonconcurrent construction, State and USAID officials remain concerned because interim facilities do not meet the security standards established by the Overseas Security Policy Board. In addition to the Secure Embassy Construction and Counterterrorism Act of 1999, which requires a 100-foot setback and colocation of all U.S. government employees at a new site, the security standards for new office buildings include anti-ram perimeter walls and barriers, construction that meets blast protection standards, forced entry/ballistic resistant protection for doors and windows, and controlled access points. A USAID security official stated that, despite measures to reduce security risks, facilities are vulnerable when they are not controlled by the U.S. government. For example, the official said that posts using temporary jersey barriers eliminate the setback each evening when the barriers are removed and the streets are reopened to normal traffic. Further, a State security official stated that the Bureau of Diplomatic Security was an early advocate of colocating all U.S. personnel when a new embassy compound is built. He said that the bureau is concerned from a threat perspective and that the threat to U.S. personnel remains high. He said that when USAID cannot be colocated, the bureau tries to find ways to mitigate the risk but there is no perfect solution. However, he said the bureau does not recommend delaying the construction of a compound until funding for the USAID annex is available because that would leave a greater number of staff vulnerable. Nonconcurrent construction also has security implications for the employees who move into the newly constructed compound.
Subsequent construction of the USAID annex on the compound results in more workers, vehicles, and equipment on site, which may increase the vulnerability of the overall embassy compound and its personnel by giving terrorists the opportunity to conduct surveillance or attack the embassy, according to State and USAID officials. To address this issue, OBO and regional security officers in Nairobi, Kenya, and Kampala, Uganda, described a number of actions required to control the access of construction personnel and equipment to the compound. For example, the regional security officers told us that they need to hire additional security guards to inspect trucks bringing building materials to the compound. The regional security officer in Nairobi said he would need about 14 additional guards to perform these inspections. For some sites, destruction of part of the perimeter wall to add an entrance for the construction vehicles and equipment has been discussed as a way of allowing contractor access to the compound. Construction workers need to undergo background checks and receive identification cards, according to regional security officers; these requirements could place a significant burden on their time and workload unless State hires a site security manager.

Opportunities Exist for More Concurrent Construction

OBO acknowledged that it would be advantageous to the U.S. government to build embassy compounds concurrently. OBO said it may revise its schedule to allow for more concurrent construction and consider on a case-by-case basis whether USAID should have a separate annex if the Capital Security Cost-Sharing Program is funded. However, even without cost sharing, there are opportunities for more concurrent and efficient construction. By delaying one project slated for fiscal year 2006 and estimated to cost more than $100 million, State would have sufficient funds to eliminate the backlog of USAID projects.
Moreover, it is not unprecedented for projects in successive annual plans to be moved from one year to another. For example, over the last three planning cycles, several planned projects have had to be moved from one year to another due to factors such as a failure to acquire land in a timely manner or a change in executive branch priorities. Therefore, if OBO could reschedule planned projects, it could make headway in minimizing nonconcurrent construction. OBO emphasized that it would need congressional support to do this.

Conclusion

OBO’s multibillion-dollar program to build new, secure embassies and consulates around the world was designed to colocate all U.S. employees stationed overseas within a secure compound, as required by law. However, by building the compounds in stages, some employees must temporarily remain in less secure space outside the compound. Concurrent construction will help State and USAID comply with the colocation requirement. Our analysis also shows that concurrent construction likely results in cost savings for the taxpayer and that incorporating all office space into the main chancery building rather than building a separate annex may be, in some cases, a more efficient approach. According to State, a lack of funding and restrictions on the use of funds have required OBO to phase construction of new embassy compounds that have a USAID annex component. State said it will consider revising its construction schedule to achieve more concurrent construction if the Capital Security Cost-Sharing Program is implemented in fiscal year 2005. However, even if the plan is not implemented, opportunities exist to schedule the construction of more projects concurrently.
Matter for Congressional Consideration

In order to minimize costs and further improve security associated with building new embassy compounds, if the Capital Security Cost-Sharing Program is not implemented in fiscal year 2005, Congress may wish to consider alternative funding approaches to support concurrent construction of new embassy compounds.

Recommendations for Executive Action

We recommend that the Director of State’s Bureau of Overseas Buildings Operations (1) update the Long-Range Overseas Buildings Plan to achieve the concurrent construction of USAID facilities to the maximum extent possible; and (2) in coordination with USAID, consider incorporating USAID space into single office buildings in future compounds, where appropriate.

Agency Comments and Our Evaluation

The State Department and the U.S. Agency for International Development provided written comments on a draft of this report (see app. II and app. III). State also provided technical comments, which we have incorporated into the report as appropriate. In its comments, State said that the report is a fair and accurate representation of the issue and welcomed our recommendations. State said that, if the Capital Security Cost-Sharing Program is implemented, it would update the Long-Range Overseas Buildings Plan to achieve concurrent construction to the maximum extent and coordinate with USAID to consider incorporating USAID space into single office buildings in future compounds where appropriate. However, our recommendations and matter for consideration are designed to bring about concurrent construction to the maximum extent regardless of the implementation of the Capital Security Cost-Sharing Program. In its comments, USAID said the report successfully attempts to address the rationale as well as many of the difficulties in achieving the goal of concurrent construction of new embassy compounds and facilities to be occupied by USAID employees on those compounds.
USAID said it agreed with both our recommendations and provided information to support the recommendations and explain its requirements. We are sending copies of this report to interested congressional committees, the Secretary of State, and the Administrator of USAID. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Another GAO contact and staff acknowledgments are listed in appendix IV.

Scope and Methodology

To examine State’s efforts to incorporate office space for the U.S. Agency for International Development (USAID) into the construction of new embassy compounds and to assess the cost and security implications of its approach, we reviewed the State Department’s Bureau of Overseas Buildings Operations (OBO) construction documents and the Long-Range Overseas Buildings Plans for fiscal years 2002 to 2007, 2003 to 2008, and 2004 to 2009; interviewed State Department and USAID officials regarding completed, ongoing, and planned new embassy compound projects that include a separate annex for USAID, including operational, cost, and security issues arising from nonconcurrent construction and the issues involved in housing USAID in separate buildings; interviewed officials from several U.S. construction firms experienced in building new embassy projects regarding the costs of OBO construction scheduling practices; and analyzed OBO estimates of the cost differentials between concurrent and nonconcurrent construction. Further, we visited two field locations—Nairobi, Kenya, and Kampala, Uganda—where we discussed with State and USAID officers at each post the implications of construction sequencing for the embassies and USAID.
To analyze the cost impacts of different USAID annex construction scheduling, we developed a cost model enabling us to extrapolate from State data the aggregate and annual costs of both concurrent and nonconcurrent construction projects for USAID annexes. Our model is based on cost estimate data provided by OBO for 26 projects. (Estimates for Yerevan, Armenia, were not used because OBO no longer plans to build a separate annex for USAID.) For some projects, we had estimates of fiscal year contract award and midpoint construction costs in nominal dollars for concurrent and nonconcurrent construction. Using such data from 13 projects, we estimated the average percentage cost differential per project to build a nonconcurrent annex as ranging from 23.83 percent to 30.75 percent. The higher end of the range results from excluding data for 3 of the 13 locations (Kampala, Uganda; Harare, Zimbabwe; and Kingston, Jamaica) where OBO indicated that site-specific factors accounted for major deviations from the mean. Both average percentage differentials are used to project base-year costs for concurrent construction for Abuja, Nigeria, and for nonconcurrent construction on 11 projects for which we had incomplete data. We also had data on an additional project (Tbilisi, Georgia) but used it only to represent the costs of that project, not to estimate the average cost differential, because the data for that project reflected substantially different building sizes for the concurrent and nonconcurrent construction estimates. Our assumptions included the length of the construction period (24 months for concurrent and 15 months for nonconcurrent construction), a 1-year lag between the proposed award year for concurrent construction and that for nonconcurrent construction, the distribution of costs over the construction period, and average dollar cost escalation of 3 percent per year.
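The core arithmetic of the model can be sketched as follows. This is an illustrative simplification, not the model itself: the $20 million example cost is hypothetical, and the report's actual calculations applied these parameters to OBO's project-level estimates, which are not reproduced here.

```python
# Sketch of the cost-extrapolation logic described above. Dollar inputs are
# hypothetical; only the percentage parameters come from the methodology text.

ESCALATION = 0.03                 # assumed average dollar cost escalation per year
DIFFERENTIALS = (0.2383, 0.3075)  # low/high average nonconcurrent cost premium

def nonconcurrent_cost(concurrent_cost, award_lag_years=1):
    """Project the nominal cost range (in $ millions) of building an annex
    nonconcurrently, given its estimated cost if built concurrently."""
    # One year of cost escalation for the assumed lag in contract award,
    # then the low/high average nonconcurrent premium.
    escalated = concurrent_cost * (1 + ESCALATION) ** award_lag_years
    low, high = DIFFERENTIALS
    return escalated * (1 + low), escalated * (1 + high)

def present_value(nominal_cost, years_after_2004, discount_rate=0.03):
    """Discount a nominal budget-year cost back to fiscal year 2004 dollars."""
    return nominal_cost / (1 + discount_rate) ** years_after_2004

# A hypothetical annex estimated at $20 million if built concurrently:
low, high = nonconcurrent_cost(20.0)
print(f"nonconcurrent estimate: ${low:.1f} million to ${high:.1f} million")
```

Summing such per-project ranges in nominal dollars, and again after discounting, yields figures of the kind reported above (for example, $88 million to $101 million nominal versus $68 million to $78 million at present value).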
These assumptions for each of the 26 projects enabled us to estimate total budget dollar costs and present value costs in fiscal year 2004 dollars. We did not verify the accuracy of OBO’s cost estimates or its methodology for estimating costs. However, we did meet with OBO officials responsible for the cost estimates to discuss their methodology and underlying assumptions. The cost differentials between concurrent and nonconcurrent construction that OBO estimated were consistent with those estimated by two of the contractors we met with. We conducted our work from December 2003 to July 2004 in accordance with generally accepted government auditing standards.

Comments from the Department of State

The following are GAO’s comments on the State Department letter dated September 14, 2004.

GAO Comments

1. Our recommendations and matter for consideration are designed to bring about concurrent construction to the maximum extent regardless of the implementation of the Capital Security Cost-Sharing Program.
2. We have revised our statement accordingly.

Comments from the U.S. Agency for International Development

The following is GAO’s comment on the U.S. Agency for International Development letter dated September 10, 2004.

GAO Comment

We agree with the U.S. Agency for International Development that many factors should be considered to determine whether housing USAID in a separate building or within the chancery building is beneficial to the U.S. government. We have added a brief description of some of these factors.

GAO Contact and Staff Acknowledgments

In addition to the individual named above, Omar Beyah, Janey Cohen, David Hudson, Bruce Kutnick, and La Verne Tharpes made key contributions to this report.
After the 1998 bombings of two U.S. embassies in Africa, the State Department embarked on a multibillion-dollar, multiyear program to build new, secure facilities on compounds at posts around the world. The Secure Embassy Construction and Counterterrorism Act of 1999 generally requires that all U.S. agencies, including the U.S. Agency for International Development (USAID), colocate offices within the newly constructed compounds. This report discusses how State is incorporating office space for USAID into the construction of new embassy compounds and the cost and security implications of its approach. State has built new embassy compounds in separate stages--scheduling construction of the USAID annex after work has begun (or in many cases after work has been completed) on the rest of the compound. State and USAID attributed this practice to a lack of full simultaneous funding for construction at nine locations through fiscal year 2004. Concurrent construction of USAID annexes could help decrease overall costs to the government and help achieve security goals. Concurrent construction would eliminate the second expensive mobilization of contractor staff and equipment and added supervision, security, and procurement support expenses that result from nonconcurrent construction. State has estimated that if nine future USAID annexes scheduled for nonconcurrent construction are built concurrently, it could save taxpayers $35 million. Extrapolating from data provided by State, GAO estimated a total cost savings of around $68 million to $78 million if all 18 future USAID projects are built concurrently. GAO also found that designing additional space for USAID within the main office building, or chancery, may cost less than erecting a separate annex, depending on a number of factors, including the size and configuration of the planned buildings. 
In addition to cost considerations, concurrent construction could help State and USAID comply with the colocation requirement and decrease the security risks associated with staff remaining outside of the embassy compound. For example, USAID staff who remain in a temporary USAID facility after other U.S. government personnel move into a new embassy compound may be more vulnerable to terrorist attack because the temporary facility does not meet security standards for new buildings and may be perceived to be a "softer" target relative to the new, more secure embassy compound. State's current plans call for continued nonconcurrent construction through fiscal year 2009. State acknowledged that there are substantial advantages to concurrent construction and has indicated that it may revise its building schedule to allow for more concurrent construction if a new cost-sharing proposal to fund new embassies by allocating construction costs among all agencies having an overseas presence is implemented in fiscal year 2005. However, even if cost sharing is not implemented, there are still opportunities for building some USAID facilities concurrently with the overall construction of the embassy compound if State, with congressional consent, revised its plan and rescheduled some projects.
Medicare’s Long-Term Outlook Has Worsened

As I have previously testified before this Committee, Medicare as currently structured is fiscally unsustainable. While many people have focused on the improvement in the HI Trust Fund’s shorter-range solvency status, the real news is that Medicare’s long-term outlook has worsened significantly during the past year. A new consensus has emerged that previous program spending projections have been based on overly optimistic assumptions and that actual spending will grow faster than has been assumed.

Traditional HI Trust Fund Solvency Measure Is a Poor Indicator of Medicare’s Fiscal Health

First, let me talk about how we measure Medicare’s fiscal health. In the past, Medicare’s financial status has generally been gauged by the projected solvency of the HI Trust Fund, which covers primarily inpatient hospital care and is financed by payroll taxes. Looked at this way—and based on the latest Trustees’ report—Medicare is viewed as solvent through 2029. (See fig. 1.) However, HI trust fund solvency does not measure the growing cost of the Part B Supplementary Medical Insurance (SMI) component of Medicare, which covers outpatient services and is financed through general revenues and beneficiary premiums. Part B accounts for somewhat more than 40 percent of Medicare spending and is expected to account for a growing share of total program dollars. In addition, HI trust fund solvency does not mean the program is financially healthy. Although the trust fund is expected to remain solvent until 2029, HI outlays are predicted to exceed HI revenues beginning in 2016. As the baby boom generation retires and the Medicare-eligible population swells, the imbalance between outlays and revenues will increase dramatically. Thus, in 15 years the HI trust fund will begin to experience a growing annual cash deficit. At that point, the HI program must redeem Treasury securities acquired during years of cash surplus.
Treasury, in turn, must obtain cash for those redeemed securities through increased taxes, spending cuts, increased borrowing, retiring less debt, or some combination thereof. Clearly, it is total program spending—both Part A and Part B—relative to the entire federal budget and national economy that matters. This total spending approach is a much more realistic way of looking at the combined Medicare program’s sustainability. In contrast, the historical measure of HI trust fund solvency cannot tell us whether the program is sustainable over the long haul. Worse, it can serve to distort the timing, scope, and magnitude of our Medicare challenge.

New Estimates Increase Urgency of Reform Efforts

Besides looking at total program spending, any assessment of Medicare’s financial condition must acknowledge that absent meaningful program reforms, program cost growth will likely be greater than has been previously projected. A technical panel advising the Medicare Trustees recently recommended assuming that future per-beneficiary costs for both HI and SMI eventually will grow at a rate 1 percentage point above GDP growth—about 1 percentage point higher than had previously been assumed. That recommendation was consistent with a similar change CBO made to its Medicare and Medicaid long-term cost growth assumptions last year. In their new estimates published on March 19, 2001, the Trustees adopted the technical panel’s long-term cost growth recommendation. The Trustees note in their report that this new assumption substantially raises the long-term cost estimates for both HI and SMI. In their view, incorporating the technical panel’s recommendation yields program spending estimates that represent a more realistic assessment of likely long-term program cost growth. (See fig. 2.) Under the old assumption (the Trustees’ 2000 best estimate intermediate assumptions), total Medicare spending consumes 5 percent of GDP by 2063.
Under the new assumption (the Trustees’ 2001 best estimate intermediate assumptions), this occurs almost 30 years sooner—2035—and by 2075 Medicare consumes over 8 percent of GDP, compared with 5.3 percent under the old assumption. The difference clearly demonstrates the dramatic implications of a 1 percentage point increase in annual Medicare spending growth over time. Figure 3 reinforces the need to look beyond the HI program. HI is only the first layer in this figure. The middle layer adds the SMI program, which is expected to grow faster than HI in the near future. By the end of the 75-year projection period, SMI will represent almost half of total estimated Medicare costs. If federal Medicaid spending is also considered, an even more complete picture of the future health care entitlement burden emerges. Including Medicaid, federal health care costs will grow to 14.5 percent of GDP from today’s 3.5 percent. Taken together, the two major government health programs—Medicare and Medicaid—represent an unsustainable burden on future generations. In addition, this figure reflects only the federal government’s share—the burden of states’ Medicaid matching costs on state budgets is another fiscal challenge. According to a recent National Governors Association statement, increased Medicaid spending has already made it difficult, if not impossible, for states to increase funding for other priorities. When viewed from the perspective of the federal budget and the economy, the growth in health care spending will become increasingly unsustainable over the longer term. Our message remains the same as in my earlier appearances before this Committee: to move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government in the future.
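The dramatic effect of the revised growth assumption is a matter of simple compounding. The following back-of-the-envelope sketch, which is not the Trustees' actuarial model and uses a hypothetical starting share, shows why a 1-percentage-point gap between program growth and GDP growth roughly doubles a program's share of GDP over a 75-year horizon:

```python
# If program spending grows 1 percentage point faster than GDP each year,
# the program's share of GDP is multiplied by roughly 1.01 annually.

def gdp_share(initial_share, excess_growth, years):
    """Project a spending share of GDP, given annual growth above GDP growth."""
    return initial_share * (1 + excess_growth) ** years

# Hypothetical program at 2.3 percent of GDP, compounded over 74 years:
print(f"{gdp_share(2.3, 0.01, 74):.1f} percent of GDP")  # the share roughly doubles
```

The same arithmetic explains why early action matters: each year the excess growth is allowed to compound raises the base from which all later costs grow.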
Assuming, for example, that Congress and the President adhere to the often-stated goal of saving the Social Security surpluses, our long-term simulations show a world by 2030 in which Social Security, Medicare, and Medicaid absorb most of the available revenues within the federal budget. Under this scenario, these programs would require more than three-quarters of total federal revenue even without adding a prescription drug benefit. (See fig. 4.) Little room would be left for other federal spending priorities such as national defense, education, and law enforcement. Absent changes in the structure of Medicare and Social Security, sometime during the 2040s government would do nothing but mail checks to the elderly and their health care providers. Accordingly, substantive reform of the Medicare and Social Security programs remains critical to recapturing our future fiscal flexibility. As our long-term budget simulations show, this is true even if the entire projected surplus is saved. (See fig. 5.) Higher cost estimates are not the only reason why early action to address the daunting challenges of Medicare is critical. First, ample time is required to phase in the reforms needed to put this program on a more sustainable footing before the baby boomers retire. Second, timely action to bring costs down pays large fiscal dividends for the program and the budget. The high projected growth of Medicare in the coming years means that the earlier reform begins, the greater the savings will be as a result of the effects of compounding. Beyond reforming the Medicare program itself, maintaining an overall sustainable fiscal policy and strong economy is vital to enhancing our nation’s future capacity to afford paying benefits in the face of an aging society. Decisions on how we use today’s surpluses can have wide-ranging impacts on our ability to afford tomorrow’s commitments. 
As I have testified before, you can think of the budget choices you face as a portfolio of fiscal options balancing today’s unmet needs with tomorrow’s fiscal challenges. At the one end—with the lowest risk to the long-range fiscal position—is reducing publicly held debt. At the other end—offering the greatest risk—is increasing entitlement spending without fundamental program reform. Reducing publicly held debt helps lift future fiscal burdens by freeing up budgetary resources encumbered for interest payments, which currently represent more than 12 cents of every federal dollar spent, and by enhancing the pool of economic resources available for private investment and long-term economic growth. This is particularly crucial in view of the known fiscal pressures that will begin bearing down on future budgets in about 10 years as the baby boomers start to retire. However, as noted above, debt reduction is not enough. Our long-term simulations illustrate that, absent entitlement reform, even saving all projected unified surpluses will ultimately be insufficient to prevent the return of large persistent deficits.

Benefit Expansions Will Need to Be Accompanied by Meaningful Reform

Despite common agreement that, without reform, future program costs will consume growing shares of the federal budget, there is also a mounting consensus that Medicare’s benefit package should be expanded to cover prescription drugs, which will add billions to the program’s cost. Thus, to contain spending while revamping benefits, the Congress is considering proposals to fundamentally reform Medicare. Our work on the nuts and bolts of the Medicare program provides, I believe, some considerations that are relevant to your discussion regarding the potential addition of a prescription drug benefit, various Medicare reform options based on competition, effective implementation and refinement of new policies, and improving program management.
I make these observations ever mindful of the need to ensure the program’s sustainability for the longer term. Adding a Fiscally Responsible Prescription Drug Benefit Will Entail Multiple Trade-Offs Among the major policy challenges facing the Congress today is how to reconcile Medicare’s unsustainable long-range financial condition with the growing demand for an expensive new benefit—namely, coverage for prescription drugs. It is a given that prescription drugs play a far greater role in health care now than when Medicare was created. Today, Medicare beneficiaries tend to need and use more drugs than other Americans. However, because adding a benefit of such potential magnitude could further erode the program’s already unstable financial condition, we face difficult choices about design and implementation options that will have a significant impact on beneficiaries, the program, and the marketplace. Let’s examine the current status regarding Medicare beneficiaries and drug coverage. About a third of Medicare beneficiaries have no coverage for prescription drugs. Some beneficiaries with the lowest incomes receive coverage through Medicaid. Some beneficiaries receive drug coverage through former employers, some can join Medicare+Choice plans that offer drug benefits, and some have supplemental Medigap coverage that pays for drugs. However, significant gaps remain. For example, Medicare+Choice plans offering drug benefits are not available everywhere and generally do not provide catastrophic coverage. Medigap plans are expensive and have caps that significantly constrain the protection they offer. Thus, beneficiaries with modest incomes and high drug expenditures are most vulnerable to these coverage gaps. Overall, the nation’s spending on prescription drugs has been increasing about twice as fast as spending on other health care services, and it is expected to keep growing. 
Recent estimates show that national per-person spending for prescription drugs will increase at an average annual rate exceeding 10 percent until at least 2010. As the cost of drug coverage has been increasing, employers and Medicare+Choice plans have been cutting back on drug benefits by raising enrollees’ cost-sharing, charging higher copayments for more expensive drugs, or eliminating the benefit altogether. It is not news that adding a prescription drug benefit to Medicare will be costly. However, the cost consequences of a Medicare drug benefit will depend on choices made about its design—including the benefit’s scope and financing mechanism. The details of its implementation will also have a significant impact on beneficiaries, program spending, and the pharmaceutical market. Experience suggests that some combination of enhanced access to discounted prices, targeted subsidies, and measures to make beneficiaries aware of costs may be needed. Any option would need to balance concerns about Medicare sustainability with the need to address what will likely be a growing hardship for beneficiaries in obtaining prescription drugs. Reform Options Based on Competition Offer Advantages but Contain Limitations As you consider the options to add a drug benefit, fiscal prudence argues for balancing this action with the adoption of meaningful Medicare spending reforms. Before the 107th Congress are two leading proposals, popularly known as Breaux-Frist I and Breaux-Frist II. Both proposals are based on a model in which a competitive process determines the amount that the government and beneficiaries pay to participating health plans. Currently, Medicare follows a complex formula to set payment rates for Medicare+Choice plans, and plans compete primarily on the richness of their benefit packages. Medicare permits plans to earn a reasonable profit, equal to the amount they can earn from a commercial contract. 
Efficient plans that keep costs below the fixed payment amount can use the “savings” to enhance their benefit packages, thus attracting additional members and gaining market share. Under this arrangement, competition among Medicare plans may produce advantages for beneficiaries, but the government reaps no savings. In contrast, the competitive premium approach of both Breaux-Frist proposals offers certain advantages. Instead of having the government administratively set a payment amount and letting plans decide—subject to some minimum requirements—the benefits they will offer, plans would set their own premiums and offer at least a required minimum Medicare benefit package. Under both proposals, beneficiaries would generally pay a portion of the premium and Medicare would pay the rest. Plans operating at lower cost could reduce premiums, attract beneficiaries, and increase market share. Beneficiaries who joined these plans would enjoy lower out-of-pocket expenses. Unlike today’s Medicare+Choice program, the premium support approach provides the potential for taxpayers to benefit from the competitive forces. As beneficiaries migrated to lower-cost plans, the average government payment would fall. A key difference between the two Breaux-Frist proposals is in how the program’s contribution is determined. Under Breaux-Frist I, traditional Medicare would, like the other plans, have to set a premium price. The amount of the program contribution would be based on the average of the traditional plan’s premium price and the prices set by the other plans. Under Breaux-Frist II, the program contribution would be based on the traditional plan’s premium price alone. Under either version, Medicare costs would be more transparent: beneficiaries could better see what they and the government were paying for in connection with health care expenditures. More importantly, both beneficiaries and the government would share in the savings if plans lower premiums to gain market share.
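A simplified sketch of how the program contribution might differ under the two approaches (the premium figures are hypothetical, and the actual proposals contain rules not modeled here):

```python
def program_contribution(plan_premiums, traditional_premium, version):
    """Government contribution under a simplified reading of each proposal.

    "I": based on the average of the traditional plan's premium and
    the competing plans' premiums. "II": based on the traditional
    plan's premium alone.
    """
    if version == "I":
        all_premiums = plan_premiums + [traditional_premium]
        return sum(all_premiums) / len(all_premiums)
    return traditional_premium

def beneficiary_share(premium, contribution):
    """Beneficiary pays the plan premium net of the program contribution."""
    return max(premium - contribution, 0)

# Hypothetical monthly premiums: three competing plans plus
# traditional fee-for-service Medicare.
plans, traditional = [520, 560, 600], 640

contribution_i = program_contribution(plans, traditional, "I")    # average: 580
contribution_ii = program_contribution(plans, traditional, "II")  # traditional: 640

# Under either version, joining a lower-cost plan lowers the
# beneficiary's out-of-pocket premium.
assert beneficiary_share(600, contribution_i) > beneficiary_share(520, contribution_i)
```

The sketch shows the transparency point made above: the beneficiary's share is simply the gap between a plan's stated premium and the program contribution, so both prices are visible.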
Experience with the Medicare+Choice program reminds us that competition in Medicare has its limits. First, not all geographic areas are able to support multiple health plans. Medicare health plans historically have had difficulty operating efficiently in rural areas because of a sparseness of both beneficiaries and providers. In 2000, 21 percent of rural beneficiaries had access to a Medicare+Choice plan, compared to 97 percent of urban beneficiaries. Second, separating winners from losers is a basic function of competition. Thus, under a competitive premium approach, not all plans would thrive, requiring that provisions be made to protect beneficiaries enrolled in less successful plans. Effective Implementation Requires Capacity to Assess and Refine New Policies The fundamental nature of proposed Medicare reforms, such as adding a drug benefit or reshaping the program’s design, makes monitoring the effects of these changes a necessary responsibility. Today, however, major difficulties exist in measuring the effects of Medicare policies in a comprehensive and timely manner, making it difficult to assess the appropriateness of both program expenditures and provision of services. Although Medicare is the nation’s largest third-party payer, some of its vital information systems are decades old and operate on software no longer commonly used. These systems house a wealth of health and payment data but lack the flexibility to generate the kind of prompt and reliable reports that other large payers use to ensure health care quality and efficiency. This dearth of timely, accurate, and useful information hinders effective policymaking. This shortcoming is particularly significant in a program where small rate changes developed from faulty estimates can mean billions of dollars in overpayments or underpayments. Our work on BBA payment reforms shows the importance of data-driven analyses in determining the impact of policy changes. 
Providers affected by BBA-mandated lower rates, lower rate increases, or altogether new payment systems blamed the BBA for their financial difficulties and pressured the Congress to undo some of the act’s payment reforms. The Congress responded by making adjustments in subsequent legislation, but the affected providers argue that more changes are needed and call for higher payments on the basis of anecdotal evidence. Medicare analysts were ill-equipped to address these concerns through objective analysis because the necessary program data were not readily available. Our own reviews of BBA provisions and their impact showed that payments generally were adequate to cover providers’ Medicare costs and ensure beneficiary access, although we identified areas where refinements would improve the appropriateness of rates to individual providers. The lesson is that better information, promptly generated, can help policymakers understand the budgetary impact of policy changes and distinguish between desirable and undesirable consequences. Such information could, for example, reveal whether across-the-board rate increases are warranted or will result in overly generous payments for some and inadequate payments for others. Based on good data, refinements can help ensure that payments are not only adequate in the aggregate but also fairly targeted to protect individual beneficiaries and providers. The BBA experience underscores the need to rely on hard data and objective analyses rather than assertions and anecdotes. It also argues for the Congress to ensure that adequate resources are secured for efforts underway to modernize Medicare’s information systems and conduct needed research and analyses. Effective Leadership and Sufficient Capacity Are Critical to Success of Medicare Reform The extraordinary challenge of developing and implementing Medicare reforms should not be underestimated. 
Our look at health care spending projections shows that, with respect to Medicare reform, “getting it wrong” will have severe consequences. To get it right, effective program design will need to be coupled with competent program management. With that goal in mind, questions have been raised about the capacity of the Health Care Financing Administration (HCFA)—Medicare’s current steward—to administer the Medicare program effectively. Our reviews of Medicare program activities confirm the legitimacy of these concerns and suggest that changes may be necessary to HCFA’s focus, structure, resources, and operations. Several proposals have been made to address HCFA management shortcomings. One approach is to create an entity that would administer Medicare without any non-Medicare responsibilities. The rationale for this view is that HCFA’s other responsibilities—administering Medicaid, the State Children’s Health Insurance Program, and other oversight, enforcement, and credentialing programs—constitute a separate full-time job. In the meantime, effective Medicare management requires monitoring the claims payment and review activities of more than 50 contractors; setting thousands of payment rates for the various providers of Medicare-covered services; and administering consumer information and beneficiary protection activities for the traditional fee-for-service component and Medicare+Choice plans. Alternative approaches would divide the administration of Medicare’s components between HCFA and an entirely new entity. The intention would be to eliminate a conflict of interest that some perceive exists in having the same agency manage both the traditional fee-for-service and the managed care components. More details would be necessary before the Congress could consider the merits of one approach over another. Creating a new agency allows for a fresh start, eliminating the need to reengineer established practices.
The downside is that it typically takes years before a new agency acquires the personnel and infrastructure to become fully effective. In addition, it is questionable whether the perceived advantages of dividing Medicare’s administration would outweigh the inefficiencies that could result from duplication or coordination difficulties. Closely allied with the issue of agency restructuring is the question of agency leadership. Frequent changes in HCFA leadership make it difficult for the agency to develop and implement a consistent long-term vision. The maximum term of a HCFA administrator is, as a practical matter, only as long as that of the President who appointed him or her. Historically, their terms have been much shorter. In the 24 years since HCFA’s inception, there have been 20 administrators or acting administrators, whose tenure has been, on average, little more than 1 year. These short tenures have not been conducive to carrying out whatever strategic plans or innovations an individual may have developed for administering Medicare efficiently and effectively. Other federal agencies offer a precedent for an administrator’s tenure to span presidential administrations. For example, the FBI director’s term is 10 years and the Commissioner of Social Security’s term is 6 years. A benefit of similarly lengthening the HCFA administrator’s tenure would be to better insulate the program from short-term political pressures. No matter how well-conceived or how well-led, however, no agency can function effectively without adequate resources and appropriate accountability mechanisms. Over the years, HCFA’s administrative dollars have been stretched thinner as the agency’s mission has grown. Adequate resources are vital to support the kind of oversight and stewardship activities that Americans have come to count on—inspection of nursing homes and laboratories, certification of Medicare providers, collection and analysis of critical health care data, to name a few.
We and other health policy experts, including several former HCFA administrators, contend that too great a mismatch between the agency’s administrative capacity and its designated mandate will leave HCFA unprepared to handle Medicare reforms and future population growth. In 1999, Medicare’s operating expenses represented less than 2 percent of the program’s benefit outlays. Although private insurers incur other costs, such as those for advertising, and seek to earn a profit, they would not attempt to manage such a large and complex program with so comparatively small an administrative budget. It is not yet clear whether a successfully administered Medicare program requires reengineering HCFA, creating an entirely new agency, or some combination of the two options. What is clear, however, is that the program’s effective governance rests on finding a balance between flexibility and accountability—that is, granting an entity adequate flexibility to act prudently and ensuring that the entity can be held accountable for its results-based decisions and their implementation. Moreover, because Medicare’s future will play such a significant role in the future of the American economy, we cannot afford to settle for anything less than a world-class organization to run the program. However, achieving such a goal will require a clear recognition of the fundamental importance of efficient and effective day-to-day operations. Conclusions In determining how to reform the Medicare program, much is at stake— not only the future of Medicare itself but also assuring the nation’s future fiscal flexibility to pursue other important national goals and programs. I feel that the greatest risk lies in doing nothing to improve the Medicare program’s long-term sustainability. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. 
Engaging in a comprehensive effort to reform the Medicare program and put it on a sustainable path for the future would help fulfill this generation’s stewardship responsibility to succeeding generations. It would also help to preserve some capacity for future generations to make their own choices for what role they want the federal government to play. While not ignoring today’s needs and demands, we should remember that surpluses can also serve as an occasion to promote the transition to a more sustainable future for our children and grandchildren. Updating Medicare’s benefit package may be a necessary part of any realistic reform program. Such changes, however, need to be considered in the context of Medicare’s long-term fiscal outlook and the need to make changes in ways that will promote the program’s longer-term sustainability. We must remember that benefit expansions are often permanent, while belt-tightening payment reforms, which are vulnerable to erosion, could be scaled back or discarded altogether. The BBA experience reminds us about the difficulty of undertaking reform. Specifically, we must acknowledge that adding prescription drug coverage to the Medicare program would have a substantial impact on program costs. At the same time, many believe it is needed to ensure the financial well-being and health of many of its beneficiaries. The challenge will be in designing and implementing drug coverage that will minimize the financial implications for Medicare while maximizing the positive effect of such coverage on Medicare beneficiaries. Most importantly, any substantial benefit reform should be coupled with other meaningful program reforms that will help to ensure the long-term sustainability of the program.
In the end, the Congress should consider adopting a Hippocratic oath for Medicare reform proposals—namely, “Don’t make the long-term outlook worse.” Ultimately, we will need to engage in a much more fundamental health care reform debate to differentiate wants, which are virtually unlimited, from needs, which should be defined and addressed, and overall affordability, of which there is a limit. We at GAO look forward to continuing to work with this Committee and the Congress in addressing this and other important issues facing our nation. In doing so, we will be true to our core values of accountability, integrity, and reliability. Chairman Grassley and Ranking Member Baucus, this concludes my prepared statement. I will be happy to answer any questions you or other Members of the Committee may have. GAO Contacts and Acknowledgments For future contacts regarding this testimony, please call William J. Scanlon, Health Care Issues, at (202) 512-7114 or Paul L. Posner, Federal Budget and Intergovernmental Relations, at (202) 512-9573. Other individuals who made key contributions include Linda F. Baker, James C. Cosgrove, Paul Cotton, Hannah F. Fein, James R. McTigue, and Melissa Wolf. (290042)
Background In March 2009, Treasury issued the first HAMP guidelines for modifying first-lien mortgages in an effort to help homeowners avoid foreclosure. The goal of the first-lien mortgage modification program is to reduce struggling homeowners’ mortgage payments to more affordable levels—specifically to 31 percent of the borrower’s income. To reduce mortgage payments, servicers may modify the loan by lowering the interest rate, extending the amortization period, or forbearing principal. According to Treasury officials, the program was intended to offer reduced monthly payments to up to 3 to 4 million homeowners. Through December 2010, there were a total of 143 active servicers under the TARP-funded portion of HAMP. Through December 2010, over 1.7 million HAMP trial modification offers had been extended to borrowers, and nearly 1.5 million borrowers had begun HAMP trial modifications. Of the trial modifications begun, approximately 152,000 were active trial modifications, and roughly 522,000 were active permanent modifications. Approximately 735,000 trial modifications and around 58,000 permanent modifications had been canceled (fig. 1). As of December 31, 2010, $1 billion in TARP funds had been disbursed for TARP-funded housing programs, of which roughly $840 million was disbursed to servicers for HAMP-related activity. Most of the disbursements to date have been made for the first-lien modification program. In addition to first-lien modifications, Treasury has announced a number of TARP-funded housing programs, including those for modifying second liens held by borrowers with first-lien modifications under HAMP, reducing principal, offering temporary forbearance for unemployed borrowers, and providing alternatives to foreclosure (see table 1).
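The payment-reduction mechanics can be sketched with the standard amortization formula. The step size, 2 percent rate floor, and 480-month term cap below are simplifying assumptions for illustration, not the program's full modification waterfall (principal forbearance, the remaining lever, is omitted):

```python
def monthly_payment(principal, annual_rate, months):
    """Fixed-rate mortgage payment from the standard amortization formula."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

def modify_to_target(principal, annual_rate, months, gross_monthly_income):
    """Adjust loan terms until the payment is at most 31% of income.

    Simplified waterfall: cut the rate in 0.125-point steps down to an
    assumed 2% floor, then extend the term toward an assumed 480-month
    maximum. Returns the resulting (rate, term, payment).
    """
    target = 0.31 * gross_monthly_income
    rate, term = annual_rate, months
    while monthly_payment(principal, rate, term) > target and rate > 0.02:
        rate = max(rate - 0.00125, 0.02)
    while monthly_payment(principal, rate, term) > target and term < 480:
        term = min(term + 12, 480)
    return rate, term, monthly_payment(principal, rate, term)

# Hypothetical borrower: $200,000 balance at 6.5% with 30 years left,
# earning $3,500 a month (target payment: $1,085).
rate, term, payment = modify_to_target(200_000, 0.065, 360, 3_500)
assert payment <= 0.31 * 3_500
```

For this hypothetical borrower, rate reduction alone reaches the 31 percent target; a borrower with a deeper shortfall would also need the term extension or, in the actual program, principal forbearance.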
At the current time, with the exception of the Housing Finance Agency (HFA) Hardest-Hit Fund, the cutoff date for borrowers to be accepted into TARP-funded programs is December 31, 2012, and disbursements of TARP funds may continue until December 2017. Servicers Have Been Inconsistent in Soliciting and Evaluating HAMP Borrowers and More Treasury Action Is Needed to Ensure Equitable Treatment of Borrowers with Similar Circumstances Although one of Treasury’s stated goals for HAMP is to standardize the loan modification process across the servicing industry, in our June 2010 report, we identified several inconsistencies in the way servicers treated borrowers under HAMP that could lead to inequitable treatment of similarly situated borrowers. First, because Treasury did not issue guidelines for soliciting borrowers for HAMP until a year after announcing the program, the servicers we contacted solicited borrowers differently. A few solicited those who were 31 days delinquent on their payments, but other servicers waited until borrowers were at least 60 days delinquent. We also noted that many borrowers had complained they did not receive timely responses to their HAMP applications and had difficulty obtaining information about the program. In March 2010, Treasury issued guidelines to address some of the issues related to communicating with borrowers about the program, and said it planned to monitor servicers’ compliance with the guidelines. Second, Treasury’s lack of specific guidelines for determining HAMP eligibility for borrowers who are current or less than 60 days delinquent but in imminent danger of defaulting has led to inconsistencies in how servicers evaluate them. The 10 servicers that GAO contacted reported seven different sets of criteria for determining imminent default.
Two servicers considered borrowers in imminent default if they met basic HAMP eligibility requirements, but other servicers had additional criteria, such as requiring that a hardship situation has existed for more than 1 year. Treasury’s goal was to create uniform, clear, and consistent guidance for loan modifications across the servicing industry, but these differences may result in one borrower’s being approved for HAMP and another borrower with the same financial situation and loan terms being denied by a different servicer. We recommended that Treasury establish clear, specific criteria for determining whether a borrower was in imminent default to ensure greater consistency across servicers. However, Treasury believes the impact of these variations on borrowers is inconsequential and has declined to implement this recommendation. We continue to believe that further actions are warranted. In addition, Treasury has not clearly informed borrowers that they can use the HOPE Hotline to raise concerns about servicers’ handling of HAMP loan modifications and to challenge potentially incorrect denials, likely limiting the number of borrowers who have used the hotline for these purposes. The HOPE Hotline also has procedures for referring borrowers who need additional assistance to the Making Home Affordable Escalation Team. However, it is unclear whether borrowers are aware of and using the HOPE Hotline to raise concerns about their servicers and challenge potentially incorrect denials. We recommended that Treasury (1) more clearly inform borrowers that the HOPE Hotline may also be used for these purposes and (2) monitor the effectiveness of the HOPE Hotline as a process for handling borrower concerns. Finally, Treasury has taken some steps to ensure that servicers comply with HAMP program requirements, but has yet to establish specific remedies for noncompliance with HAMP guidelines. 
For instance, the HAMP servicer participation agreement describes actions that could be taken in response to noncompliance and the HAMP Compliance Committee monitors servicers’ performance and activities. But without standardized remedies for noncompliance, Treasury risks treating servicer noncompliance inconsistently and its methods of responding to incidents of noncompliance lack transparency. In our June 2010 report, we recommended that Treasury finalize and expeditiously issue consequences for servicers who do not comply with HAMP requirements. We made eight recommendations to improve the transparency and accountability of HAMP in June 2010. Treasury stated that it intended to implement some of the recommendations, but little action has been taken to date. Implementation Challenges Have Affected the Progress of Treasury’s Newer Housing Programs The implementation of Treasury’s 2MP, HAFA, and PRA programs has been slow, and limited activity has been reported to date. This slow pace is attributed in part to several implementation challenges, including the following. Difficulty matching first and second liens for 2MP. Because eligibility for 2MP required a first-lien HAMP modification, Treasury contracted with a database vendor to provide data on existing second liens that corresponded with these modifications. However, the servicers we contacted noted that even differences in the spelling of addresses—for example, in abbreviations or spacing—could prevent an accurate identification. Initial 2MP guidelines stated that servicers could not offer a second-lien modification without confirming a match with the database vendor, even if they had serviced both first and second liens on the same property. 
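The matching failure the servicers described is essentially a string-normalization problem. A minimal sketch follows; the database vendor's actual matching rules are not described in this statement, so the normalization table below is hypothetical:

```python
import re

# Hypothetical abbreviation table; a production matcher would use a far
# more extensive list (e.g., the USPS standard street suffixes).
_ABBREVIATIONS = {"ST": "STREET", "AVE": "AVENUE", "RD": "ROAD", "DR": "DRIVE",
                  "N": "NORTH", "S": "SOUTH", "E": "EAST", "W": "WEST"}

def normalize_address(address):
    """Uppercase, drop punctuation, collapse spacing, expand abbreviations."""
    tokens = re.sub(r"[^\w\s]", " ", address.upper()).split()
    return " ".join(_ABBREVIATIONS.get(token, token) for token in tokens)

def liens_match(first_lien_address, second_lien_address):
    return normalize_address(first_lien_address) == normalize_address(second_lien_address)

# A spelling or spacing difference defeats exact comparison
# but survives normalization.
assert "123 N. Main St." != "123 North  Main Street"
assert liens_match("123 N. Main St.", "123 North  Main Street")
```

The point of the sketch is that without some such canonical form, the two property records above would be treated as different properties, which is the failure mode the servicers reported.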
In November 2010, Treasury provided updated program guidance that allowed servicers to offer a 2MP modification if they could identify a first- and second-lien match within their own portfolio or had evidence that a corresponding first lien existed, even if the database had not identified it. Extensive program requirements for HAFA. All six of the large MHA servicers we spoke with identified extensive program requirements as reasons for the slow implementation of HAFA, including the initial requirement that applicants first be evaluated for a HAMP first-lien modification. Because of this requirement, potential HAFA borrowers had to submit extensive income and other documentation required for a modification, even if they simply wanted to sell. In cases where a borrower had already identified a potential buyer before executing a short-sale agreement with the servicer, the additional time required for a HAMP first-lien evaluation may have dissuaded the buyer from purchasing the property. Restrictive short-sale requirements and a requirement that mortgage insurers waive certain rights may have also contributed to the limited activity under HAFA. Servicers said that given these requirements, they did not expect HAFA to increase their overall number of short sales and deeds-in-lieu. In response to this concern, Treasury released updated HAFA guidance on December 28, 2010, to no longer require servicers to document and verify a borrower’s financial information to be eligible for HAFA. Voluntary nature of the PRA program. Treasury officials told us that 13 of the 20 largest MHA servicers were planning to offer principal reduction to some extent, but some servicers we spoke with said they would limit the conditions under which they would offer principal forgiveness under PRA.
Treasury’s PRA guidelines require all servicers participating in HAMP to consider principal forgiveness for HAMP-eligible borrowers with mark-to-market loan-to-value ratios (LTV) greater than 115 percent. But servicers are not required to offer principal reduction, even if the net present value (NPV) is higher when principal is forgiven. For example, one servicer had developed a “second look” process that used internal estimates of default rates to determine NPV and did not forgive principal unless these estimates—not those calculated using program guidelines—indicated a higher NPV with forgiveness. As a result, only 15 to 25 percent of those who otherwise would have received principal forgiveness received it, according to this servicer. We recommended in June 2010 that Treasury report activity under PRA, including the extent to which servicers determined that principal reduction was beneficial to mortgage investors but did not offer it, to ensure transparency in the implementation of this program. Treasury officials told us they would report PRA activity at the servicer level once the data were available. We plan to continue to monitor Treasury’s reporting of PRA and other TARP-funded housing programs. In addition, we found that Treasury could do more to build on the lessons learned from its first-lien modification program. For example, we previously reported that Treasury had not sufficiently assessed the capacity of servicers to implement the first-lien program. More recently, we observed that Treasury has not obtained all required documentation to demonstrate that servicers have the capacity to successfully implement the newer programs. According to Treasury, Fannie Mae has conducted program-specific readiness reviews for the top-20 large servicers for 2MP, HAFA, and PRA, including all 17 servicers participating in 2MP.
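The PRA decision rule discussed above, consider forgiveness above a 115 percent mark-to-market LTV but offer it only voluntarily, can be sketched as follows. The NPV figures are placeholders for the program's actual NPV model, which is far more involved:

```python
def must_consider_forgiveness(mtm_ltv_percent):
    """PRA requires considering forgiveness above a 115% mark-to-market LTV."""
    return mtm_ltv_percent > 115

def servicer_offers_forgiveness(npv_with, npv_without, mtm_ltv_percent):
    """Sketch of the voluntary decision: even when forgiveness must be
    considered and yields the higher NPV, offering it is not required;
    this hypothetical servicer offers it whenever its NPV test is positive.
    """
    if not must_consider_forgiveness(mtm_ltv_percent):
        return False
    return npv_with > npv_without  # a real servicer may still decline

# Hypothetical loan: 130% mark-to-market LTV, forgiveness NPV-positive.
assert must_consider_forgiveness(130)
assert servicer_offers_forgiveness(npv_with=82_000, npv_without=78_000,
                                   mtm_ltv_percent=130)
```

The "second look" practice described above amounts to substituting a different `npv_with` estimate into this comparison, which is how the servicer arrived at forgiving principal in only 15 to 25 percent of otherwise qualifying cases.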
These reviews assess servicers’ operational readiness, including the development of key controls to support new programs, technology readiness, training readiness, staffing resources, and program processes and documentation. According to Treasury’s summary of these reviews, of those that had completed reviews, 4 had provided all required documents for HAFA, and 3 had provided all required documents for PRA. None of the servicers provided all required documents for 2MP. As a result, servicers’ ability to effectively offer troubled homeowners second-lien modifications, foreclosure alternatives, and principal reductions is unclear. Further, Treasury has not implemented our June 2010 recommendation that it establish goals and effective performance measures for these programs, nor has it said how it will use any assessments to hold servicers accountable for their performance. Treasury also has not established actions it will take in cases where individual servicers are not performing as expected in these programs. As we noted in June 2010, without performance measures and goals, Treasury will not be able to effectively assess the outcomes of these programs. HAMP Borrowers Shared Several Characteristics, Including Reduced Income; Early Data Indicate that Borrowers Who Redefaulted from Permanent Modifications Were Further Into Delinquency Our analysis of HAMP data for borrowers in trial and permanent modifications indicated that over half of borrowers cited curtailed income, such as reduced pay, as the primary reason for needing to lower their mortgage payments (56 percent of borrowers in active modifications and 53 percent in trial modifications). However, only 5 percent of borrowers in each of these groups cited unemployment as their primary reason for financial hardship. 
Borrowers also had high levels of debt prior to modification, with monthly mortgage payments that were 45 and 46 percent of gross monthly income, respectively, and total debt levels of 72 and 76 percent of gross monthly income, respectively. Even after modification, these borrowers continued to have high debt levels (median back-end debt-to-income (DTI) ratios of 55 and 57 percent for those in trial and permanent modifications, respectively). In addition, borrowers in trial and permanent modifications tended to be “underwater,” with median mark-to-market LTV ratios of 123 percent and 128 percent, respectively. Borrowers who were either canceled from a trial modification or redefaulted from a permanent one shared several of these characteristics, including having high debt levels and being “underwater” on their mortgages. However, some characteristics appeared to increase the likelihood that a borrower would be canceled from a trial modification. For example, borrowers who received a trial modification based on stated income were 52 percent more likely to be canceled from trial modifications than those who started a trial modification based on documented income. In some cases, borrowers who received trial modifications based on stated income were not able to or failed to provide proof of their income or other information for conversion to permanent modification. In other cases, borrowers may have submitted the required documentation but the servicer lost the documents. In addition, borrowers who were 60 or 90 days or more delinquent at the time of their trial modifications were 6 and 9 percent more likely to have trial modifications canceled, respectively, compared with borrowers who were not yet delinquent at the time of their trial modifications. Treasury has acknowledged the importance of reaching borrowers before they are seriously delinquent by requiring servicers to evaluate borrowers still current on their mortgages for imminent default.
But, as we noted in June 2010, this group of borrowers may be defined differently by different servicers. Borrowers who had high mark-to-market LTV ratios (from 120 to 140 percent) were 7 percent less likely to be canceled from trial modifications than those with mark-to-market LTV ratios at or below 80 percent, and those with a mark-to-market LTV ratio of more than 140 percent were 8 percent less likely to be canceled. Borrowers who received principal forgiveness of between 1 and 50 percent of their total loan balance were less likely to be canceled from trial modifications compared with those who did not receive principal forgiveness. In addition, larger monthly payment reductions lowered the likelihood that a trial modification would be canceled. For example, borrowers who received a principal and interest payment reduction of at least 10 percent were less likely to be canceled from their trial modifications than other borrowers. Our initial observations of over 15,000 non-GSE borrowers who had redefaulted from permanent HAMP modifications through September 2010 indicated these borrowers differed from those in active permanent modifications in several respects. Specifically, non-GSE borrowers who redefaulted on their HAMP permanent modifications tended to have higher levels of delinquency at the time they were evaluated for a trial modification (median delinquency of 8 months compared to 5 months for those still in active permanent modifications), lower credit scores, and lower median percentage of payment reduction compared with those who were still current in their permanent modifications (24 percent compared with 33 percent). These borrowers may have received smaller reductions in their payments because they had lower debt levels before modification than borrowers who did not redefault.
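The affordability ratios this analysis relies on follow standard mortgage-industry definitions. A minimal sketch with hypothetical figures chosen to resemble the medians reported above (the function names and the borrower numbers are illustrative assumptions, not Treasury's data or implementation):

```python
# Illustrative definitions of the ratios discussed above; not Treasury's code.

def front_end_dti(monthly_mortgage_payment, gross_monthly_income):
    """Front-end DTI: mortgage payment as a share of gross monthly income."""
    return monthly_mortgage_payment / gross_monthly_income

def back_end_dti(total_monthly_debt, gross_monthly_income):
    """Back-end DTI: all monthly debt obligations as a share of income."""
    return total_monthly_debt / gross_monthly_income

def mark_to_market_ltv(unpaid_principal_balance, current_property_value):
    """Mark-to-market LTV: loan balance relative to the home's current
    (not original) value; a ratio above 100% means "underwater"."""
    return unpaid_principal_balance / current_property_value

# Hypothetical pre-modification borrower resembling the reported medians
income = 4000.0      # gross monthly income
payment = 1800.0     # monthly mortgage payment
total_debt = 2880.0  # all monthly debt obligations
balance = 246000.0   # unpaid principal balance
value = 200000.0     # current property value

print(f"front-end DTI: {front_end_dti(payment, income):.0%}")        # 45%
print(f"back-end DTI:  {back_end_dti(total_debt, income):.0%}")      # 72%
print(f"LTV:           {mark_to_market_ltv(balance, value):.0%}")    # 123%
```

On these illustrative inputs, the borrower matches the profile described above: a 45 percent front-end DTI, a 72 percent back-end DTI, and a 123 percent mark-to-market LTV (underwater).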
Most Borrowers Denied or Canceled from Trial Modifications Have Avoided Foreclosure to Date, but Limits to Treasury’s Data Make Understanding Their Outcomes Difficult

We requested data from six servicers on the outcomes of borrowers who (1) were denied a HAMP trial modification, (2) had their trial modification canceled, or (3) redefaulted from a HAMP permanent modification. According to the data we received, of the about 1.9 million GSE and non-GSE borrowers who were evaluated for a HAMP modification by these servicers as of August 31, 2010, 38 percent had been denied a HAMP trial modification, 27 percent had seen their HAMP trial modifications canceled, and 1 percent had redefaulted from a HAMP permanent modification. According to these servicers’ data, borrowers who were denied HAMP trial modifications were more likely to become current on their mortgages without any additional help from the servicer (39 percent) than to have any other outcome. Of those borrowers who were canceled from a HAMP trial modification, servicers often initiated actions that could result in the borrower retaining the home. Specifically, 41 percent of these borrowers had received or were in the process of receiving a permanent proprietary modification, and 16 percent had received or were in the process of receiving a payment plan. However, servicers started foreclosure proceedings on 27 percent of borrowers at some point after the HAMP trial modification was canceled, but only 4 percent of these borrowers completed foreclosure. Compared with borrowers who were denied, borrowers who had a HAMP trial modification canceled were less likely to become current on their mortgages (15 percent) or to pay off their loan (4 percent). Finally, though data are limited, of the borrowers who redefaulted from a HAMP permanent modification, almost half were reflected in categories other than proprietary modification, payment plan, becoming current, foreclosure alternative, foreclosure, or loan payoff.
Twenty-eight percent of borrowers who redefaulted from permanent modifications were referred for foreclosure at some point after redefaulting, but, like borrowers denied or canceled from a HAMP trial modification, the percentage of borrowers who completed foreclosure remained low relative to other outcomes (less than 1 percent). Unlike borrowers who were denied or canceled, borrowers who redefaulted were less likely to receive or be in the process of receiving a permanent proprietary modification or payment plan after redefaulting, with 27 percent of borrowers receiving or in the process of receiving one of these outcomes. In addition, less than 1 percent of borrowers who redefaulted had become current as of August 31, 2010. We also looked at data that Treasury had begun reporting on the disposition paths of borrowers who were denied or canceled from HAMP trial modifications. However, weaknesses in how Treasury requires servicers to report data make it difficult to understand the current status of these borrowers. First, Treasury’s system for reporting the disposition of borrowers requires servicers to place borrowers in only one category, even when borrowers are being evaluated for several possible dispositions, with non-HAMP (proprietary) modifications reported first. As a result, the proportion of borrowers reported as receiving proprietary modifications is likely to be overstated relative to other possible dispositions, such as foreclosure starts. Further, Treasury does not require servicers to distinguish between completed and pending actions, so some reported outcomes may not be clear. For example, we asked six large servicers to separate borrowers who had a HAMP trial modification canceled into two groups: those who were being evaluated for permanent proprietary modifications and those who had actually received them.
The servicers’ data indicated that 23 percent of these borrowers were in the process of being approved for proprietary modifications, and 18 percent had received one. At the same time, Treasury reported that 43 percent of borrowers canceled from a HAMP trial modification through August 2010 were in the process of obtaining a proprietary modification. Servicers told us they had been able to offer more proprietary modifications than HAMP permanent modifications because proprietary modifications offered greater flexibility. For example, several servicers told us that their proprietary modification programs had fewer documentation requirements, and several said their proprietary modifications had fewer eligibility requirements, such as restrictions on occupancy type, allowing them to help borrowers when HAMP could not. In addition, while HAMP guidelines require borrowers to have a mortgage payment exceeding 31 percent of their income, all of the servicers we spoke with indicated their proprietary modification programs also served borrowers who had lower payment ratios. While the number of proprietary modifications has outpaced the number of HAMP modifications, the sustainability of both types of modifications is still unclear. For example, proprietary modifications may not reduce monthly mortgage payments as much as HAMP modifications, potentially affecting the ability of borrowers to maintain their modified payments. In summary, we reported in June 2010 that it would be important for Treasury to expeditiously implement a prudent design for the remaining TARP-funded housing programs. Our current work shows there is more Treasury can do to ensure the effective implementation of these programs, including ensuring that servicers have sufficient capacity to implement them, and that borrowers are notified about potential eligibility for second-lien modifications.
We also believe it will be important for Treasury to have clear and accurate information on the dispositions of borrowers who are denied or fall out from HAMP modifications. Without accurate reporting of borrower outcomes, Treasury cannot know the actual extent to which borrowers who are denied, canceled, or redefaulted from HAMP are helped by other programs or evaluate the need for further action to assist this group of homeowners. We provided a copy of our current draft report to Treasury for its review and comment. Treasury acknowledged the report’s description of servicers’ challenges and appreciated our assessment of Treasury’s housing programs. Treasury indicated that the draft report raised certain criticisms of the design and implementation of MHA that were unwarranted. We continue to believe there are opportunities to improve the transparency, accountability, and effectiveness of MHA, and we anticipate issuing the report this month, March 2011. We will continue to monitor Treasury’s implementation and management of TARP-funded housing programs as part of our ongoing oversight of the performance of TARP in meeting its legislative goals. We are also conducting a broad-based study of the federal government’s efforts to mitigate the impact of foreclosures, which will include an assessment of how federal foreclosure mitigation efforts or alternatives might better preserve homeownership, prevent avoidable foreclosures, and otherwise help resolve troubled mortgages. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, I appreciate this opportunity to discuss this important program and would be happy to answer any questions that you may have. Thank you.

GAO Contact and Staff Acknowledgments

For information on this testimony, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
GAO staff who made major contributions to this statement include Lynda Downing, Harry Medina, John Karikari (Lead Assistant Directors); Tania Calhoun; Emily Chalmers; William Chatlos; Grace Cho; Rachel DeMarcus; Marc Molino; Mary Osorno; Jared Sippel; Winnie Tsen; Jim Vitarello; and Heneng Yu. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our work on the Making Home Affordable (MHA) program, including the Home Affordable Modification Program (HAMP). Since the Department of the Treasury (Treasury) first announced the framework for its MHA program over 2 years ago, the number of homeowners facing potential foreclosure has remained at historically high levels. HAMP, the key component of MHA, provides financial incentives to servicers and mortgage holders/investors to offer modifications on first-lien mortgages. The modifications are intended to reduce borrowers' monthly mortgage payments to affordable levels to help these homeowners avoid foreclosure and keep their homes. Since HAMP's inception, concerns have been raised that the program is not reaching the expected number of homeowners. In two prior reports, we looked at the implementation of the HAMP first-lien modification program, noted that Treasury faced challenges in implementing it, and made several recommendations intended to address these challenges. In addition, our ongoing work examines the extent to which additional MHA programs have been successful at reaching struggling homeowners, the characteristics of homeowners who have been assisted by the HAMP first-lien modification program, and the outcomes for borrowers who do not complete HAMP trial or permanent modifications. These programs include the Second-Lien Modification Program (2MP) for those whose first liens have been modified under HAMP, the Home Affordable Foreclosure Alternatives (HAFA) program for those who are not successful in HAMP modifications, and the Principal Reduction Alternative (PRA) program for borrowers who owe more on their mortgages than the value of their homes. This testimony is based on the report on HAMP that we issued in June 2010, as well as on preliminary observations from our ongoing work.
Specifically, this statement focuses on (1) the extent to which HAMP servicers have treated borrowers consistently and the actions that Treasury and its financial agents have taken to ensure consistent treatment; (2) the status of Treasury's second-lien modification, foreclosure alternatives, and principal reduction programs; (3) the characteristics of borrowers who received HAMP modifications; and (4) outcomes for borrowers who are denied or fall out of HAMP trial or permanent first-lien modifications. In June 2010, we reported on several inconsistencies in the way servicers treated borrowers under HAMP that could lead to inequitable treatment of similarly situated borrowers. These inconsistencies involved how servicers solicited borrowers for the program, how they evaluated borrowers who were not yet 60 days delinquent on their mortgage payments, and how they handled borrower complaints. In addition, we noted that while Treasury had taken some steps to ensure servicer compliance with program guidance, it had not yet finalized consequences for servicer noncompliance. We made eight recommendations to improve the transparency and accountability of HAMP in June 2010. Treasury stated that it intended to implement some of the recommendations, but little action has been taken to date. Further, as part of our ongoing work, we identified several challenges that had slowed implementation of newer MHA programs, specifically 2MP, HAFA, and the Principal Reduction Alternative (PRA). For example, we found that servicers experienced difficulties in using a required database to identify borrowers who might be eligible for 2MP, contributing to a slow start for this program.
We found that borrowers who were in HAMP trial or permanent modifications tended to share certain characteristics, such as reduced income and having high debt levels, and that those who were canceled from trial modifications or redefaulted from permanent modifications tended to be further into delinquency at the time of their modifications. Lastly, we found that many borrowers who were denied or fell out of HAMP modifications had been able to avoid foreclosure to date. But weaknesses in how Treasury reports the disposition paths, or outcomes, for these borrowers make it difficult to understand exactly what has happened to these homeowners.
Reaping the Benefits of Technology Is Central to Controlling Costs and Providing Better Services

One of the six major categories in our high risk series is obtaining an adequate return on multibillion dollar investments in information technology. We added this category in 1995 because we continued to find major system development projects that greatly exceed estimated costs, fall years behind schedule, and fail to achieve operational goals. These failures have left the Congress and executive branch severely handicapped by the lack of reliable data. Moreover, huge opportunities have been lost to use technology to reduce federal operating costs and improve program performance. The effective use of information technology is integral in some way to solving problems in all the high-risk areas mentioned in our 1997 series. The seriousness of these information management problems is underscored by the fact that nearly every aspect of over $1.5 trillion in annual federal government operations depends on information systems. Additionally, the American public, enjoying the everyday benefits of technology-driven service improvements in the private sector, is becoming increasingly frustrated with poor performance from federal agencies. In our 1997 high risk report on information management and technology, we focus on four major modernization efforts that provide a vivid study in technology management problems that, unfortunately, are all too typical across the federal government. The Internal Revenue Service (IRS) has spent or obligated over $3 billion since 1986 on its Tax Systems Modernization (TSM), which is designed to overhaul the paper-intensive approach to tax return processing. We reported in 1995 that the modernization lacked basic elements needed to bring it to a successful conclusion, such as a comprehensive business strategy for reducing paper filings and the requisite management, software development, and technical infrastructure.
We made over a dozen recommendations to address these weaknesses, including implementing (1) a sound process to manage technology investments, (2) disciplined procedures for software requirements management, and (3) an integrated systems architecture. We reported in June and September 1996 that IRS had initiated many activities to improve its modernization efforts but had not fully implemented any of our recommendations. The Congress subsequently directed IRS to establish a schedule for implementing GAO’s recommendations. It also required regular status reports on corrective actions and TSM spending. IRS and the Department of the Treasury have taken steps to address our recommendations and respond to congressional direction, but further concerted, sustained improvement efforts are needed. For over 15 years, the Federal Aviation Administration’s (FAA) $34-billion air traffic control (ATC) modernization has experienced cost overruns, schedule delays, and performance shortfalls. Though FAA has recently made important progress on aspects of the modernization, some serious problems remain. Most notably, this large effort has long proceeded without the benefit of a complete systems architecture to guide the modernization’s development and evolution. Among other things, this lack of a technical blueprint has led to unnecessarily higher spending to buy, integrate, and maintain hardware and software. We have recommended that FAA develop and enforce a complete systems architecture. Exacerbating the modernization’s problems is unreliable information on costs—both future estimates of costs and accumulations of actual costs. We have recommended that FAA institutionalize a defined cost process and develop and implement a managerial cost accounting capability. 
The Department of Defense’s (DOD) Corporate Information Management (CIM) effort was supposed to save billions of dollars by streamlining operations and implementing standard information systems in areas such as materiel management, personnel, finance, and transportation. But after 8 years and $20 billion in spending on CIM, DOD has yet to meet its savings goals, largely because of its failure to implement sound management practices for CIM. We have recommended that DOD (1) better link system modernization projects to business process improvement efforts, (2) establish plans and performance measures and clearly defined roles and responsibilities for implementing CIM, (3) improve controls over information technology investments, and (4) not initiate system improvement projects without sound economic and technical analyses. DOD has yet to successfully implement these recommendations and continues to spend billions of dollars on system migration projects with little sound analytical justification. Recently, however, DOD has begun an initiative to better manage its technology investments using its planning, programming, and budgeting system. Similarly, the National Weather Service (NWS) has yet to resolve serious problems with its $4.5-billion modernization effort. New radars are not always up and running when severe weather is threatening and ground-based sensors fall short of performance and user expectations. We have recommended several actions for correcting these problems and have also recommended that NWS improve its technical capabilities to design and manage the modernization. NWS has addressed some of our concerns in these areas, but others remain. We also recommended that NWS establish a sound decision-making process for managing the modernization’s massive investment and getting promised returns from technology. Finally, the modernization effort has long gone without a systems architecture to guide it.
In response to our recommendations, NWS has begun to develop a technical blueprint for the modernization. However, until a systems architecture is developed and enforced, the modernization will continue to incur higher system development and maintenance costs. Correcting problems in these four major modernization efforts is important. But we also recognize the need to address and overcome the root causes of the government’s chronic information management problems. To do this, GAO has worked closely with the Congress and the administration to fundamentally revamp and modernize federal information management practices. We studied information management practices at leading public-sector and private-sector organizations—ones that have dramatically improved their performance and met mission goals through the use of technology. In our executive guide to improving information management, we identified proven techniques used by these successful organizations and developed an integrated set of information management practices for federal agencies. The 104th Congress used these best practices to craft the first major information management reform legislation in over a decade: the Paperwork Reduction Act of 1995 (PRA) and the Clinger-Cohen Act of 1996. These laws emphasize involving senior executives in information management decisions, establishing senior-level Chief Information Officers, tightening controls over technology spending, redesigning inefficient work processes, and using performance measures to assess technology’s contribution to achieving mission results. These management practices provide agencies—such as IRS for tax systems—a practical means of addressing their information problems, maximizing benefits from technology spending, and controlling the risks of system development efforts. Past experience has shown that the early days following the passage of reform legislation are telling. 
Let me quickly highlight areas where this Committee can ensure that these reforms get off to a strong start.

Executive Leadership Is Crucial

In the successful organizations we studied, senior executives were personally committed to improving the management of technology. They recognized that information management needed to be incorporated into an executive-level management framework that included mission planning, goal setting, budgeting, and performance improvement. Both the PRA and the Clinger-Cohen Act incorporate this practice by making agency heads directly responsible for establishing goals for using information technology to improve the effectiveness of agency operations and service to the public, measuring the actual performance and contribution of technology in supporting agency programs, and including with their agencies’ budget submissions to the Office of Management and Budget (OMB) a report on their progress in meeting operational improvement goals through the use of technology.

Qualified Chief Information Officers Are Needed Throughout Government

The PRA requires major agencies to appoint well-qualified Chief Information Officers (CIO) who report directly to agency heads. The CIO is responsible for working with the agency head and other senior managers to (1) promote improvements to work processes used to carry out programs, (2) implement an adequate information technology architecture, and (3) strengthen the agency’s capabilities to deal with emerging technology issues and develop effective information systems. Getting the right people in place will make a real difference in implementing lasting management reforms. CIOs should have knowledge of and practical experience in using technology to produce major improvements in performance. This year, the Congress should expect to see well-qualified CIOs making clear progress in implementing the reforms.
CIOs should also be active in identifying the technical capabilities that their agencies need to acquire and manage information resources in a disciplined manner to better control risks and achieve desired outcomes.

Improved Investment Controls Are Vital

Leading organizations manage information technology projects as important investments. Top executives periodically assess all major projects, prioritize them, and make funding decisions based on factors such as cost, risk, return on investment, and support of mission-related outcomes. Once projects are selected for funding, executives monitor them continually, taking quick action to resolve development problems and mitigate risks. After a project is implemented, executives evaluate actual versus expected results and revise their investment management process based on lessons learned. The PRA and the Clinger-Cohen Act incorporate these investment practices. Agency heads and CIOs should be designing and implementing a structure for maximizing the value and managing the risk of technology investments by (1) selecting, controlling, and evaluating investments using sound criteria; (2) modernizing work processes before making significant technology investments; and (3) building large, complex systems in a modular fashion. Last month, GAO issued a comprehensive guide for agencies to use in assessing how well they are selecting and managing their information technology resources. The guide, which is based on best practices, will be instrumental in helping agencies identify specific areas for improving their investment process to maximize the returns on technology spending and better control system development risks. As part of its review of fiscal year 1998 budget proposals, the Congress should look for clear evidence that agencies have established sound investment processes and explore agencies’ track records in achieving performance improvements from technology.
Congressional committees should expect agencies to provide hard data on how technology spending is planned to be used to improve mission performance and reduce operating costs.

OMB’s Role Is Critical

Under the reform legislation, OMB has significant leadership responsibilities to help agencies to improve their information management practices. This is especially important in establishing guidance and policies for agencies to follow in implementing the reforms, evaluating the results of agency technology investments, and enforcing accountability for results through the executive branch budget process. OMB has been proactive in developing policies and procedures to help agencies institute effective investment decision-making processes. For example, OMB and GAO worked together to produce a guide in 1995 for both OMB budget examiners and agency executives on how to evaluate information technology investments using the concepts from our best practices work. OMB needs to continue to define expectations for agencies and for itself in this key area. Also, in 1996, we recommended that OMB develop recommendations for the President’s budget on funding levels for technology projects that take account of an agency’s track record in delivering performance improvements from technology investments and develop an approach for determining whether OMB itself is having an impact on reducing the risk or increasing the returns on agency information technology investments. To its credit, at the beginning of this fiscal year, OMB issued a memorandum to heads of executive departments and agencies laying out decision criteria that OMB will use in evaluating and funding major information system investments proposed for funding under the President’s fiscal year 1998 budget. The criteria strongly reinforce the provisions of the reform legislation. OMB also has a crucial role helping to resolve two governmentwide information management issues newly added to our 1997 high-risk list.
The first is information security. Malicious attacks on computer systems are an increasing threat to our national welfare. Despite their sensitivity and criticality, federal systems and data across government are not being adequately protected, thereby putting billions of dollars’ worth of assets at risk of loss and vast amounts of sensitive data at risk of unauthorized disclosure. Since June 1993, we have issued over 30 reports describing serious information security weaknesses at major federal agencies. For example, in May 1996, we reported that tests at DOD showed that DOD systems may have experienced as many as 250,000 attacks during 1995, that over 60 percent of the attacks were successful at gaining access, and that only a small percentage of these attacks were detected. And in September 1996, we reported that during the previous 2 years, serious information security control weaknesses had been reported for 10 of the 15 largest federal agencies. We have made dozens of recommendations to individual agencies for improvement and they have acted on many of them. Also, in 1996, we recommended that OMB play a more proactive role in promoting awareness and in monitoring agency practices. In particular, we recommended that OMB work with the interagency CIO Council to develop a strategic plan for (1) identifying information security risks, (2) reviewing individual agency security programs, and (3) developing or identifying security training programs. The second governmentwide high-risk issue concerns the need to modify information systems to correctly process dates past the year 1999 (the “Year 2000 Problem”). As chair of the CIO Council, OMB has a key role to play in solving this problem, which threatens widespread disruption of federal computer systems. It is important for OMB to get agencies to rapidly review their information technology systems, assess the scope of their Year 2000 problem, renovate the systems that need to be changed, and test and implement them.
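The flaw at the heart of the Year 2000 problem, and one common renovation technique, can be sketched in a few lines. This is a generic illustration, not code from any federal system; the "windowing" pivot value of 50 is an assumption for the example:

```python
# The Year 2000 problem in miniature: legacy records stored the year as
# two digits, so naive logic treats "00" as 1900 and date arithmetic breaks.

def legacy_year(two_digit_year):
    """Legacy interpretation: prepend '19' to every two-digit year."""
    return 1900 + two_digit_year

def windowed_year(two_digit_year, pivot=50):
    """One common renovation fix, a sliding window: two-digit years below
    the pivot are read as 20xx, the rest as 19xx. Pivot is illustrative."""
    return 2000 + two_digit_year if two_digit_year < pivot else 1900 + two_digit_year

# Age of a record created in 1995 ("95"), computed in the year 2000 ("00"):
created, now = 95, 0
print(legacy_year(now) - legacy_year(created))      # -95: a negative age
print(windowed_year(now) - windowed_year(created))  # 5: the correct age
```

Windowing was only one of the renovation options agencies weighed; expanding stored dates to four digits fixed the problem permanently but required changing record layouts across every interconnected system, which is why the assessment and testing phases were so costly.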
For our part, GAO has developed a step-by-step framework to guide agencies in planning and managing their Year 2000 programs. Our guide incorporates best practices identified by leading agencies for dealing with this issue, and is coordinated with the work of the Best Practices Subcommittee of the Interagency Year 2000 Committee.

Managing the Cost of Government Programs More Effectively

Better financial management is central to providing much needed accountability and addressing high-risk problems. The government’s financial systems are all too often unable to effectively perform the most rudimentary bookkeeping for organizations, many of which are larger than the nation’s largest private corporations. Federal financial management suffers from decades of neglect, inattention to good controls, and failed attempts to improve financial management and modernize outdated financial systems. This situation is illustrated in a number of high-risk areas, including the weaknesses that undermine DOD’s ability to obtain a positive audit opinion showing that it can accurately account for a $250 billion annual budget and over $1 trillion in government assets, the substantial improvements that are needed in IRS’ accounting and financial reporting for federal tax revenue, and the fundamental control weaknesses that resulted in the Department of Housing and Urban Development’s Inspector General being unable to give an opinion on the department’s fiscal year 1995 financial statements. The landmark CFO Act, as expanded in 1994 by the Government Management Reform Act, provides a long overdue and ambitious agenda to help resolve these types of financial management deficiencies. The act established a CFO structure in 24 major agencies to provide the necessary leadership.
Moreover, the CFO Act set expectations for (1) the deployment of modern systems to replace existing antiquated, often manual, processes, (2) the development of better performance and cost measures, and (3) the design of results-oriented reports on the government's financial condition and operating performance. In the next few months, we will witness a monumental achievement: 24 CFO Act agencies—covering virtually the entire federal budget—will have prepared, and had audited, financial statements for their entire operations for fiscal year 1996. This major milestone represents the first time that all major government agencies will have exercised the type of financial reporting and control discipline that has been required in the private sector for over 60 years and in state and local governments since the early 1980s. As we have testified several times, important and steady progress is being made under the act to bring about sweeping reforms and rectify the devastating legacy of inattention to financial management. For example, CFO Act financial audits have resulted in IRS top management having a better understanding than ever before of the agency's financial management problems. Also, the act provided impetus for IRS' progress in improving payroll processing and accounting for administrative operations and is prompting the agency to work on solutions to revenue and accounts receivable accounting problems. These efforts are in response to the nearly 60 improvement recommendations we have made as a result of our audits of IRS' financial statements under the CFO Act during the past several years. Also, implementing the CFO Act's blueprint for financial management improvements is at the heart of resolving many of DOD's high-risk problems. Since 1990, auditors have made over 400 recommendations aimed at helping to correct DOD's financial management problems.
While no military service or other DOD component has been able to withstand the scrutiny of an independent financial statement audit and the department's financial management processes are among the worst in government, DOD's financial management leaders have recognized the importance of tackling these problems. They have expressed a commitment to financial management reform and have many initiatives underway to address long-standing financial management weaknesses. Much remains to be done at both IRS and DOD to realize necessary improvements, and our reports have outlined the actions necessary to improve their financial management. An intensive effort by IRS and DOD and support by the Congress will be required as well. Also, financial statements for many government programs and operations involving billions of dollars, such as Medicare, are being prepared and audited for the first time ever. We have worked with agency CFOs and Inspectors General, OMB, and the Department of the Treasury over several years to be a catalyst for the preparation and audit of agencywide financial statements across government. We also have worked with OMB and Treasury to create the Federal Accounting Standards Advisory Board, which recently completed a comprehensive set of new accounting standards for the federal government. When financial statement audits under the CFO Act are completed, it will be important for the Congress to ensure that agencies promptly and thoroughly correct problems that these audits identify. To assist the Congress in this area, we plan to explore the concept of agency audit committees, which are commonplace and effective for private-sector corporations, as a means of maintaining high-level vigilance and support for fixing problems.
Remaining financial management priorities include:
- continuing to build stronger financial management organizations by upgrading skill levels, enhancing training, and ensuring that CFOs possess all the necessary authorities within their agencies to achieve change;
- devising and applying more effective solutions to address difficult problems plaguing agencies' underlying financial systems;
- designing comprehensive accountability reports to permit more thorough and objective assessments of agencies' performance and financial conditions, as well as to enhance the budget preparation and deliberation process; and
- implementing complementary legislative requirements, including (1) the Debt Collection Improvement Act of 1996, enacted to expand and strengthen federal agency debt collection practices and authorities, and (2) the Federal Financial Management Improvement Act of 1996, requiring agencies to comply with new federal accounting standards, federal financial systems requirements, and the U.S. government's standard general ledger.

Improving Performance and Providing Better Service

The Government Performance and Results Act (GPRA) seeks to shift the focus of federal management and decision-making from a preoccupation with the number of tasks completed or services provided to a more direct consideration of the results of programs—that is, the real differences the tasks or services make to the nation or individual taxpayer. GPRA originated in part from the Congress's frustration that congressional policymaking, spending decisions, and oversight and agencies' decision-making all had been seriously handicapped by the lack of clear goals and sound performance information. The Congress viewed GPRA as a critical tool to address serious shortfalls in the effectiveness of federal programs—many of which had been extensively documented in our work.
In crafting GPRA, the Congress built on the experiences of leading states and local governments and other countries that were successfully implementing management reform efforts and becoming more results-oriented. As a starting point, GPRA requires executive agencies to complete—no later than September 30 of this year—strategic plans in which they define their missions, establish results-oriented goals, and identify the strategies they will use to achieve those goals. GPRA requires agencies to consult with the Congress and solicit the input of other stakeholders as they develop these plans. Next, beginning with fiscal year 1999, executive agencies are to use their strategic plans to prepare annual performance plans. These performance plans are to include annual goals linked to the activities displayed in budget presentations as well as the indicators the agency will use to measure performance against the results-oriented goals. Agencies are subsequently to report each year on the extent to which goals were met, provide an explanation if these goals were not met, and present the actions needed to meet any unmet goals. When it passed GPRA, the Congress clearly understood that most agencies would need to make fundamental management changes to properly implement this law and that these changes would not come quickly or easily. As a result, GPRA included a pilot phase where about 70 federal organizations gained experience in implementing key parts of GPRA and provided valuable lessons for the rest of the government. Our Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996) was intended to help agencies implement GPRA by drawing on the experiences of leading public-sector organizations here and abroad to suggest a proven and practical path that agencies can take to implement GPRA. 
Our work has found numerous examples of management-related problems stemming from unclear agency missions; the lack of results-oriented performance goals; the absence of well-conceived agency strategies to meet those goals; and the failure to gather and use accurate, reliable, and timely program performance and cost information to measure progress in achieving results. Addressing these problems is both a challenge and an opportunity for effectively implementing GPRA. The congressional consultations on agencies’ strategic plans—which in many cases are beginning now—provide an important opportunity for the Congress and the executive branch to work together to ensure that missions are focused, goals are results-oriented and clearly established, and strategies and funding expectations are appropriate and reasonable. The experiences of leading organizations suggest that planning efforts that have such characteristics can become driving forces in improving the effectiveness and efficiency of program operations. The GPRA strategic planning process thus provides the Congress with a potentially powerful vehicle for clarifying its expectations for agencies and expanding the focus on results expected from funding decisions. Moreover, as part of the Congress’s integrated statutory framework, the successful implementation of the CFO Act, the PRA, and the Clinger-Cohen Act are absolutely critical if GPRA is to be successful in improving program performance. For example, with successful implementation, the audited financial statements required by the CFO Act will provide congressional and executive branch decisionmakers with the reliable financial and program cost information that they have not previously had. This information is to be provided to decisionmakers in results-oriented reports on the government’s program results and financial condition that, for the first time, integrate budget, financial, and program information. 
These reports are also to include cost information that enables users to relate costs to outputs and outcomes. Equally important, the sound application and management of information technology to support strategic program goals must be an important part of any serious attempt to improve agency mission performance, cut costs, and enhance responsiveness to the public. The successful implementation of information technology reform legislation—which, among other things, requires that agencies have a strategy that links technology investments to achieving programmatic results—is critical to ensuring the wise use of the billions of dollars the government is investing in information systems. Thus, in concert with the CFO Act and information technology legislation, improved goal-setting and performance measures developed under GPRA are critical to addressing high-risk areas. Clear goals and sound performance data are key to strengthening decision-making in agencies and in the Congress and pinpointing specific opportunities for improved performance. For example, performance measures can be useful in:
- guiding management of defense inventory levels to prevent the procurement of billions of dollars of centrally managed inventory items that may not be needed. For example, as of 1995, about half of the $69.6 billion defense inventory was beyond what was needed to support war reserve or current operating requirements;
- reaching agreement with the Congress on, and monitoring, acceptable levels of errors in benefit programs, which may never be totally eliminated but can be much better controlled. For instance, no one can determine with precision how much Medicare loses each year to fraudulent and abusive claims, but losses could be from $6 billion to as much as $20 billion based on 1996 outlays;
- monitoring loan loss levels and delinquency rates for the government's direct loan and loan guarantee programs—multibillion dollar operations in which losses for a variety of programs involving farmers, students, and home buyers are expected but can be minimized with greater oversight. For example, in fiscal year 1995, the federal government paid out over $2.5 billion to make good its guarantee on defaulted student loans; and
- assessing the results of tax enforcement initiatives, delinquent tax collection activities, and filing fraud reduction efforts. For instance, in fiscal year 1996, IRS reported it had collected almost $30 billion in delinquent taxes—more than in any previous year. However, fundamental problems continue to hamper IRS' efforts to efficiently and effectively manage and collect its reported $216 billion inventory of tax debts.

While the experiences of leading organizations and federal efforts under GPRA thus far show that full GPRA implementation will take time and much effort, our executive guide shows that improvements in performance—sometimes substantial ones—are possible even in the short term when an organization adopts a disciplined approach to defining its mission and desired results, measuring its performance, and using information to make decisions. For example, our executive guide provides examples from the Federal Emergency Management Agency, the Veterans' Health Administration, the Coast Guard, and other agencies that are well on the way to improving performance by better focusing on results.

No Substitute for Diligent Management Commitment and Follow-Through

Management commitment is key to solving high-risk problems and getting off the high-risk list. There is no substitute for the basic management practices of goal-setting and follow-through. Agencies have successfully used these common mechanics to make significant progress and get at the root causes of high-risk problems.
In 1995, progress in addressing five high-risk areas was sufficient to warrant the high-risk designation being removed, including the following. The Pension Benefit Guaranty Corporation's (PBGC) high-risk designation was removed due to substantially improved internal controls and systems. For example, PBGC's liability for future benefits (amounts owed to employees of terminated pension plans insured by PBGC) represents about 95 percent of PBGC's total liability. In fiscal year 1992, PBGC sufficiently addressed long-standing deficiencies in (1) documentation and support for various techniques and assumptions used for estimating PBGC's liability for future benefits, (2) the ability to assure the completeness and accuracy of data used in the estimating techniques, and (3) estimating software. These improvements enabled us to certify PBGC's balance sheet for the first time. In fiscal year 1993, PBGC resolved serious system limitations that had restricted its ability to fully process all premium information, assess the accuracy of premium amounts, and collect amounts due. These improvements, coupled with the improved controls over the process for estimating PBGC's liability for future benefits, enabled us to certify PBGC's complete set of financial statements in fiscal years 1993 and 1994. PBGC has maintained its auditability since the Corporation's Inspector General took over responsibility for auditing its annual financial statements in fiscal year 1995. Also, the Congress enacted legislation in 1994 to strengthen minimum funding standards for pension plans and to phase out the cap on variable rate premiums paid by underfunded plans. These provisions were designed to lower the underfunding in pension plans, thus reducing PBGC's exposure, and to reduce the Corporation's deficit over time.
The Resolution Trust Corporation (RTC) was moved off the high-risk list because the Congress enacted specific management reforms with required progress reporting to achieve the needed improvements in RTC's contracting, asset disposition, and supporting management information systems. Also, RTC improved its internal controls over receivership operations and methodology for estimating cash recoveries from the assets of failed thrifts; strengthened its financial systems and controls, which enabled us to fully certify RTC's financial statements for the fiscal year ended December 31, 1992, and subsequent fiscal years until RTC was terminated on December 31, 1995; and created an audit committee that included the Director of the Office of Thrift Supervision, a Federal Reserve Board member, and a representative from the private sector. In contrast, our experience is that programs are designated high risk when agencies fail to quickly recognize growing problems, underestimate what it will take to correct them, and do not take prompt corrective measures. This has occurred for the 16 new areas that have been designated high risk since our high-risk initiative began 7 years ago. Of these, 5 were designated just last month. Overall, of the 25 areas that are the current focus of our high-risk program, 12 areas, or about half, have been on the list for 2 years or less.

Sustained Congressional Oversight and Focused Attention Are Essential

We have also long advocated sustained oversight and attention by the Congress to agencies' efforts to fix high-risk problem areas and implement broad management reforms. The Congress must continue to play a central role in ensuring that management problems in agencies' operations are identified and weaknesses addressed.

Providing Accountability Reports

We have advocated that congressional committees of jurisdiction hold annual or at least biennial comprehensive oversight hearings on each department and major independent agency.
The plans and reports that agencies are to develop under GPRA and the audited financial statements that are to be prepared under the expanded CFO Act should serve as the basis for those hearings. Congressional oversight can be shaped by thorough accountability reports that provide a comprehensive picture of agencies' performance against their stated goals and objectives. Under the Government Management Reform Act, several agencies are preparing accountability reports on a pilot basis. These new reports will combine the separate reports required under various laws, such as GPRA and the CFO Act. The accountability reports are intended to show the degree to which an agency met its goals, at what cost, and whether the agency was well run. The Congress must have a central role in defining the content and format of these reports to ensure that the reports eventually provide the Congress with comprehensive "report cards" on the degree to which agencies are making wise and effective use of tax dollars and to provide a better basis for identifying issues to focus on during the oversight process. This will also provide a full picture of an agency's program performance and resource usage to accomplish its mission.

Meeting the Human Resource Management Challenge

Another matter for congressional attention is improving the management and effectiveness of federal programs by modernizing human resource management systems. Hiring the right people and managing them effectively will be indispensable to improving the performance of federal agencies. In an era that demands improved performance at reduced costs, agencies' success increasingly will depend upon their abilities to assemble a staff with the right blend of talents and skills. However, as our work on financial and information technology issues has suggested, many agencies' staffs are not well prepared to meet this challenge.
GPRA also recognizes the importance of human resource management by requiring that agencies’ strategic plans include a description of how they intend to use their people to achieve their strategic goals. The question is: does the existing civil service system allow agencies the flexibility to respond to these new demands? On the one hand, the competitive service is undoubtedly more flexible than it was 2 decades ago. Efforts to make it so go back at least as far as the Civil Service Reform Act of 1978 (CSRA). Yet, despite CSRA and other measures taken since then, the competitive service as a whole is still widely viewed as burdensome to managers, unappealing to ambitious recruits, hidebound and outdated, overregulated, and inflexible. In short, there is general recognition that in one way or another, the civil service must be made more flexible in response to a changing environment. Leading private-sector employers—as well as some government entities both here and abroad—are creating personnel systems that diverge sharply from the federal government’s traditional approach. The new model is more decentralized, focused more directly on mission accomplishment, and set up more to establish guiding principles than to prescribe detailed rules and procedures. In our contacts with experts from private-sector organizations and from other governments both here and abroad and with labor representatives, academicians, and experienced federal officials, we have identified several newly emerging principles for managing people in high-performing organizations. Our Transforming the Civil Service: Building the Workforce of the Future—Results of A GAO-Sponsored Symposium (GAO/GGD-96-35, December 20, 1995) distilled the key principles we learned. Among these key principles were: First, in today’s high-performing organizations, people are valued as assets rather than as costs. 
They are recognized as crucial to organizational success—as partners rather than as mere hired help—and organizations that recognize them as partners invest in their professional development and empower them to contribute ideas and make decisions. Second, organizational mission, vision, and culture are emphasized over rules and regulations. In place of highly detailed rules to manage their employees, leading organizations are relying increasingly on a well-defined mission, a clearly articulated vision, and a coherent organizational structure to form the foundation for the key business systems and processes they use to achieve desired results. Third, managers are given the authority to manage their people flexibly and creatively so they can focus on achieving results rather than on doing things "by the book." They are held accountable for outcomes—for furthering the mission and vision of the organization—rather than for adhering to a set of minutely defined procedures. This, once again, is an approach that we have observed largely in the private sector. But the integration of human resource management into the business of the organization coincides with a practice we have identified as critical to the implementation of GPRA—the alignment of activities, core processes, and resources to support mission-related goals. As the federal government fully implements GPRA, agencies and the Congress will be able to gain further experience with how best to provide flexibility in managing federal employees to better achieve mission results while observing merit systems principles.

Linking Resource Allocation Decisions to Results

Another future challenge is to better link resource allocation decisions to results. Ultimately, to improve the effectiveness and efficiency of government, the statutory framework described above—GPRA, the CFO Act, and information technology reforms—must be better integrated with the federal government's resource estimation and allocation processes.
Although vitally important as an agency management improvement tool, this framework also will provide new information and perspectives that can be particularly useful to the process of allocating scarce resources among competing national priorities. Likewise, the budget process will need to continue to adapt to take full advantage of the benefits flowing from these initiatives and to support their further development. The statutory framework established by the Congress can significantly improve the information presented to decisionmakers during the annual budget process. Financial systems improvements and audited financial statements brought about by the CFO Act will enhance the accuracy and reliability of financial information undergirding budgetary estimates and provide a clearer appreciation of long-term unfunded commitments and of the full costs of current government programs. The information technology reforms and the Federal Acquisition Streamlining Act of 1994 are part of a broader agenda that recognizes the need for better risk management and integrated life-cycle costing of capital investments, which should ensure appropriate consideration and full funding of such proposals within annual budget deliberations. Similarly, GPRA holds promise of restoring public confidence in government at a time when we must make increasingly more painful budgetary choices. GPRA aims to provide systematic information on the performance of government programs and to directly link such information with the annual budget process. Although many factors appropriately influence budget decisions, effective implementation of GPRA will add critical information about what citizens and the nation are receiving for each dollar spent. Ultimately, debate about funding levels should begin to focus on the performance of individual programs, the overall effectiveness of agency operations, and the need for efforts to better coordinate and harmonize federal agency missions and activities.
History indicates, however, that careful attention will be needed to ensure that the separate objectives and processes of these reform initiatives are effectively melded with the budget process. Integrating strategic planning, financial accounting, and budget formulation and execution processes will pose profound challenges; attempts to connect performance goals and results to traditional budget decision structures will inevitably encounter issues that the Congress and the executive branch will need to jointly address. The challenges of solving pressing management problems are great, but the rewards are high. While the legislative framework is in place, much work remains to be done to fully and effectively achieve its goals. Continued dialogue between legislative and executive branch officials is key to strengthen management of the federal government’s enormous investment in information technology, improve data to help make spending decisions, and enable better assessments of the performance and cost of federal activities and operations. Mr. Chairman, this concludes my statement. I would be happy to now respond to any questions. 
Areas Designated High Risk

Providing for Accountability and Cost-Effective Management of Defense Programs
- Financial management (1995)
- Contract management (1992)
- Inventory management (1990)
- Weapon systems acquisition (1990)
- Defense infrastructure (1997)

Ensuring All Revenues Are Collected and Accounted for
- IRS financial management (1995)
- IRS receivables (1990)
- Filing fraud (1995)
- Tax Systems Modernization (1995)
- Customs Service financial management (1991)
- Asset forfeiture programs (1990)

Obtaining an Adequate Return on Multibillion Dollar Investments in Information Technology
- Tax Systems Modernization (1995)
- Air traffic control modernization (1995)
- Defense's Corporate Information Management initiative (1995)
- National Weather Service modernization (1995)
- Information security (1997)
- The Year 2000 Problem (1997)

Controlling Fraud, Waste, and Abuse in Benefit Programs
- Medicare (1990)
- Supplemental Security Income (1997)

Minimizing Loan Program Losses
- HUD (1994)
- Farm loan programs (1990)
- Student financial aid programs (1990)

Improving Management of Federal Contracts at Civilian Agencies
- Department of Energy (1990)
- NASA (1990)
- Superfund (1990)

Also, planning for the 2000 Decennial Census was designated high risk in 1997.

1997 High-Risk Series
- An Overview (GAO/HR-97-1)
- Quick Reference Guide (GAO/HR-97-2)
- Defense Financial Management (GAO/HR-97-3)
- Defense Contract Management (GAO/HR-97-4)
- Defense Inventory Management (GAO/HR-97-5)
- Defense Weapon Systems Acquisition (GAO/HR-97-6)
- Defense Infrastructure (GAO/HR-97-7)
- IRS Management (GAO/HR-97-8)
- Information Management and Technology (GAO/HR-97-9)
- Medicare (GAO/HR-97-10)
- Student Financial Aid (GAO/HR-97-11)
- Department of Housing and Urban Development (GAO/HR-97-12)
- Department of Energy Contract Management (GAO/HR-97-13)
- Superfund Program Management (GAO/HR-97-14)

The entire series of 14 high-risk reports is numbered GAO/HR-97-20SET. The first copy of each GAO report and testimony is free. Additional copies are $2 each.
GAO discussed actions needed to bring about lasting solutions to serious and long-standing federal government management problems. GAO noted that: (1) its mission is helping the Congress in its efforts to improve management of our national government; (2) one approach has entailed identifying critical management problems before they become uncontrollable crises; (3) since 1990, GAO has produced a list for the Congress of areas that GAO identified, based on its work, as highly vulnerable to waste, fraud, abuse and mismanagement; (4) to help solve high risk problems, GAO has made hundreds of recommendations to get at the heart of these problems, which have at their core a fundamental lack of accountability; (5) this list helps focus attention by the administration and the Congress on critical management problems; (6) the high risk designation has prompted agencies to take action in many areas, and progress in addressing management problems has ensued; (7) the need to address fundamental management problems also was a factor in prompting the Congress to enact important reforms such as the 1995 Paperwork Reduction Act and the 1996 Clinger-Cohen Act to better manage investments in information technology (IT), the Government Management Reform Act of 1994, which expanded the 1990 Chief Financial Officers (CFO) Act's requirement for financial statements and controls that can pass the test of an independent audit, and the 1993 Government Performance and Results Act (GPRA) to better measure performance and focus on results; (8) this legislation forms an integrated framework that will help agencies identify and monitor high risk areas and operate programs more efficiently and will assist the Congress in overseeing agencies' efforts to achieve these results; (9) through the set of reforms embodied in the CFO Act, GPRA, and the IT initiatives, the Congress has laid the groundwork for the federal government to use proven best management practices that have been
successfully applied in the private sector and state and local governments; and (10) these reforms will not produce lasting improvements, however, without successful implementation by agencies and relentless Congressional involvement.
Background

Dramatic increases in the volume and speed of international travel and trade in recent years have increased opportunities for diseases to spread across international boundaries. The global reach of the ongoing HIV/AIDS pandemic and the recent appearance in the United States of West Nile virus—a pathogen never before identified in the Western Hemisphere—demonstrate this point. Diseases once regarded as declining in significance have also reemerged in recent years to once again become major global health threats. For example, according to WHO, global reports of yellow fever have dramatically increased over the last 2 decades. The emergence of previously unknown diseases and the development of disease strains resistant to antimicrobial drugs further complicate international disease control efforts. Over the past 3 decades, more than 30 previously unknown diseases have been identified. Many, including Ebola hemorrhagic fever, Rift Valley fever, and Lyme disease, appear to have become threats to human health because of increased human movement into or alteration of the habitats of disease-carrying insects and animals. Excessive, uncontrolled use of antimicrobial drugs has contributed to the evolution of disease strains that are highly resistant to available medications. Infectious diseases can be a substantial obstacle to economic and social advancement in developing countries, where the great majority of cases of such diseases occur. For example, WHO has concluded that Africa's gross domestic product would be nearly one-third higher than it is today if malaria alone had been eliminated 35 years ago. Development experts believe that the HIV/AIDS pandemic will have a similar impact on African economies. Surveillance provides information for action against infectious disease threats.
Basic surveillance functions include detecting and reporting cases of disease, analyzing and confirming this information to identify outbreaks and clarify longer-term trends, and applying the information to inform public health decisionmaking. When effective, surveillance can facilitate (1) timely action to control outbreaks, (2) informed allocation of resources to meet changing disease conditions, and (3) adjustment of disease control programs to make them more effective. According to CDC, factors that can be taken into account in evaluating surveillance systems include their ease of operation; the extent to which health care providers and laboratory personnel actually provide the system with information; and the system’s ability to identify cases of disease, accurately diagnose them, and generate timely and accurate information on disease events and trends. Basic responsibility for disease surveillance and response lies with individual countries. The legal underpinnings for cooperation among countries to control infectious diseases are limited in scope. The primary function of the International Health Regulations—the most important and only binding international agreement on disease control—is to delineate measures that countries may take to protect themselves against epidemics of three diseases: cholera, plague, and yellow fever. To provide national authorities with a basis for applying protective measures, the regulations require countries that record cases of these three diseases to report to WHO, which then makes that information available to other countries. The Regulations do not provide an international framework for addressing threatening epidemics at their source—within countries. 
At the global level, surveillance functions are carried out through a loose framework that links elements of national health care systems with various entities, including media channels, nongovernmental organizations active in health, and laboratories and other institutions participating in networks focusing on particular diseases and/or regions. Figure 1 presents one illustration of this global “network of networks.” The groupings presented in this figure are not mutually exclusive. For example, national public health authorities may operate WHO Collaborating Centers, participate in epidemiology training networks, and maintain Internet discussion sites.

Note 1: UNHCR represents the United Nations High Commissioner for Refugees.
Note 2: UNICEF represents the United Nations Children’s Fund.

WHO plays a central role in the surveillance framework by working to strengthen national and international surveillance capacity and coordinating international efforts to monitor disease trends, detect and respond to outbreaks, and carry out disease control programs. Foreign assistance agencies such as the World Bank and USAID, as well as private foundations, are important sources of support for strengthening surveillance operations, particularly those taking place in developing countries. For example, in commenting on a draft of this report, the World Bank noted that it is actively working with a number of developing country governments to strengthen their national surveillance systems, within the context of the Bank’s overall emphasis on health. While many technical agencies contribute to framework operations, CDC is the single largest source of expertise and resources available to the international surveillance and response system. The Department of Defense also contributes to global surveillance through its Global Emerging Infections Surveillance and Response System.
In commenting on a draft of this report, for example, the department cited its contributions to global surveillance for drug-resistant malaria and influenza.

Global Surveillance Varies by Disease

The global surveillance framework’s capacity for serving the public interest varies according to the level of commitment that the international community has made to controlling individual diseases or groups of diseases. The most significant influence on the framework’s development has been the international public health community’s focus on controlling specific diseases. In some circumstances—when a disease can be eradicated with comparative ease or when it poses a high risk of a global pandemic—these programs have attracted broad support and substantial funding. In such situations, public health officials have been able to establish specific goals and create comparatively high-performing systems—including surveillance systems—to support achievement of those goals. Surveillance for other diseases is more limited.

Multiple Surveillance Systems Created to Support Disease-Specific Control Programs

The strongest influence on the evolution of the existing surveillance framework has been the collaboration among medical professionals, national governments, and foreign assistance agencies to develop control programs and associated surveillance efforts that focus on specific diseases or groups of diseases. The longest standing of these disease-specific efforts is the global influenza program, which was launched prior to WHO’s founding in 1948.
Later, the success of the global effort to eradicate smallpox (1966 through 1977) spurred the creation of other programs designed to eradicate or eliminate global disease threats, such as polio and leprosy, and diseases found in specific regions, such as guinea worm and river blindness, which are both found primarily in Africa. Global consensus in favor of these eradication/elimination campaigns was achieved during the late 1980s and early 1990s, after reduction programs had achieved substantial progress. WHO also collaborates with numerous institutions around the world to maintain programs to control noneradicable infectious diseases such as HIV/AIDS, cholera, tuberculosis, malaria, and dengue. National disease-control programs reflect this focus on specific diseases. They are generally managed through separate programs aimed at specific diseases, such as polio and tuberculosis, or groups of diseases, such as those covered by the Expanded Program on Immunization.

Disease Characteristics, International Commitment Affect Surveillance Quality

Variation in the quality of global surveillance systems can be attributed in large measure to disease characteristics. Under certain circumstances—for example, if a disease can be eradicated or if it poses a high risk of a global pandemic—disease-specific control programs have attracted broad support and have employed this support to create comparatively effective surveillance systems. Surveillance for other diseases, including emerging infections, has received less international support and is more limited.

High-Quality Surveillance for Some Diseases

The best surveillance systems have been established to support international campaigns aimed at eradicating or eliminating certain diseases, including polio and guinea worm, and at protecting the global community against influenza—a disease that has the potential to inflict global pandemics.
The international community has been supportive of eradication/elimination campaigns because they promise dramatic results—the removal of targeted diseases as public health threats—after relatively short periods of concentrated effort. However, only diseases with certain characteristics can be eradicated or eliminated. In addition to imposing substantial disease burdens—a trait common to many illnesses—diseases that the global community has targeted for eradication or elimination tend to share other characteristics that have encouraged consensus in favor of concerted action. Although the international community has targeted other diseases for eradication or elimination, polio and guinea worm are discussed below to illustrate the characteristics of eradicable diseases and the comparatively high quality of surveillance systems that are created to support international eradication/elimination campaigns. The polio virus and the guinea worm parasite both require human hosts to complete their reproductive life cycles. Both can be controlled by interrupting their transmission from infected to uninfected individuals. Also, available diagnostic tools and approaches make these diseases relatively easy to identify and differentiate from other illnesses. For example, a small but predictable number of polio victims (less than 1 percent) develop acute flaccid paralysis—a condition in which those infected suddenly lose control of the muscles in their limbs. This makes it possible to readily identify communities where intervention may be required. Guinea worm is easily detected when mature worms emerge from infected people’s bodies. Moreover, these diseases generally can be controlled through application of effective, comparatively inexpensive, and easily applied interventions. Polio, for example, can be prevented through immunization with vaccines that are available to developing countries at very low prices.
Guinea worm transmission can be dramatically reduced through education and relatively cheap and simple water filtration systems. These characteristics have allowed disease experts to develop clearly stated, technically feasible, time-limited goals and indicators for measuring progress. Advocates for campaigns against these diseases have been able to obtain political commitment and financial support from countries with these diseases and from public and private sources of foreign assistance. For example, the global polio eradication effort has received financial and/or technical support from the governments of the United States, Japan, Norway, Australia, Canada, Denmark, the United Kingdom, and other industrialized countries; Rotary International and other private organizations; and developing country governments. With major financial resources and support from all concerned governments, these campaigns have developed comparatively high-performing surveillance systems. For example, donors and developing country governments have combined their efforts to create a system of active surveillance for acute flaccid paralysis that can promptly identify potential polio cases. This surveillance system has helped reduce the global incidence of polio by 99 percent since 1988. The surveillance effort is ambitious—most countries employ multiple surveillance officers to conduct active surveillance for cases of acute flaccid paralysis. According to CDC officials, most countries in Africa dedicate at least one motor vehicle and significant financial resources to polio surveillance. The ability to confirm the presence of the disease has been helped by creation of a global network of 148 laboratories at the national, regional, and global levels to ensure accurate diagnosis and differentiation among strains. These laboratories participate in an annual accreditation program to ensure the accuracy of their analyses.
Surveillance efforts to eradicate guinea worm have been similarly ambitious. This eradication program began with comprehensive village-by-village surveys in endemic countries to identify every afflicted locality. To use these data effectively, WHO and the U.N. Children’s Fund (UNICEF) created a Joint Program on Health Mapping. The “HealthMapper” project generated national and international maps of guinea worm incidence that were used to target interventions and plot progress in interrupting transmission. Endemic countries created networks of community workers in every village to report guinea worm cases so that response measures could be delivered in a timely fashion. This surveillance effort facilitated reduction of the global incidence of this disease from an estimated 10 million to 15 million cases a year in the early 1980s to about 75,000 cases in 2000 (more than two-thirds of them occurring in war-torn Sudan). Although influenza cannot be eradicated due to its presence in a variety of animal hosts and its constantly evolving character, the international community has created an extensive surveillance system for this disease. Factors leading to the considerable level of investment in this system include the disease burdens imposed by influenza and the character of available interventions. Although often perceived as a comparatively low-level threat, the viruses that cause influenza are continually evolving and occasionally appear in highly virulent forms. For example, the 1918 to 1919 influenza pandemic killed more than 20 million people in locations as diverse as China, Spain, the United States, and Samoa. Although not as severe, influenza pandemics in 1957 and 1968 killed a total of 1.5 million people and caused an estimated $32 billion in economic losses worldwide, according to WHO.
While influenza’s adverse impacts can be reduced via immunization, vaccines have to be re-engineered each year to target the strains considered likely to be most prevalent in the upcoming “flu season.” Worldwide surveillance is necessary to permit continuous updating of the information that manufacturers use to reformulate these vaccines. Since the late 1940s, WHO has created a global network of 111 national influenza centers in 83 countries, supported by 4 international reference laboratories. These centers collaborate in collecting and analyzing influenza strains to identify those that appear most likely to spread around the globe and present major risks to public health. According to CDC, the system produced vaccines that precisely or substantially targeted 12 of 13 virus strains that circulated widely between 1988 and 1997. WHO has also created “FluNet,” an Internet site devoted to monitoring global influenza activity.

More Limited Surveillance for Other Diseases

Although diseases such as yellow fever, cholera, and dengue also present substantial public health threats, surveillance for these diseases tends to be more limited. These diseases have characteristics that work against international commitment in favor of ambitious, goal-directed control campaigns. Cholera, dengue, and yellow fever do not appear to be good candidates for eradication because the pathogens that cause them can live and reproduce without human hosts. Advocates for addressing these diseases cannot therefore hold out the prospect of eradication or elimination as an incentive for investing in control efforts. Without laboratory confirmation, all three can be confused with other diseases causing similar symptoms. They are therefore comparatively difficult to identify, especially in developing country conditions. Although effective yellow fever vaccines are available, many developing country governments do not administer them routinely.
Cholera vaccines are infrequently employed, and there is currently no vaccine for dengue. No specific treatment exists for any of the three diseases; all are treated primarily by ensuring that patients are hydrated. Therefore, although all three cause periodic outbreaks that require an organized response, health care providers may simply address patient needs without seeking laboratory confirmation of possible cases or reporting cases to higher level authorities. This reduces the likelihood that surveillance reports will accurately reflect disease incidence or trends and makes it difficult for disease campaign advocates to set specific objectives for reductions in these diseases. Finally, although all three diseases are quite serious and can spread across international borders, they do not threaten to cause rapidly spreading global pandemics like those that can be caused by influenza. Global surveillance for yellow fever is quite limited. Efforts by WHO, UNICEF, and others to encourage greater investment in controlling this disease, including more widespread employment of yellow fever vaccines, have met with limited success. Ongoing laboratory training organized by WHO for the polio laboratory network in Africa has been expanded to include yellow fever but the global community has not established any specific targets for yellow fever reduction. According to WHO, countries that report information on yellow fever immunization coverage typically reach 50 percent or less of eligible children. Despite the fact that the International Health Regulations require reporting on yellow fever, WHO officials estimate that actual caseloads are up to 500 times greater than reported. Surveillance for cholera is also problematic. While WHO and multiple partner organizations established a Global Task Force on Cholera Control in 1991, the task force was not given specific targets. Seven years later, a U.N. 
review found that the global community’s approach focused on outbreak response and that, while this approach can reduce cholera death rates, it failed to prevent cholera from occurring. Developing countries have had little incentive to improve surveillance beyond the detection of outbreaks. Although the International Health Regulations require reporting on cholera, a WHO official estimated that the numbers of cholera cases and deaths occurring in the world are 10 times higher than official reports indicate. In 1999, WHO was officially notified of approximately 9,200 cholera deaths, but disease experts believe that the annual number of deaths from cholera is closer to 120,000. Surveillance for dengue is similarly limited. WHO developed a Global Strategy for Prevention and Control of Dengue Fever and Dengue Hemorrhagic Fever in 1995 and has, with USAID support, held two international meetings to focus attention on this disease. In collaboration with the French National Institute for Medical Research and Health and other partners, WHO has also created “DengueNet,” an Internet site dedicated to gathering and sharing dengue-related information. However, without the incentive that would be provided by a clear, goal-directed international commitment to responding to the threat posed by this disease, surveillance for dengue remains limited. For example, although WHO officials pointed out that progress has been made in the Americas, no organized surveillance for dengue exists in Africa, even though disease experts are certain that the illness is present there. Countries use different definitions of what constitutes a reportable case of dengue and different procedures for deciding when to report cases (that is, with or without laboratory confirmation) and for reporting on dengue versus dengue hemorrhagic fever. 
WHO officials highlighted the general inadequacy of laboratory support for dengue surveillance and observed that epidemiological data on dengue is “frequently incomplete, delayed, and not used for decisionmaking purposes.” While national authorities are officially reporting just over 1 million cases per year, WHO estimates the actual number of cases at more than 50 million per year. In addition, public health experts observe that global surveillance for identifying and investigating emerging infections is weak. Sizable, apparently sudden outbreaks of unknown diseases, such as the 1976 Ebola outbreak in Zaire, often occur after the disease has been infecting local populations for weeks or months. Health authorities are frequently unaware of the problem until sick people begin showing up at hospitals, where concentration of infected individuals and reuse of unsterile equipment can dramatically increase the spread of the disease. Isolated cases or small clusters of cases of such diseases can be easily missed, and diseases that closely resemble others may spread before they are detected and identified. Disease experts believe, for example, that HIV/AIDS began to appear in humans decades before WHO called for its worldwide surveillance in 1981. However, these early cases were isolated and those contracting the disease tended to die from other infections, which forestalled identification and investigation of the disease. Similarly, isolated Ebola cases may have been occurring for many years, only to be diagnosed as shigella or other diseases.

Global Framework’s Performance Constrained by Weaknesses in Developing Countries

Developing country systems are a weak link in the global surveillance framework. Surveillance systems in industrialized and developing countries suffer from a number of common constraints, including a lack of human and material resources, weak infrastructure, poor coordination, and uncertain linkages between surveillance and response.
However, these constraints are more pronounced in developing countries, which bear the greatest burden of disease and are where new pathogens are more likely to emerge, old ones to reemerge, and drug-resistant strains to propagate. Weaknesses in these countries thus substantially impair global capacity to understand, detect, and respond to infectious disease threats.

Surveillance Systems Lack Qualified People and Equipment

Several disease experts we met with observed that health care systems typically emphasize the care and treatment of sick people and that support systems such as surveillance are generally assigned a lower priority and receive comparatively few human and material resources. A 2000 report by the National Intelligence Council concluded that, with some exceptions, such as Thailand and South Africa, developing country governments throughout Africa and Asia assigned health care a comparatively or extremely low priority. The report observed that, as a result, these countries have rudimentary or no domestic systems for disease surveillance, response, or prevention. As shown in table 1, both overall health care spending and government health expenditures tend to decline along with national income levels. For example, total health care spending per capita in low-income countries amounts to about 3 percent of per capita spending in high-income countries. With the fewest resources to call upon and intense pressure to provide care and treatment services, public health authorities in the poorest countries are likely to spend the least amount of resources on surveillance. The human resources necessary to perform surveillance activities are at a premium in developing countries. In the United States, surveillance officials at the state level report that inadequate staffing and training hinder their ability to operate. In developing countries, human resources are an even more pressing concern.
Many African officials with whom we spoke said that poor salaries and working conditions drive many qualified public health workers abroad in search of work. One CDC official observed that, in Zimbabwe, only two people are devoted to surveillance at the national level. Key positions in developing countries, including laboratory technicians and health care workers, are often filled by people who do not possess the necessary qualifications. In Uganda, for example, officials charged with assessing the national surveillance system found that a shortage of trained health care workers at peripheral health units contributed to inadequate analysis and application of data for decisionmaking, incomplete and untimely reports sent to higher levels, and a lack of laboratory confirmation or accurately validated diagnoses. WHO officials stated that laboratory personnel in developing countries often cannot competently test blood samples for malaria because they are not properly trained. WHO also observed that, although quality assurance programs are an important means of ensuring laboratory competence, staff in more than 90 percent of developing country laboratories are not familiar with quality control or quality assurance principles. Few surveillance workers in developing countries possess the epidemiological skills that make CDC so effective at clarifying and resolving infectious disease challenges. For example, one WHO official commented that many of those assigned responsibility for analyzing disease information in developing countries are able to produce accurate tables and graphs but cannot probe the data to identify discrepancies that bear investigation. Equipment shortages also constrain surveillance. In the United States, public health departments often lack computers and fax machines or integrated data systems that allow surveillance data to be immediately shared with public and private partners. 
Developing country health departments have little access to such equipment. The ability of developing country health officials to provide accurate disease information is further compromised by their frequent lack of clear and accurate diagnostic tests that they can perform themselves or ready access to functioning laboratories. As a result, they have difficulty making appropriate decisions about disease control measures and may waste valuable resources, such as antibiotics and vaccines. Inexpensive, rapid diagnostic tests are available for some diseases, including hepatitis B and HIV, but many other diseases, including cholera and yellow fever, can only be confirmed by a laboratory. CDC and WHO officials observed that public health laboratories in Africa are generally poorly funded, understaffed, and underequipped. According to WHO, more than 60 percent of laboratory equipment in developing countries is outdated or not functioning. Sixteen of the 19 WHO-sponsored assessments of African national surveillance systems that we reviewed reported weaknesses in laboratory capacity, ranging from a lack of trained technicians to deteriorating buildings, and 9 specifically cited a lack of laboratory equipment or poorly maintained equipment as reasons for difficulty in confirming cases. During fieldwork in Malawi, for example, we were told that all clinics should have a microscope to scan blood for malaria parasites, but at the clinic we visited, the only microscope was broken.

Weak Infrastructure Exacerbates Surveillance Difficulties in Developing Countries

Weaknesses in transportation and communications infrastructure in developing countries substantially impair surveillance in these countries. Many people in developing countries live in remote areas that are not served by organized health care facilities.
Several national surveillance system assessments we reviewed specifically cited this as a problem or identified large portions of their populations as not having access to health care. In Uganda, for example, less than half the population lives within a 3-mile walk of a health facility. Many cases of disease thus go unrecorded. As an epidemiologist with the Armed Forces Medical Intelligence Center commented, because the effective reach of the formal health care systems in most developing countries extends to so little of the population, patients seen at clinics represent merely “the tip of the iceberg” in terms of disease trends and events. For example, research conducted by the Tanzanian health ministry found that, from 1992 through 1995, 46 percent of all deaths in one district occurred without prior contact with a health facility and 90 percent of all children under age 5 with high fever and seizures—a key symptom of malaria—died at home. Because local health authorities had not previously had a full understanding of disease burdens in their district, they had not chosen to focus on malaria as a top priority. However, according to national officials, the local authorities made malaria a high priority and quintupled the share of resources dedicated to controlling this disease after they learned of the data generated by this research project. Poor roads and communications in many developing countries make it difficult for health care workers to alert higher authorities about outbreaks or quickly transport specimens to laboratories. At least 10 of the 19 assessments of African national surveillance systems that we reviewed found that less than 50 percent of the local health facilities surveyed had either telephones (or other means of communication) or vehicles for transport. Even in facilities that had these resources, performance was hampered by breakdowns and insufficient funds for fuel.
One clinic official in Tanzania, who did not have access to a vehicle or telecommunications equipment, informed us that in the event of an emergency, such as the need to report a suspected case of polio or cholera, he hitches a ride on one of the trucks that occasionally pass through his village. He observed that this was a workable alternative for him because his clinic was only about an hour’s drive from the district health office but that his colleagues operated clinics much further away from district headquarters. These obstacles also affect the ability of higher-level officials to give feedback to the health care workers they supervise on the quality of the data being collected. Such feedback, according to public health experts, is critical to motivating health workers to continue investing time and energy in surveillance activities.

Surveillance Activities Are Poorly Coordinated

Global disease surveillance is also constrained by poor coordination of surveillance activities. Multiple reporting systems, unclear lines of authority in the event of an outbreak, poor integration of laboratories into public health systems, and nonparticipation among private health care providers have combined to further hamper surveillance efforts. While these problems exist in industrialized countries, they are particularly severe in the developing world. The disease-specific focus of control efforts has resulted in the creation of multiple surveillance systems at the national and global levels. The WHO-sponsored assessments of surveillance systems in sub-Saharan Africa found that many countries maintained at least five separate surveillance systems and that two countries had as many as nine systems.
For example, in addition to maintaining separate routine surveillance systems for multiple diseases within the country and at the border, Madagascar maintains surveillance systems to support independent programs to control malaria; tuberculosis and leprosy; HIV/AIDS and other sexually transmitted diseases; plague, schistosomiasis, and cysticercosis; and diseases targeted by the Expanded Program on Immunization. While industrialized countries have more resources and expertise to cope with the resulting duplication of effort, multiple reporting systems tax developing countries’ weak public health systems. As we observed during our fieldwork in Africa and our review of the 19 WHO-sponsored assessments, overburdened individuals at the lowest levels of the health system are frequently required to do everything from caring for patients to filling out reporting forms for several disease surveillance programs. These individuals may often have to choose between their responsibilities for patient care and filling out reporting forms. The accuracy, timeliness, and completeness of the disease surveillance data collected and reported may therefore be compromised. The disease-specific nature of these programs also impairs the ability of national governments to analyze overall disease trends. In Madagascar, for example, the WHO-sponsored assessment of the national surveillance system found that there was no central point for analyzing (or responding to) disease information; each of the country’s multiple surveillance programs maintained its own reporting chain. Unclear lines of authority make it difficult to know whom to contact and who is responsible for which tasks in the event of an outbreak. Such problems exist in both industrialized and developing countries. 
For example, a Canadian government report critiquing the national response to a 1998 salmonella outbreak in that country noted that a key local official did not know whom to contact at the national level and that national officials were not sure who at their agency was responsible for handling the issue. As a result, vital information about the scope of the outbreak was delayed. Uncertainty about what to report, when, and to whom was also evident in the 1999 West Nile virus outbreak in New York City. Many of the assessments of African surveillance systems that we reviewed cited weakness in this area as an important problem, as did World Bank and WHO officials. Disease surveillance systems in developing countries neither take full advantage of nor coordinate the contributions that laboratories can make to surveillance. Few developing countries have public health laboratories, which means that testing to confirm outbreaks must compete with testing to support individual patient-care decisions. Laboratories and epidemiologists often report to separate sections of a nation’s health ministry, resulting in poor communication between those who test disease specimens to confirm diagnoses and those who analyze disease outbreaks and trends. Finally, private health care providers, who play an increasingly important role in many developing countries, often do not participate in surveillance programs. One health official in an urban area in Tanzania noted, for example, that her efforts to monitor local disease trends were substantially handicapped by the fact that more than 80 percent of the population in the area now seek care through private clinics. Her efforts to obtain surveillance information from these clinics had met with limited success.
Another Tanzanian official working in a rural area noted that he had exerted considerable effort in building relationships with traditional healers to improve his awareness of local trends and events and had had some success, but that not all public health officials could be expected to do the same.

Uncertain Linkages Between Surveillance and Response

Surveillance is further constrained by uncertain linkages between data collection, analysis, and response. In the United States, physicians are often unaware of the need to gather information necessary for surveillance efforts and may not have had any education regarding the criteria used to launch a public health investigation. One WHO official observed that overburdened health care workers in developing countries are frequently not motivated to collect disease data because they do not see any evidence that the information is being applied or because no one has explained to them why it is valuable. A Malawi health official said that some health workers had simply thrown away the registers in which they were supposed to record data on their patients. In Zimbabwe, according to a national health official, clinic data on surges in malaria incidence often do not reach the appropriate authorities until many people have become sick or died because the clerks responsible for transmitting this information are unaware of its urgency. The information generated by many developing country systems often does not produce a response because it is not timely or reliable enough to be useful. For example, during the 1990s, several sub-Saharan African countries introduced broadly targeted health management information systems to consolidate data collection and analysis on disease incidence and a variety of other health issues such as vaccination rates.
World Bank and WHO officials commented that, while useful for other purposes, these information systems had often proven too broad in scope, cumbersome in detail, and slow to be used as effective surveillance tools. In fact, many national surveillance assessments we reviewed indicated that, despite attempts to use these systems as a means of simplifying disease reporting, they had become yet another parallel disease reporting system. Several officials with whom we spoke said that routine reporting systems often do not provide data that can be used to make long-term disease control management decisions, even though they were designed with this purpose in mind. For example, an official at the Tanzanian health ministry said that data from the country’s health management information system are not reliable enough to be used for this purpose. Tanzanian government officials also observed that limitations in the routine reporting system have led them to create a separate system for gathering information on disease outbreaks through weekly telephone calls to regional-level officials within the country. In addition, the surveillance systems that developing countries rely upon most heavily (routine reporting by health care providers) cannot, by themselves, fully inform health care decision-makers about disease trends and events. Experts at WHO, CDC, and USAID commented that supplementary efforts, such as long-term demographic surveys and analyses of vital statistics (births and deaths), can make major contributions to understanding disease trends. CDC officials stated that the recordation and use of vital statistics should be a priority for every country and that such activities should be linked to disease surveillance. However, developing countries seldom invest funds in supplementary studies and often do not record vital statistics. Effective outbreak investigations also can make substantial contributions to understanding disease trends. 
For example, mapping the location of infected households and tracing the contacts of sick people help identify modes of transmission and risk factors. Health authorities can use this information to formulate an appropriate response to the current outbreak and forestall future outbreaks of the same illness. However, developing countries often lack the capacity to conduct thorough outbreak investigations.

Weaknesses in Developing Country Systems Impair All Facets of Global Surveillance

Weaknesses in developing country systems reduce the ability of public health authorities at every level to understand and control infectious disease threats. These shortcomings limit the success of ambitious international programs such as the polio eradication effort, and impair the routine surveillance of other diseases and the identification and control of outbreaks, newly emerging diseases, and antimicrobial resistance. The surveillance achievements recorded by programs such as the polio eradication effort have been possible only because intensive international assistance has been given to developing countries so that they can participate in these programs. In spite of this assistance, poor surveillance in developing countries has continued to limit the ability of these programs to achieve their goals. For example, according to CDC, four countries in southern Africa were unable to meet international expectations in 1999 for detecting cases of acute flaccid paralysis, a key indicator of polio surveillance quality. Seven countries in the region fell short of the targeted 80-percent rate for collecting stool samples from suspected cases. The African region as a whole performed more poorly than any other, detecting less than the target number of potential polio cases and attaining less than the 90-percent goal for completeness of reporting.
According to CDC, completing the global eradication effort is complicated by systemic weaknesses in the remaining endemic areas, which are located primarily in sub-Saharan Africa and South Asia. Ineffective routine surveillance seriously compromises the international community’s ability to understand global disease burdens and trends. As already indicated with regard to yellow fever, cholera, and dengue, the global incidence of many diseases is unknown. One WHO official noted that health authorities in Equatorial Guinea, which lies within the yellow fever endemic zone of Africa, had informed him that their country has never experienced an outbreak of yellow fever. This statement cannot be disproved because no surveillance for yellow fever exists in Equatorial Guinea. Even when adequate data exist to identify gross trends, the data generally are not adequate for in-depth analyses or informed decisions about targeting resources to achieve specific control objectives. Developing countries often cannot investigate or address outbreaks on their own. CDC’s investigative expertise, including laboratory support, is comparatively rare in the rest of the world. Many of the African surveillance assessments we reviewed indicated that outbreaks there are often not thoroughly investigated, if they are investigated at all. Health officials in countries we visited and at WHO headquarters in Geneva noted that serious outbreaks strain developing countries’ relatively weak public health systems, requiring them to request international assistance to cope. For example, India experienced an outbreak of plague in 1994 that resulted in hundreds of cases across the country, 56 deaths, and over a billion dollars in economic damage from the travel restrictions and trade embargoes imposed by other countries. The outbreak was severe in part because India had largely discontinued surveillance for plague. 
Health authorities did not respond to initial complaints of flea infestation and did not take appropriate measures to contain the outbreak. The disease spread to crowded urban slums where it progressed unchecked to its highly contagious, pneumonic form and became a serious national problem. Shortcomings in developing country systems also limit the global community’s ability to identify and effectively control newly emerging and reemerging diseases. Several factors combine to make the emergence of new pathogens more likely in developing countries. These include accelerating urbanization and overcrowding without benefit of adequate water supply and sewage systems, population displacement due to civil wars and other disasters, and increased human incursion into ecosystems where contact with pathogens that previously affected only animals or insects is more likely to occur. Developing countries are poorly equipped to conduct surveillance for such pathogens. For example, during the 1980s a bacterium long recognized as a cause of routine eye infections evolved into a pathogen capable of causing an extremely serious disease—Brazilian Purpuric Fever. Since its first appearance, cases of this disease have been documented in Brazil and Australia. Experts observe that other cases may have occurred, only to be misdiagnosed as meningococcal disease. According to experts at the State University of New York at Buffalo and CDC, outbreaks of Brazilian Purpuric Fever appear to have waned. However, no organized surveillance exists for this disease, and its actual global distribution is unknown. In Uganda, local health professionals at the scene of the fall 2000 Ebola outbreak did not at first suspect the disease, despite the fact that Ebola outbreaks had previously occurred in two neighboring countries.
Although antimicrobial resistance problems have emerged in industrialized countries, such problems are more likely to escape immediate attention and become severe in developing countries. Impoverished developing countries are particularly ripe breeding grounds for the unchecked spread of drug-resistant strains due to their citizens’ poor access to medical facilities; high rates of self-medication; economic, educational, and logistical difficulties in completing full courses of drug treatment; and limited drug oversight by governments. While disease experts generally regard global surveillance for antimicrobial resistance as inadequate, developing countries conduct the least ambitious programs in this area. These countries’ weak laboratories are a key constraint.

Impact of Improvement Initiatives Remains to Be Demonstrated

The international community has recently launched a number of initiatives that may improve global surveillance. First, the international community has made unprecedented commitments to achieving specific reductions in the burdens imposed by HIV/AIDS, malaria, and tuberculosis. These diseases present complex challenges, and substantial effort will be required to create surveillance systems for these diseases that will permit these initiatives to move forward as their sponsors intend. Second, WHO and other members of the global public health community have launched a number of broader initiatives intended to strengthen global capacity for surveillance of infectious diseases as a group. The impact of both sets of initiatives remains to be seen.

Recent International Commitments to Control HIV/AIDS, Malaria, and Tuberculosis Will Require Improved Surveillance

Malaria, tuberculosis, and HIV/AIDS have continued to grow as public health threats, especially in developing countries, despite years of organized international control efforts. All three diseases have their most severe impacts in sub-Saharan Africa.
Disease experts estimate that about 90 percent of malaria cases and 70 percent of HIV cases occur in sub-Saharan Africa. They believe that if current trends continue, Africa will also have more cases of tuberculosis than any other region by 2005. These diseases share several characteristics that make surveillance and response comparatively difficult. First, they are relatively difficult to identify; laboratory confirmation is required for certainty in diagnosing all three. Malaria, in particular, is easily confused with other febrile illnesses in the absence of laboratory analysis. HIV-positive people often become sick—and die—from “opportunistic” infections. The underlying cause of the patient’s illness may never be recognized. Further, humans can carry the pathogens that cause these diseases for extended periods without exhibiting overt symptoms. This is particularly problematic for HIV-positive persons, who can infect others despite their apparent lack of disease. Second, none of these three diseases elicits a clear and effective response from the human immune system. These immunological complexities have hampered the development of easily applied, effective, and comparatively inexpensive diagnostic tools, preventive measures, or treatments that would simplify surveillance and encourage commitment to control efforts. Vaccines that could effectively prevent these diseases have not yet been developed. Extended multidrug medication regimens are required to cure active tuberculosis and retard the development of AIDS symptoms in HIV-positive patients. In the case of tuberculosis, these regimens take months to complete, while in the case of HIV patients, they must be followed for the life of the patient. In the case of malaria, the limited ability of the human body to develop effective immunity means that persons living in endemic areas may become sick with this disease on repeated occasions throughout their lives and must therefore be treated repeatedly.
Notwithstanding these difficulties, the international community has, over the last few years, moved to adopt specific objectives for controlling these three diseases. In 1998, several organizations, including WHO, other U.N. organizations, and the World Bank, inaugurated campaigns to “Roll Back Malaria” and “Stop TB.” Since that time, effective advocacy by many parties has increased support for these initiatives and for international collaboration to combat HIV/AIDS. In July 2000, at the G8 summit in Okinawa, leaders of the major industrialized countries pledged to work toward achieving the following goals by 2010:

- Under “Roll Back Malaria,” to reduce global burdens of malaria by 50 percent.
- Under “Stop TB,” to reduce tuberculosis deaths and prevalence by 50 percent.
- As proposed by the U.N. Secretary General, to reduce the number of HIV/AIDS-infected young people (15 to 24 years old) by 25 percent.

In commenting on a draft of this report, the Department of Health and Human Services and the Department of State pointed out that at the July 2001 G8 summit in Italy, the industrialized countries pledged to provide at least $1.3 billion to support a new Global AIDS and Health Fund that would provide support for achieving these objectives. Public health experts observed that substantial improvements are needed to create the surveillance support necessary to achieve these and other targets. Since baseline estimates of the incidence of these diseases are subject to wide margins of error, the initiatives do not have a firm starting point from which to measure progress. For example, WHO estimates of the global incidence of tuberculosis are based on the work of a panel of disease experts that the organization called upon to analyze available data from 1997. The panel observed that the number of new cases occurring could have been as much as 21 percent lower or 40 percent higher than estimated.
Malaria experts observe that, because of the large margin of error in estimates of malaria incidence—which range from 300 million to 500 million cases—and the fact that many malaria cases and deaths are never diagnosed or reported, the Roll Back Malaria campaign also does not have a reliable baseline. HIV/AIDS data are similarly limited. For example, because AIDS typically appears in HIV-positive individuals years after they have been infected, HIV/AIDS surveillance systems commonly rely not only on surveillance for AIDS but on the administration of blood tests to specific populations, such as pregnant women, to provide information on HIV infection rates. However, according to the Joint United Nations Programme on HIV/AIDS and WHO, more than 40 percent of these national “sero-surveillance” systems, especially those in Africa, are of poor quality or completely nonfunctional. Surveillance shortcomings also make it difficult to implement control programs. For example, developing country surveillance systems often cannot identify people who need treatment for these diseases. WHO estimates that, in 1999, the 23 countries with the highest burden of tuberculosis successfully detected only about 44 percent of the active cases in their countries. WHO experts also commented that laboratories in developing countries frequently cannot be relied upon to provide accurate diagnostic tests for malaria. The WHO-sponsored assessment of Uganda’s surveillance system found that almost half of local health facilities could not accurately diagnose this disease. All three diseases tend to be unevenly distributed by region and population group, thus requiring improved surveillance to effectively target control efforts. HIV/AIDS experts, in particular, commented that more surveillance will be required to understand the character of HIV infection patterns and how they vary among disparate populations, including high-risk groups such as sex workers and their clients. 
HIV experts also observed that more surveillance information is needed on behaviors such as condom use so that effective strategies for limiting HIV transmission can be prepared. Because all three diseases have demonstrated a capacity for developing resistance to drugs, surveillance for antimicrobial resistance is also critically important. In fact, the international effort to eradicate malaria was abandoned in the late 1960s when it became apparent that both the malaria parasites and the mosquitoes that carry them were becoming resistant to the chemicals used for their control. WHO and the International Union Against Tuberculosis and Lung Disease, with support from other organizations, launched a Global Project to monitor Anti-Tuberculosis Drug Resistance in 1994. Under this project, a global laboratory network was created, with internationally recognized laboratories providing support (including quality assurance testing) for lower-capacity facilities. This project has produced information on the magnitude of the threat posed by resistant strains of tuberculosis. However, the most recent report on the project’s results includes data from geographic areas that include only about 28 percent of the reported tuberculosis cases in the world and two-thirds of the 23 high-burden countries targeted by the Stop TB campaign. A WHO tuberculosis expert commented that he would like to see the project’s geographic reach extended. Surveillance for antimicrobial resistance in malaria and AIDS patients is less organized. One malaria expert observed that data on resistance to malaria drugs are scarce, often outdated, and collected in ways that make data comparison and analysis difficult. WHO and CDC officials observed that developing country public health systems need substantial strengthening in multiple areas to permit them to participate effectively in ambitious campaigns such as Roll Back Malaria and Stop TB.
These officials observed that programs that are developed to support the new disease-specific commitments should therefore be broadly targeted. Such broadly targeted efforts could facilitate across-the-board improvements in surveillance for all infectious diseases.

Broader Initiatives Aimed at Strengthening Global Surveillance

The international community has introduced a number of initiatives to strengthen overall global capacity for surveillance of infectious diseases as a group. These include efforts to (1) strengthen global outbreak management, (2) strengthen surveillance capacity within developing countries, and (3) improve surveillance coordination and cooperation at national and regional levels. While available evidence suggests that these initiatives have merit, they are still in their early stages.

Strengthening Global Outbreak Management

Prior to the mid-1990s, the international public health community’s approach to identifying and responding to major disease outbreaks was ad hoc in nature, resulting in poor responses to several significant outbreaks, including the 1994 plague epidemic in India and the 1995 Ebola outbreak in Zaire. WHO has since established a system for verifying outbreak reports, inaugurated a network to organize and coordinate outbreak responses, and is coordinating a process to revise the International Health Regulations to provide a firmer foundation for international collaboration in identifying and responding to threatening outbreaks. WHO launched an outbreak verification process in 1997 to help identify significant disease outbreaks around the world. This process involves collecting and verifying outbreak reports with national health authorities and others, assessing their significance, and disseminating information.
To further this effort, WHO worked with the Canadian government to develop the Global Public Health Intelligence Network, an electronic surveillance system that scans the Internet for reports of infectious disease in news sources, Internet discussion groups, biomedical journals, and elsewhere. WHO officials stated that they do not receive prompt information about every important outbreak because some countries control that information, and the Network only searches the Internet in a few languages. Given that outbreak reports vary in quality, WHO tries to verify reports to ensure that they present issues of potential international importance before calling attention to them. WHO generally focuses on outbreak reports from developing countries, where public health systems are weaker and more likely to require outside assistance. During the verification process, WHO may offer technical assistance, supplies, transport of specimens, or training on control measures, or help organize vaccination programs. Between November 1999 and October 2000, WHO investigated 228 outbreak reports, eventually confirming 169 significant outbreaks. The vast majority of these outbreaks occurred in developing countries; more than 40 percent occurred in sub-Saharan Africa. In April 2000, WHO inaugurated the Global Outbreak Alert and Response Network to help organize and coordinate international outbreak response. Various organizations have volunteered to participate, including national public health institutions such as CDC, as well as U.N. and nongovernmental organizations. While Network procedures for rapidly mobilizing technical and financial support and for governing response teams are still being finalized, WHO officials believe that their efforts have improved international outbreak coordination and response. 
There is now a central source of verified information on outbreaks, and rapid response teams have been deployed to countries that need assistance in investigating and controlling outbreaks. For example, WHO reported that its request for assistance in an investigation of an apparent acute hemorrhagic fever outbreak in Afghanistan in June 2000 produced offers from five institutions within 12 hours and the placement of a team in-country within a week of the outbreak being verified. A major test of Network operations occurred during the Ebola hemorrhagic fever outbreak in Uganda in the fall of 2000. At the request of the Ugandan government, WHO coordinated the international response, which included more than 20 Network partners. While this system can provide effective assistance when requested by countries experiencing outbreaks, the Network partners cannot require countries experiencing outbreaks to request assistance or to take recommended measures. In 1995, WHO initiated an effort to revise the International Health Regulations to create a firmer legal footing and a stronger institutional commitment to outbreak surveillance and response. WHO plans to have a draft revision ready for international review in late 2002, to be followed by World Health Assembly approval and acceptance by individual countries. Full implementation is projected for 2005. In launching this initiative, WHO officials noted that, for several reasons, the existing regulations’ disease reporting requirements (for cholera, plague, and yellow fever) have been widely ignored. Among other things, the regulations provide little incentive for reporting. Although WHO often organizes international assistance to help countries investigate or control significant outbreaks, the regulations do not commit WHO or the international community to provide such assistance. 
In addition, the regulations do not protect reporting countries against trade and travel restrictions that national governments may impose against countries affected by serious disease outbreaks. While such restrictions may be justified in some cases, disease experts have found that the restrictions are sometimes excessive. For example, in 1998, the European Commission banned imports of fresh fish from four countries in East Africa during a cholera epidemic despite WHO and U.N. Food and Agriculture Organization statements that the fish posed no health risk if cooked, dried, or canned properly. Although the two organizations advised the Commission that trade restrictions were not necessary or effective in protecting consumers, the ban continued for 6 months. Key changes to the International Health Regulations would include the following:

- Redefining reporting requirements to replace the focus on identifying all occurrences of a few specific diseases (no matter how minor) with a new focus on identifying all “events of urgent international health importance” (i.e., outbreaks of any disease that may impose adverse consequences on other countries).
- Authorizing WHO to define a range of acceptable protective measures that may be employed by countries in response to outbreaks. This provision is directed at providing reporting governments with some assurance that they will not be harmed by unreasonable trade sanctions. For example, WHO would provide guidance as to whether goods entering a country from an area experiencing an outbreak should be inspected, treated, destroyed, or refused entry.
- Obligating WHO—and by extension, the international community—to respond to outbreak reports by helping reporting countries assess and control outbreaks that may have adverse impacts beyond their borders.
- Defining a set of core requirements for countries in carrying out surveillance, notification, and response.
In commenting on a draft of this report, the Department of Health and Human Services stated that the proposed revisions offer an important channel for pursuing improvements in global surveillance, but the department added that many countries will need assistance to achieve basic surveillance, notification, and response capabilities. WHO added that the revision exercise has recently gained impetus through endorsement from the World Health Assembly in its spring 2001 session and that the number of countries actively involved in the negotiations has increased.

Initiatives to Strengthen Surveillance Capacity in Developing Countries

WHO, CDC, USAID, other foreign assistance agencies, and developing country governments are collaborating in a number of efforts to improve developing country surveillance and response capacity. These include efforts to improve laboratory and epidemiological capacity and to increase disease-mapping capability. While the global health community has focused on creating laboratory systems that can provide reliable support for high-priority efforts such as polio eradication and influenza control, comparatively less effort has been devoted to broader laboratory improvements. Well-functioning laboratory systems need trained personnel, adequate facilities and equipment, quality assurance programs to ensure accurate test results, and participation from laboratories with greater levels of expertise to answer complex or unusual questions. WHO coordinates several broadly targeted training and quality assurance programs designed to strengthen national public health laboratories, make cost-effective laboratory technology available, and develop and refine laboratory standards and reference materials. For example, WHO has organized voluntary quality assessment programs to monitor and improve the quality of laboratory performance in areas such as hematology and bacteriology.
These programs, administered by various prominent disease laboratories around the world, periodically send out samples for participating national laboratories to examine and identify. The testing results are scored and feedback is provided to participating laboratories. While the programs involve about 450 laboratories around the world, they do not reach all countries or all laboratories. Further, they are not fully funded by WHO, and the laboratories charged with operating them have had to cover most of their operating costs. Some of WHO’s regional offices have also begun investing in programs to strengthen national laboratories in their regions. In 2001, WHO, with support from the city of Lyon, the Government of France, and the Merieux Foundation, established a new program to strengthen laboratory and epidemiological capacities for handling disease outbreaks in developing countries. Intended to serve 45 developing countries over the next 5 years, the program’s first phase began in April 2001, with 15 senior staff from national public health laboratories in 7 French-speaking African countries. During their 2-year course of study, participants will be expected to develop detailed plans for addressing the needs of their laboratories. Plans are for later trainees to come from the Middle East and North Africa, the Baltics and Central Asia, and possibly South Asia and additional African countries. In commenting on a draft of this report, USAID pointed out that it is working with the new program in Lyon to develop a Quality Control/Quality Assurance Program for national laboratories in Africa. International networking is an effective way to provide developing countries with access to more highly specialized laboratory services as well as assistance in improving the quality of their own laboratory services. Such international networks are a prominent feature of some disease-specific initiatives, including polio eradication and influenza control.
WHO has created a system of Collaborating Centers, in part to ensure that developing countries can access support services when needed. WHO currently maintains a worldwide system of more than 270 Centers that focus on infectious diseases. However, as shown in figure 2, Collaborating Centers tend to be concentrated in industrialized countries. Relatively few are located in Africa, despite the high burden of infectious diseases on that continent. With 38 Collaborating Centers, CDC is the single largest contributor of expertise and resources to this system. In 1999, WHO issued a report that identified a number of shortcomings in the Collaborating Centers system, including a lack of consistency in the criteria for selecting centers and the absence of a systematic means for evaluating their activities. WHO found that some Collaborating Centers contribute little to international disease control efforts. WHO is amending its procedures for working with the Centers to address these shortcomings through a more rigorous and consistent designation process, joint preparation of Center work plans, closer monitoring and evaluation, and the development of a global database to meet the needs of national and international health authorities. WHO also continues to work with Collaborating Centers and other institutions to encourage the growth of existing networks for sharing information on particular diseases and initiatives to establish additional networks. International public health officials have long recognized the need to develop strong epidemiological skills in countries and institutions around the world. CDC is widely acknowledged as having the strongest institutional capabilities for investigating and resolving complex disease management challenges. 
Since its founding in 1951, CDC’s Epidemic Intelligence Service has provided approximately 2,300 health professionals from the United States and elsewhere with the skills to investigate disease events and trends and improve surveillance systems. At the request of national governments, CDC, WHO, USAID, the Rockefeller Foundation, and the European Union have helped establish 27 additional training programs in applied epidemiology worldwide, which are modeled after CDC’s Epidemic Intelligence Service. According to CDC, non-U.S. programs, about half of which are located in lower-income countries, had trained over 900 people as of January 2001. The common goals of these programs include (1) developing a cadre of national public health professionals, (2) providing essential epidemiological and public health services to the country during and after training, and (3) building regional and international linkages between countries to support public health response and training. According to public health experts, an underlying goal is to develop an information-based culture for public health decisionmaking in every country. A CDC-sponsored evaluation of five of these programs in 1998 found that epidemiologists trained by the programs have had a positive impact on the quality of their national public health programs. For example, graduates have helped (1) improve surveillance system procedures and outbreak investigations, (2) develop local surveillance capacity, and (3) design research programs that influenced national health policy decisions. According to CDC and WHO staff, graduates of these programs made important contributions to addressing recent outbreaks of Ebola in Uganda and Rift Valley fever in Yemen and Saudi Arabia. Many of the disease experts we spoke with cited continued expansion of these programs as a key element in global efforts to improve surveillance capacity and performance. 
However, a low student-to-mentor ratio is one key factor in the success of applied epidemiology training programs, and this places a limit on the speed at which such programs can be expanded. Twenty of the programs currently in existence were inaugurated within the last decade. For example, programs in Brazil and the Indian state of Tamil Nadu have just begun, while a program in China is still in the planning stages. These programs will take many years to have a significant impact. Increasing capacity for mapping disease information CDC, the WHO Regional Office for the Americas, and WHO headquarters (in collaboration with UNICEF) have all developed computer software to generate maps of disease conditions in specific geographic areas that can help inform decisionmaking. Over the past decade, these disease-mapping systems have had a positive effect on surveillance in developing countries, especially in supporting disease-specific initiatives. For example, the WHO/UNICEF HealthMapper application was used to support the guinea worm eradication and river blindness elimination efforts and is beginning to be used in global efforts against malaria and HIV/AIDS. Experts believe that there is great potential for employing such systems to predict disease outbreaks and trends in relation to climate and weather patterns. However, they note that such systems are constrained by the quality of available data on diseases and underlying features such as population distribution and the locations of health facilities and water supplies, as well as limited access to satellite-generated information. Coordinating Surveillance Operations The international community has initiated efforts to expand coordination of surveillance at the national level, especially in developing countries, and within regions. These efforts can help reduce reporting burdens and make better use of limited resources. 
With assistance from CDC, WHO’s Regional Office for Africa launched the Integrated Disease Surveillance and Response (IDS) initiative in 1998 to improve linkages between surveillance and response by generating more accurate, timely, relevant, and complete data. In commenting on a draft of this report, USAID added that it has also assisted in launching this initiative, making several grants to WHO’s Regional Office for Africa to support relevant activities. Although the World Health Assembly has not officially endorsed IDS, a number of countries and regions of the world are also seeking greater integration of their surveillance operations. IDS is not intended to replace disease-specific programs. Rather, it seeks opportunities for pooling funds and personnel to improve surveillance for multiple diseases. While the long-term goal is to improve coordination among all surveillance programs, the initiative is presently directed primarily at encouraging greater cooperation in surveillance for epidemic-prone diseases, such as cholera, and vaccine-preventable diseases, such as measles. Evidence suggests that the initiative may have a favorable impact. For example, according to WHO, IDS planning has enhanced coordination and support for surveillance within public health ministries in at least three African countries. CDC found that 26 African countries had already begun to employ polio eradication resources to perform surveillance for other diseases, without impairing the quality of polio surveillance. However, implementing IDS presents significant challenges and will require substantial time and effort. For example, baseline assessments of African surveillance systems began in late 1998. As of April 9, 2001, only 10 of 46 countries in WHO’s Africa region had both completed assessments and developed plans for addressing weaknesses. CDC and WHO took several years to develop generic surveillance guidelines that can be used to put these plans into action. 
The guidelines were sent to WHO’s Africa Regional Office in the summer of 2001. CDC officials observed that this slow pace reflects the inherent difficulties in creating manageable systems that satisfy multiple stakeholders. For example, IDS requires agreement on issues such as how to reduce reporting burdens by requiring routine reporting of only “essential information.” However, disease-specific program managers typically have a very broad definition of the term “essential information” when it comes to diseases for which they are responsible. CDC officials also noted that the IDS negotiations have required national officials to agree on issues that they have never before addressed, such as defining threshold levels to determine what constitutes an outbreak and creating procedures for outbreak response. Public health authorities and others are also working on creating regional surveillance networks. For example: The Pacific Public Health Surveillance Network was established in 1996 to improve surveillance and response among the Pacific Community’s 22 member states and territories. Network activities include (1) an Internet system for sharing information on disease trends and events, and (2) diagnostic and other types of assistance to isolated health care facilities. The network has begun to function as an outbreak response coordinator and is working to assemble a regional laboratory system to support timely and appropriate outbreak response. Countries in the Amazon basin and the “Southern Cone” of South America have been working since 1998 to create laboratory networks to improve surveillance of new, emerging, and reemerging infectious diseases. Because of these efforts, participating laboratories have identified an increasing number of Hantavirus pulmonary syndrome cases, including in areas where the disease had not previously been recognized. Participating countries are emphasizing integration of epidemiologists and laboratory personnel to advance network goals. 
With CDC and Department of Defense support, several countries in Southeast Asia are working to establish a regional network to improve outbreak detection and response. Concluding Observations The global disease surveillance framework is dominated by networks directed at providing information on specific disease threats. The framework supplies comparatively good information when demanded by well-supported, goal-oriented disease control initiatives. Surveillance capacity for other diseases is comparatively weak, and these weaknesses are most acute in developing countries. The continued weakness of developing country surveillance systems not only impairs global surveillance operations, but necessitates the application of substantial resources to create effective global systems each time the international community identifies an additional priority disease target. It also requires institutions such as CDC to devote resources to respond to outbreaks in developing countries that exceed local authorities’ capacity. To date, while facilitating the relatively rapid achievement of disease-specific results, the creation of additional surveillance systems to serve new initiatives has left developing countries’ underlying surveillance problems unresolved. International public health officials concerned about the overall threat of infectious disease are seeking to take advantage of the global community’s apparent willingness to commit itself to achieving measurable progress against three major disease threats—HIV/AIDS, tuberculosis, and malaria—to support broader systemic improvements in developing country surveillance and response capacity. These broad improvements may eventually reduce the need for disease-specific campaigns. 
However, given the need to demonstrate progress against these three diseases in particular, the extent to which the global public health community can manage the new disease-specific initiatives in a manner that significantly improves surveillance for all infectious disease threats remains to be demonstrated. USAID’s and the Department of Health and Human Services’ comments on a draft of this report offer additional perspectives on the challenges to be faced in developing strategies for responding to specific disease threats while also addressing overall weaknesses in surveillance capacity. USAID noted the failure of past disease-specific initiatives (like smallpox eradication) to leave a lasting positive impact on surveillance capacity in developing countries. The agency is attempting to ensure that its ongoing polio-eradication activities advance the eradication program while also upgrading developing countries’ capacity for monitoring and responding to other diseases. USAID also observed that many of the weaknesses in developing country systems documented in this report require donor attention outside the range of disease-specific programs. The Department of Health and Human Services observed that while expanded efforts to improve surveillance and response capacity for HIV/AIDS, malaria, and tuberculosis are clearly warranted, other significant infectious disease threats also need attention. The department concluded that both disease-specific and cross-cutting programs are needed, and that these programs can and should be carried forward in ways that are mutually supportive. Agency Comments and Our Evaluation We received written comments on a draft of this report from the Department of Health and Human Services, the Department of Defense, WHO, USAID, the National Aeronautics and Space Administration, and the World Bank. The World Bank’s letter was transmitted through the Department of the Treasury. 
These comments are reprinted in appendixes III through VIII, along with our evaluations, where appropriate. The Department of State provided oral comments. In addition, WHO’s Department of Communicable Disease Surveillance and Response and CDC provided technical comments, which we have incorporated where appropriate. In general, the agencies concurred with the report’s findings. The Department of Health and Human Services commented that the report presents an accurate and thorough evaluation of global infectious disease surveillance. In their oral comments, officials from the State Department’s Bureaus for International Organization Affairs and Oceans and International Environmental and Scientific Affairs stated that the report accurately portrayed the issues and obstacles that the international community faces in dealing with infectious disease surveillance. The Department of Health and Human Services and USAID elaborated upon the report’s concluding observations concerning the challenges to be faced in pursuing both disease-specific and more broadly focused improvements in surveillance capacity. We expanded our concluding observations to reflect these comments. USAID, the Department of Defense, and, to a lesser extent, the World Bank, WHO, and the Department of Health and Human Services provided additional information on their contributions to building global surveillance capacity. USAID and the Department of Defense, in particular, said that the draft report did not adequately describe their efforts to improve global surveillance. USAID highlighted its efforts to assist developing countries in developing surveillance capacity outside the bounds of disease-specific initiatives. The Department of Defense cited relevant activities being undertaken through the Department’s Global Emerging Infections Surveillance and Response System. 
The World Bank pointed out that as part of its emphasis on health, it is actively working with a number of governments to strengthen national surveillance systems. The Department of Health and Human Services cited CDC’s global strategy paper Working with Partners to Improve Global Health: A Strategy for CDC and ATSDR—a document that provides extensive information on CDC activities that contribute to strengthening global surveillance capacity. Where appropriate, we added information on these agencies’ efforts to the report. However, the report was not designed to provide a comprehensive accounting of all worldwide efforts. We refer the reader to the appendixes for additional information as provided by the agencies. We are sending this report to interested congressional committees, the Secretary of the Treasury, the Secretary of State, the Secretary of Health and Human Services, the Secretary of Defense, the Administrator of USAID, the Administrator of the National Aeronautics and Space Administration, and the Director General of the World Health Organization. We will also make copies available to other interested parties on request. Please contact me at (202) 512-8979 if you or your staff have any questions concerning this report. An additional GAO contact and staff acknowledgements are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology At the request of the Chairmen and Ranking Members of the Senate Subcommittee on Foreign Operations, Committee on Appropriations, and the Senate Subcommittee on African Affairs, Committee on Foreign Relations, we evaluated the global infectious disease surveillance framework. Specifically, we (1) examined the surveillance framework’s evolution and current operations, (2) identified factors that constrain its performance, and (3) assessed several initiatives designed to improve global infectious disease surveillance and response. 
To determine the surveillance framework’s evolution and current operations, we interviewed officials responsible for international surveillance-related activities at World Health Organization (WHO) offices, including WHO headquarters in Geneva, Switzerland; the Pan-American Health Organization (the WHO Regional Office for the Americas) in Washington, D.C.; and the Regional Office for Africa in Harare, Zimbabwe. We interviewed officials at various U.S. agencies, including the Centers for Disease Control and Prevention (CDC), the National Institutes of Health (both of which are constituent elements of the Department of Health and Human Services), the U.S. Agency for International Development (USAID), the Armed Forces Medical Intelligence Center, the Walter Reed Army Institute of Research, the National Aeronautics and Space Administration, and the White House Office of Science and Technology Policy; and at multilateral development institutions, including the World Bank. We interviewed disease experts in academia and officials at nongovernmental organizations such as the Association of Public Health Laboratories. We reviewed the International Health Regulations, as well as documents and studies from WHO and other sources pertaining to international efforts to control specific diseases and guide surveillance. We also attended conferences dealing with international infectious disease control and surveillance issues. To identify factors that constrain the performance of the global disease surveillance framework, we interviewed the officials listed above and conducted fieldwork in four African countries—Malawi, Tanzania, Uganda, and Zimbabwe. We selected these countries from a larger group of African countries that had recently conducted assessments of their national disease surveillance systems. 
We limited our fieldwork to Africa because of interest expressed in this region by the requesters of this work, as well as Africa’s infectious disease burden, the weak condition of most African health care systems, and the concerted efforts under way to improve surveillance in this region. While in Africa, we interviewed officials at national health ministries; multilateral agencies, including WHO country and regional offices, the World Bank, and the African Development Bank; foreign assistance and technical agencies from the United States and other countries, including USAID and CDC; and nongovernmental organizations active in the health sector. We reviewed documentation on surveillance systems in each country and discussed these countries’ experiences with recent disease outbreaks. We also visited health facilities in each country, including central and district hospitals and laboratories, research institutions, local clinics, and designated surveillance sites for specific diseases such as malaria. At each site, we observed conditions and discussed with knowledgeable officials the ways in which surveillance is conducted, the extent to which surveillance data are analyzed and used, and factors that constrain surveillance activities. In addition, we systematically reviewed the 19 assessments of surveillance systems in African countries that WHO, together with CDC and national health authorities, had completed as of April 2001. We also reviewed studies of surveillance problems in developing and industrialized countries, including the United States and Canada. To assess initiatives designed to improve global infectious disease surveillance and response, we interviewed WHO, World Bank, CDC, USAID, and other officials to identify and discuss key initiatives currently under way to improve regional and global surveillance. 
When pertinent, we also asked national officials we met during our fieldwork about their involvement in these initiatives, particularly WHO’s Integrated Disease Surveillance and Response effort in the Africa region. We reviewed documents describing the purpose, status, and outcomes to date (where appropriate) of these programs. For our review of WHO efforts to improve international outbreak detection and response, we collected and analyzed information from WHO on disease outbreaks that had been entered in its Outbreak Verification List database from November 1999 through October 2000, including detailed case histories of the international response to a small number of these outbreaks. We also collected and reviewed information on outbreaks from other sources—including ProMED, an Internet service of the International Society for Infectious Diseases. We did not address specific surveillance problems that arise in countries or regions affected by armed conflict or the complex humanitarian emergencies that such conflicts often produce. As noted in our July 2000 report on surveillance, health care to populations affected by such emergencies is typically provided by international and nongovernmental organizations rather than by national governments, and these organizations face obstacles and pressures that are not faced by public health systems functioning in nonemergency conditions. Since this report focused on the development and application of surveillance information, we did not explore the feasibility of improvements in diagnostic, preventive, or treatment technologies. We conducted our work from July 2000 through June 2001 in accordance with generally accepted government auditing standards. Appendix II: Disease Information This appendix provides descriptive information on the diseases mentioned in the body of this report. The information is derived primarily from WHO and CDC documents. 
Brazilian purpuric fever, first observed in 1984, is caused by an evolved form of a bacterium that causes a common eye infection, conjunctivitis. In its evolved form, this pathogen can invade the bloodstream and cause a lethal infection characterized by high fever, shock, and a severe bleeding disorder. Outbreaks of the disease appear to have waned. The factors that caused the disease to suddenly appear and then seem to disappear have yet to be determined. According to disease experts, northern Africa and other parts of the world where the original form of the bacterium in question is common are potentially at risk for epidemics of this disease. Chagas disease is caused by a parasite transmitted by insects, by transfusions of contaminated blood, or from mother to fetus. The acute phase of the disease often produces no symptoms, or only inflammation at the site of infection and flu-like symptoms. If caught in its early stages, the parasite can be seen in the blood and the disease can be cured with drugs. After that, the parasite moves into body tissue, where it cannot be treated and can cause severe, life-threatening conditions 10 to 30 years later, including heart disease. Up to 18 million people in 18 countries in South and Central America are infected. As many as 100,000 infected people, mostly immigrants, are estimated to reside in the United States. Cholera is caused by a water- and food-borne bacterium. Infection results in acute watery diarrhea, leading to extreme dehydration and death if not addressed. Known vaccines and antibiotics have only limited impact on the disease; treatment focuses on rehydration. According to WHO, recent cholera outbreaks have killed 3.6 percent of those who become ill worldwide. Cholera is endemic in more than 80 countries. During the 1990s, global cholera reports varied from about 100,000 to about 600,000 cases per year. 
Cysticercosis is a parasitic infection caused by the pork tapeworm, whose eggs may be ingested in contaminated food and water. Inside the human body, the larvae hatch and form cysts in the organs, particularly the muscles, eyes, and brain. Although most cases are asymptomatic or mild, patients may experience vision problems, headaches, seizures, and brain swelling. The infection can be treated with drugs and sometimes surgery. The disease occurs worldwide but is found most often in rural, developing countries with poor sanitary conditions and where pigs are allowed to roam freely. Dengue fever, a mosquito-borne infection caused by four distinct but closely related viruses, is a severe, flu-like illness with specific symptoms that vary based on the age of the victim. Dengue hemorrhagic fever is a potentially lethal complication that may include convulsions. There is no vaccine for dengue fever, nor is there any treatment beyond supportive therapy. With treatment, fatality rates can be less than 1 percent; without it, they can exceed 20 percent. Dengue is endemic in more than 100 countries. Diphtheria is a respiratory disease caused by a virus-infected bacterium. Occurring worldwide, the disease is spread through human-to-human contact. Symptoms range from mild to severe. Diphtheria can be complicated by damage to the heart muscle or peripheral nerves. An effective vaccine is typically provided through national childhood vaccination programs. The disease is fatal 5 to 10 percent of the time, even when treated by administration of antibiotics and diphtheria antitoxin. Untreated, the fatality rate can be much higher. Ebola hemorrhagic fever, a viral disease, is transmitted by direct contact with the body fluids of infected individuals, causing acute fever, diarrhea that can be bloody, vomiting, internal and external bleeding, and other symptoms. There is no known cure, although some measures, including rehydration, can improve the odds of survival. 
Ebola kills more than half of those it infects. Identified for the first time in 1976, the Ebola virus is still considered rare, but there have been a number of outbreaks in central Africa. Guinea worm disease, formally known as dracunculiasis, is transmitted by drinking water contaminated with parasite larvae. The mature parasite travels through the body, usually emerging through the foot or leg. Perforation of the skin is accompanied by fever, extreme pain, nausea, and vomiting, and an infected person can stay ill for several months. Fatalities are rare, but secondary infection and permanent deformity can occur. There is no vaccine or drug to prevent infection or kill the worms; however, transmission of the disease can be halted through education and the provision of safe drinking water. The disease has been eradicated from several countries, but remains present in 13 African nations, according to CDC. Hantavirus pulmonary syndrome is caused by several strains of a virus that is transmitted by exposure to infected rodents. Symptoms include fever, fatigue, muscle aches, coughing, and shortness of breath; the onset of respiratory distress often leads to death. There is no specific treatment for the disease, other than appropriate management of respiratory problems. The virus was first identified in the southwestern United States in 1993, but several hundred cases have since been confirmed in other U.S. locations, Canada, and several countries in South America. Hepatitis B is a viral infection of the liver that is readily transmitted by contact with the body fluids of an infected person. In many developing countries, most children become infected. The virus may cause an acute illness, as well as a life-long infection that carries a high risk of serious illness or eventual death from liver cancer or cirrhosis. An effective vaccine is available, and WHO has recommended that it be added to routine childhood immunization programs in all countries. 
About 2 billion people worldwide have been infected with the virus, and about 350 million people remain chronically infected. Human immunodeficiency virus (HIV) causes acquired immunodeficiency syndrome (AIDS), a disease of the immune system. HIV is transmitted through contact with the body fluids of an infected person or from mother to baby. Infected adults may be asymptomatic for 10 years or more. Because the immune system is weakened, there is eventually greater susceptibility to opportunistic diseases such as pneumonia and tuberculosis. Drugs are available that can prevent transmission from pregnant mothers to their unborn children and can help slow the onset of AIDS. As of December 2000, an estimated 36.1 million people worldwide were living with HIV/AIDS and about 21.8 million had died. Influenza, or flu, is a highly contagious respiratory infection caused by three types of virus, of which two (types A and B) can reach epidemic proportions and are found worldwide. Symptoms include fever, cough, sore throat, runny or stuffy nose, headache, muscle aches, and often extreme fatigue that may last 1 to 2 weeks. Severe complications such as pneumonia sometimes occur in children, the elderly, and other vulnerable populations. There is an influenza vaccine, but the viruses change so quickly that the vaccine must be updated every year. Several drugs exist to prevent and treat influenza. Leprosy is a chronic bacterial infection. The exact mode of transmission is not fully understood. Primarily affecting the skin, nerves, and mucous membranes, leprosy causes deformities of the face and extremities after many years but can be cured with drugs. About 680,000 new cases were reported in 1999. India, Myanmar, and Nepal account for about 70 percent of all leprosy cases. Lyme borreliosis, or Lyme disease, is a bacterial illness transmitted by ticks. The pathogen was first detected in the United States in 1982 and identified as the cause of the disease. 
The area around the tick bite sometimes develops a “bull’s eye” rash, typically accompanied by fever, headache, and musculoskeletal aches and pains. There is an effective vaccine for adults at high risk. If the disease is not treated with antibiotics, arthritis, neurologic abnormalities, and—rarely—cardiac problems can follow. The disease is rarely if ever fatal and is endemic in North America and Europe. Lymphatic filariasis is a parasitic disease transmitted by mosquitoes. The infection causes severe pathology of the lymph system resulting in elephantiasis, or gross swelling, of the limbs and genitals and organ damage. Diagnostic tools have improved, and more recently drug treatment options have replaced mosquito control as a strategy for eliminating the disease. At least 120 million people in 80 countries worldwide are infected, in both rural areas and densely populated urban slums. Malaria is a parasitic disease, transmitted by mosquitoes and endemic in 101 countries and territories. Symptoms include fever, shivering, joint pain, headache, repeated vomiting, severe anemia, convulsions, coma, and in severe cases death. Malaria is becoming increasingly resistant to known primary drug treatments. About 40 percent of the world population is considered at risk for malaria. Ninety percent of malaria cases are in sub-Saharan Africa, but the disease is now reemerging in countries where it was once under control. Measles, a highly contagious viral disease, often strikes children and causes fever, conjunctivitis, congestion, and cough, followed by a rash. This disease is transmitted by human-to-human contact, and secondary infections often cause further complications. Sustained efforts to immunize children have reduced the prevalence of this disease, but it still occurs worldwide, with an estimated 30 million cases leading to approximately 900,000 deaths every year. 
Meningitis, a condition that may be caused by several disease agents, is an infection and severe inflammation of the fluid membranes surrounding the brain and spinal cord. Meningococcal meningitis, caused by a particular type of bacteria, is transmitted by human-to-human contact and is characterized by sudden onset of fever, headache, neck stiffness, and altered consciousness. There is a vaccine for this disease, but it loses its effectiveness over time and must be repeated. Untreated epidemics can incur fatality rates of over 50 percent, but epidemic fatality rates in the last 30 years have generally been in the 8 to 12 percent range. Epidemics of meningococcal meningitis are a frequent occurrence in Africa’s “meningitis belt,” which stretches from Senegal to Ethiopia. An estimated 500,000 cases and 50,000 deaths occur each year due to meningococcal meningitis. Pertussis, or whooping cough, is a highly contagious bacterial disease spread through respiratory droplets from an infected person. Symptoms include runny nose and sneezing, a mild fever, and a cough that gradually becomes more severe, turning into paroxysms of coughing that end in vomiting and exhaustion. Pertussis is treatable with antibiotics, and the pertussis vaccine is commonly administered as part of routine childhood immunization programs. Twenty million to 40 million cases with 200,000 to 300,000 deaths are reported worldwide every year. Most occur in developing countries. Plague, a severe bacterial infection, is usually transmitted to humans by infected rodent fleas (bubonic plague) and uncommonly by person-to-person respiratory exposure (pneumonic plague). Symptoms of bubonic plague include swollen, painful lymph glands (buboes), fever, chills, headache, and exhaustion. People with pneumonic plague develop cough, bloody sputum, and breathing difficulty. Plague is treatable with antibiotics. However, unless diagnosed and treated early, it is highly fatal. 
Approximately 1,000 to 4,000 cases of plague are reported each year, but these figures represent only a portion of the actual number of cases.

Poliomyelitis, or polio, is a virus transmitted through human-to-human contact. In most cases, there are no symptoms or only mild, flu-like symptoms. Five to 10 percent of cases can lead to aseptic meningitis, while only 1 percent of infections lead to the acute flaccid paralysis associated with polio. Although there is no cure, an effective vaccine is included as part of routine childhood immunizations. Fewer than 3,500 confirmed cases were reported in 2000, with transmission still occurring in up to 20 countries.

Rift Valley fever is a viral disease that primarily affects animals—including domesticated livestock—but can be transmitted to people by mosquitoes or contact with the body fluids of infected animals. Rift Valley fever usually causes a flu-like illness lasting 4 to 7 days, but about 1 percent of cases develop into a more severe hemorrhagic fever that has an approximately 50-percent fatality rate. An antiviral drug has been identified and is being tested, and vaccines are under development. The disease has occurred in many parts of Africa and, in September 2000, was for the first time reported outside of Africa, in Saudi Arabia and Yemen.

River blindness, or onchocerciasis, is a parasitic disease. Blackflies transmit the larvae of parasitic worms to humans, where they grow into adult worms with a lifespan of 12 to 15 years. These worms spawn millions of microscopic parasites that travel throughout the body, causing unbearable itching, skin disfigurement, and vision impairment or blindness. Treatment with the drug ivermectin kills the infant parasites but has very limited if any effect on adult worms. The disease is endemic in 37 countries, with nearly all cases in Africa.
Salmonella infection, or salmonellosis, is caused by a group of bacteria that may be present in contaminated foods—often raw or undercooked foods of animal origin. It causes acute diarrheal illness, for which treatment is usually not required. In some cases, however, the infection can spread in the bloodstream and cause death unless antibiotics are used. Over 2,200 strains of Salmonella bacteria have been identified, including some that have developed antibiotic resistance and are hence more difficult to control. The disease is common in both developed and developing countries.

Schistosomiasis, known in some regions as Bilharzia, is caused by five species of parasitic flatworms, called schistosomes. The flatworms, which are carried during part of their lifecycle by fresh water snails, penetrate the skin when people swim or wade in contaminated water. The flatworms grow inside the blood vessels and produce eggs that can damage the intestines, bladder, and other organs and eventually cause bladder cancer, kidney failure, or serious complications of the liver and spleen. Safe, cost-effective drugs are available to treat the disease. Schistosomiasis is endemic in more than 70 developing countries, infecting an estimated 200 million people, 20 million of whom have severe illness. Over 80 percent of the cases are found in Africa.

Shigellosis is a highly contagious, diarrheal disease caused by four strains of bacteria. One of these strains, an unusually virulent pathogen, causes large-scale, regional outbreaks of dysentery (bloody diarrhea) with mortality rates of 5 to 15 percent. Transmitted by human-to-human contact and contaminated food and water, this disease is common in crowded areas with poor sanitation and unsafe water supplies. In addition to diarrhea, patients experience fever, abdominal cramps, and rectal pain. The disease is treatable by rehydration and antibiotics, but antimicrobial resistance has become widespread.
All types of shigellosis together cause an estimated 600,000 deaths per year, mostly in developing countries.

Smallpox is a highly contagious viral disease transmitted from person to person, with a high mortality rate and a history of epidemics throughout the world. Patients experience fever, aching, and prostration, followed by a painful rash that spreads over the entire body and eventually leaves pitted scars and sometimes causes blindness. There is no effective treatment for the disease; however, the development of a vaccine enabled the worldwide eradication of smallpox by 1977. At the start of the eradication campaign in 1966, an estimated 10 million to 15 million cases occurred globally each year, resulting in more than 2 million deaths.

Tetanus, or lockjaw, is caused by a bacterium found in the intestines of many animals and in the soil. It is transmitted to humans through open wounds. Neonatal tetanus is a particular problem in newborn infants due to unsanitary birthing practices. Symptoms include generalized rigidity and convulsive spasms of the skeletal muscles. Tetanus can be treated with an antitoxin, and there is an effective vaccine, commonly included in childhood vaccination programs. It is fatal about 30 percent of the time and occurs worldwide. Neonatal tetanus causes an estimated 270,000 deaths each year, mostly in developing countries.

Tuberculosis is a bacterial disease that is usually transmitted by contact with an infected person. People with healthy immune systems can become infected but not fall ill—more than one-third of the world’s population is thought to be infected. Symptoms of tuberculosis can include a bad cough, coughing up blood, pain in the chest, fatigue, weight loss, fever, and chills. Several drugs can be used to treat tuberculosis, but the disease is becoming increasingly drug resistant. The available vaccine, commonly administered to children, has a limited effect. The disease is estimated to kill 2 million people each year.
West Nile fever is a mosquito-borne viral disease. Symptoms include fever, head and body aches, rash, and, in more serious cases, stupor, coma, convulsions, and paralysis. Death occurs in 3 to 15 percent of cases. There is no vaccine for the West Nile virus, and no specific treatment besides supportive therapies. The disease occurs in Africa, Eastern Europe, West Asia, the Middle East, and, since 1999, the United States.

Yellow fever is a mosquito-borne viral disease whose symptoms include fever, muscle pain, headache, loss of appetite, and nausea. Fifteen percent of patients progress to a toxic phase, which can include jaundice, abdominal pain, and bleeding from the mouth, nose, eyes, or stomach. The kidneys deteriorate and may fail. Half of patients who enter this phase die. There is no treatment for yellow fever beyond supportive therapies. A safe and highly effective vaccine for yellow fever is available but is often not included in national vaccination programs. Yellow fever is endemic in more than 40 countries in Africa and Central and South America and causes an estimated 200,000 cases of illness and 30,000 deaths each year.

Appendix III: Comments From the Department of Health and Human Services

Appendix IV: Comments From the World Health Organization

Appendix V: Comments From the United States Agency for International Development

GAO Comments

1. We reviewed the examples of relevant USAID activities provided on pages 3-5 of the agency’s written comments and inserted into the report references to those activities that could be included in the text. For example, we added a reference to USAID support to our existing discussion on “Coordinating Surveillance Operations.”

2.
We revised the draft report’s concluding observations to reflect USAID’s subsequent comments that (1) past disease-specific initiatives have failed to improve overall developing country surveillance capacity; (2) many of the weaknesses of developing country programs identified in the report require donor attention outside the range of disease-specific programs; and (3) if the balance of resource flows between disease-specific surveillance initiatives and routine surveillance remains heavily in favor of the former, then the ability of the donor community to support overall system strengthening will continue to be severely inhibited.

3. We retained the original language after consulting with experts on these diseases at CDC and the Case Western Reserve University School of Medicine.

4. We retained the original wording, with the qualification that the information cited was accurate as of April 9, 2001. As of this date, after detailed communications with WHO’s Africa Regional Office, we had received 19 completed assessments and 10 completed plans of action. We were informed, in addition, that health officials had conducted fieldwork in a 20th country, Kenya, but that their assessment report was not yet available. No change was made to reflect the comment that the goal of the Integrated Disease Surveillance and Response initiative “involved only 23 countries that requested inclusion in the initiative.” No reference to such requests was made to us during the course of our work with WHO or the countries involved in the initiative.

Appendix VI: Comments From the National Aeronautics and Space Administration

Appendix VII: Comments From the World Bank

Appendix VIII: Comments From the Department of Defense

Appendix IX: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the person named above, key contributors to this report were Ann Baker, Lynn Cothern, Kay Halpern, Lynne Holloway, John Hutton, Bruce Kutnick, and Tom Zingale.
According to the World Health Organization, infectious diseases account for more than 13 million deaths every year, including nearly two-thirds of all deaths among children under age 5. Infectious diseases present a substantial threat to people in all parts of the world, and this threat has grown in volume and complexity. New diseases have emerged, others once viewed as declining in significance have resurged in importance, and many have developed substantial resistance to known antimicrobial drugs. Infectious disease surveillance provides national and international public health authorities with information that they need to plan and manage efforts to control these diseases. In the mid-1990s, public health experts in the United States and abroad determined that global infectious disease surveillance was inadequate, and both the World Health Assembly and the President of the United States called for the development of an effective global infectious disease surveillance and response system. The strongest influence on the evolution of the current global infectious disease surveillance framework has been the international community's focus on specific diseases or groups of diseases. The international community has created diverse surveillance programs to support global and regional efforts to control particular diseases. Surveillance systems in all countries suffer from a number of common constraints. However, these constraints have their greatest impact in the poorest countries, where per capita expenditure on all aspects of health care amounts to only about three percent of expenditures in high-income countries. Surveillance in developing countries is often impaired by shortages of human and material resources. The international community recently launched several initiatives that may improve global surveillance. 
The community has committed itself to reducing the global burdens imposed by three diseases--tuberculosis, human immunodeficiency virus/acquired immunodeficiency syndrome, and malaria. The community has also begun more broadly targeted initiatives to upgrade laboratories, strengthen epidemiological capacity, and otherwise improve surveillance for infectious diseases as a whole.
Background

Individual Market Insurance under PPACA

Beginning January 1, 2014, PPACA required that health insurance plans, whether sold on or off an exchange, offer a comprehensive package of items and services—known as essential health benefits. At the same time, PPACA required most individuals to maintain minimum essential coverage for themselves and their dependents or pay a tax penalty—this requirement is commonly referred to as the individual mandate. Individuals who do not have other insurance coverage, such as from an employer, may satisfy this requirement by maintaining coverage under health plans offered in the individual market. Certain PPACA provisions affected the way individual market plans are marketed for consumers. In particular, PPACA standardized health insurance plans into four “metal” tiers of coverage—bronze, silver, gold, and platinum—which reflect out-of-pocket costs that may be incurred by an enrollee. Bronze plans tend to have the lowest premiums but leave consumers subject to the highest out-of-pocket costs when they receive health care services, while platinum plans tend to have the highest premiums and the lowest out-of-pocket costs. The generosity of each metal tier is measured by the plan’s actuarial value (AV). AV is expressed as the percentage of covered medical expenses estimated to be paid by the insurer for a standard population and set of allowed charges for in-network providers. The higher the AV percentage, the lower the expected enrollee cost sharing. For example, for a plan with an AV of 70 percent, it is expected that, on average, enrollee cost sharing under that plan will be 30 percent of the cost of care, while for a plan with an AV of 80 percent, it is expected that, on average, enrollee cost sharing under that plan will be 20 percent of the cost of care.
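The relationship between a plan's actuarial value and expected enrollee cost sharing described above is simple arithmetic, sketched below. This is only an illustration of that relationship, not an actuarial model; real AV determinations are made against a standard population (in practice, with CMS's AV Calculator), and the function and tier table here are illustrative.

```python
def expected_enrollee_share(actuarial_value: float) -> float:
    """The insurer is expected to pay the AV fraction of covered
    in-network costs for a standard population; enrollees, on
    average, pay the remainder."""
    if not 0.0 <= actuarial_value <= 1.0:
        raise ValueError("actuarial value must be between 0 and 1")
    return 1.0 - actuarial_value

# The four PPACA metal tiers and their statutory actuarial values.
METAL_TIERS = {"bronze": 0.60, "silver": 0.70, "gold": 0.80, "platinum": 0.90}

for tier, av in METAL_TIERS.items():
    share = expected_enrollee_share(av)
    print(f"{tier}: insurer pays {av:.0%} on average, enrollee cost sharing averages {share:.0%}")
```

So a silver plan (AV = 70 percent) implies average enrollee cost sharing of 30 percent, matching the example in the text.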
PPACA includes standards related to AV and assigns a specific actuarial value to each of the four metal tiers: bronze (AV = 60 percent); silver (AV = 70 percent); gold (AV = 80 percent); and platinum (AV = 90 percent). If an insurer sells a plan on an exchange, it must offer at least one plan at the silver level and one plan at the gold level. Insurers are not required to offer bronze or platinum versions of their plans in order to participate on exchanges. PPACA provisions also affected the way individual market plans are priced for consumers. For example, PPACA prohibited health insurers from varying health insurance plan premiums on the basis of factors other than age, geographic location, and tobacco use—for which limits were established. Specifically, the age factor used to adjust premiums may vary by no more than a 3-to-1 ratio for adults aged 21 and older, and insurers must use a uniform age rating curve to specify the rates across all adult age bands. Premium variation based on health status or gender was effectively prohibited.

Incentives to Shop for Plans Offered through Exchanges

PPACA does not require insurers to offer plans through the state or federal exchanges. Similarly, it does not require consumers to purchase plans through the exchanges; however, there are incentives for many consumers to do so. For example, certain consumers earning from 100 to 400 percent of the federal poverty level are eligible to receive premium tax credits that can reduce premium costs, but only for plans purchased through an exchange. Similarly, certain consumers earning from more than 100 percent to 250 percent of the federal poverty level are eligible to receive additional subsidies that help them pay for out-of-pocket costs, but only for silver plans purchased through an exchange. Also, the SBEs and FFE allow consumers to comparison shop for plans and enroll in a plan online, whether or not consumers are eligible for premium tax credits or cost sharing subsidies.
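The 3-to-1 age rating limit described above can be sketched with a small example. The age factors below are hypothetical stand-ins for a state's uniform age rating curve (the actual federal default curve assigns a specific factor to every age band); the point is only that the highest adult factor may not exceed three times the lowest.

```python
# Hypothetical age rating curve: premium factor by selected adult age.
# These factors are illustrative assumptions, not the federal default curve.
AGE_CURVE = {21: 1.00, 30: 1.14, 40: 1.28, 50: 1.79, 64: 3.00}

def within_3_to_1_limit(curve: dict) -> bool:
    """PPACA caps adult age rating at a 3-to-1 ratio of highest to lowest factor."""
    factors = list(curve.values())
    return max(factors) / min(factors) <= 3.0

def premium_for_age(base_rate: float, age: int, curve: dict) -> float:
    """Premium = the base (age-21) rate times the factor for the enrollee's age."""
    return round(base_rate * curve[age], 2)

assert within_3_to_1_limit(AGE_CURVE)
# A 64-year-old pays three times the 21-year-old rate under this curve.
print(premium_for_age(147.0, 64, AGE_CURVE))
```

Because every insurer in a state applies the same curve, the relative premium differences across adult ages are identical for all plans in that state, which is why the report's comparisons for a 30-year-old generalize to other adult age categories.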
Individual Market Consumers Generally Had Access to More Plans in 2015 Compared to 2014, and the Lowest-Cost Plans Were Available through Exchanges in Most Counties in Both Years

Individual market consumers in every county in our analysis had access to a variety of plan options each year, and the number of plans available to consumers generally increased from 2014 to 2015. For example, in 28 states for which we had reliable data for all plans (offered either on or off exchanges), the percentage of counties for which six or more plan options were available to consumers increased from 2014 to 2015 for three of the metal tiers—bronze, silver, and gold—and in 2015 consumers in every county in these states had access to six or more plans in each of these three metal tiers. Further, in 2015, among the 38 states where we focused our analysis on plans offered on an exchange, we found that consumers in 88 percent of the counties had access to six or more bronze exchange plans, consumers in 94 percent of counties had access to six or more silver exchange plans, and consumers in 71 percent of counties had access to six or more gold exchange plans. Not all consumers had access to platinum plans; however, the availability of platinum plans generally also increased from 2014 to 2015. (See table 1.) We also found that the lowest-cost plan options available in a county were available on an exchange in a majority of the counties included in our analysis. For example, among the 1,886 counties in the 28 states for which we had sufficiently reliable data for plans both on and off an exchange, we found that the lowest-cost silver plan option for a 30-year-old was available on an exchange in 63 percent of the counties in 2014 and in 81 percent of the counties in 2015—an increase of 18 percentage points.
The Range of Premiums Available to Consumers Varied among the States and Counties in Our Analysis in Both 2014 and 2015

Premiums Varied Widely among the States in Our Analysis, and from 2014 to 2015 Premiums Were More Likely to Increase than Decrease

The premiums for the lowest-cost plan options available in each state included in our analysis varied significantly from state to state. For example, in Arizona (a state for which the lowest-cost premiums were among the lowest in the country) the lowest-cost silver plan options for a 30-year-old were $147 per month in both years for plans both on and off an exchange. By contrast, in Maine (a state for which the lowest-cost premiums were among the highest in the country) the lowest-cost silver plan options for a 30-year-old were $252 in 2014 and $237 in 2015 for plans both on and off an exchange. Based on the full premium costs, on an annual basis in 2015 a 30-year-old in Arizona who was not eligible for a premium tax credit could have spent $1,082 less on the lowest-cost silver plan available to them compared to what the same consumer in Maine could have spent on the lowest-cost silver plan available to them. The findings were similar when we conducted our analysis of plans offered on exchanges in 38 states. Because each state in our analysis uses a uniform age rating curve to specify the rates across all adult age bands, each state would have the same relative differences in premiums for all adult age categories. (See table 2.) The premiums for the median-cost plan options available in each state included in our analysis also varied widely. For example, in both years Hawaii had among the lowest median premium costs for silver plans offered to a 30-year-old either on or off the exchange—$217 per month in 2014 and $180 per month in 2015. By contrast, in both years Colorado had among the highest median premium costs for such plans—$343 per month in 2014 and $369 per month in 2015.
In most states, the costs for the minimum and median premiums for silver plans increased from 2014 to 2015. For example, in the 28 states included in our analysis, from 2014 to 2015 the minimum premium values for silver plans available to a 30-year-old increased in 18 states, decreased in 9 states, and remained unchanged in 1 state. Similarly, in these same states the median premium values for silver plans available to a 30-year-old increased in 19 states, decreased in 8 states, and remained unchanged in 1 state. Further, in general, states with higher than average minimum premiums in 2014 were more likely to have declines in 2015 premiums than the states with lower than average minimum premiums in 2014. For example, the average minimum monthly premium value for the silver plan option for a 30-year-old in 2014 in the 28 states included in our analysis was $193 for plans offered on or off an exchange. Of the 13 states with 2014 premiums for this group of consumers that were greater than $193, eight had lower minimum premiums in 2015. Of the 15 states with 2014 premiums for this group of consumers that were less than $193, only one state had a lower minimum premium in 2015. When analyzing premium costs at the county level, we found that from 2014 to 2015, premiums were more likely to increase than decrease. For example, our analysis of the minimum premiums for silver plans in states where we analyzed data on plans offered either on or off exchanges found that premiums for a 30-year-old increased by 5 percent or more in 51 percent of the counties. During the same time period, premiums for these plans decreased by 5 percent or more in nearly 17 percent of the counties, and increased or decreased by less than 5 percent in 32 percent of the counties. The findings were similar when we repeated this analysis using the median premium value in each county and when we limited the analysis to plans offered on exchanges.
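The county-level comparison above buckets each county by whether its premium changed by 5 percent or more in either direction. That classification can be sketched as follows; the county premiums in the example are hypothetical placeholders, not GAO's data.

```python
def classify_change(premium_2014: float, premium_2015: float,
                    threshold: float = 0.05) -> str:
    """Bucket a county's year-over-year premium change the way the
    analysis above does: +/- 5 percent or more, or a smaller change."""
    change = (premium_2015 - premium_2014) / premium_2014
    if change >= threshold:
        return "increased 5% or more"
    if change <= -threshold:
        return "decreased 5% or more"
    return "changed less than 5%"

# Hypothetical county minimum silver premiums for a 30-year-old (2014, 2015).
counties = {
    "County A": (193.0, 210.0),   # up about 8.8 percent
    "County B": (220.0, 205.0),   # down about 6.8 percent
    "County C": (200.0, 204.0),   # up only 2 percent
}
for name, (p14, p15) in counties.items():
    print(name, classify_change(p14, p15))
```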
Because each state in our analysis uses a uniform age rating curve to specify the rates across all adult age bands, each state would have the same relative differences in premiums for all adult age categories. (See table 3.)

The Ranges of Premiums Available to Consumers Were Much Narrower in Some States Compared to Others

We found that the range of premiums—from the lowest to highest cost—available to consumers differed considerably for the states included in our analysis. For example, our analysis of premiums for silver plans available to a 30-year-old either on or off an exchange found that the ranges of premiums available were much narrower in Rhode Island compared to Arizona. In Rhode Island, 2014 premiums for plans ranged from a low of $241 per month to a high of $266 per month, a difference of 10 percent, and in 2015 ranged from a low of $217 per month to a high of $285 per month, a difference of 32 percent. By contrast, in Arizona, 2014 premiums for these plans ranged from a low of $147 per month to a high of $508 per month, a difference of 244 percent, and in 2015 ranged from a low of $147 per month to a high of $545 per month, a difference of 270 percent. In addition, between 2014 and 2015, the range from the lowest- to highest-cost premiums available by state became wider in 18 out of the 28 states included in this analysis. The findings were similar when we conducted our analysis for plans offered on exchanges in 38 states. We also found that the percentage difference between the minimum and maximum premium in the states included in our analysis was generally higher in states where the average number of plans available per county was higher. For example, in both years, states with an average of 30 or more plans per county had among the widest ranges between the lowest and highest premium amounts. By contrast, states with an average of 15 or fewer plans per county had among the narrowest ranges between the lowest and highest premium amounts.
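The percentage differences quoted above follow from a simple calculation: (highest premium minus lowest premium) divided by lowest premium. The sketch below applies it to the rounded monthly premiums quoted in the text; GAO evidently computed its percentages from unrounded premium data, so the results can differ from the report's figures by a point or two.

```python
def premium_spread_pct(low: float, high: float) -> int:
    """Percentage difference between the lowest- and highest-cost plan,
    relative to the lowest-cost plan, rounded to a whole percent."""
    return round((high - low) / low * 100)

# Rounded monthly silver premiums for a 30-year-old, from the text.
print(premium_spread_pct(241, 266))  # Rhode Island, 2014: about 10 percent
print(premium_spread_pct(147, 508))  # Arizona, 2014: about 246 percent here
                                     # (the report's 244 percent reflects
                                     # unrounded premium data)
```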
(See table 4.) Because each state in our analysis uses a uniform age rating curve to specify the rates across all adult age bands, each state would have the same relative differences in premiums for all adult age categories. When analyzing ranges at the county level, we found that among all the counties in our analysis, the range of premiums available for different plan options generally widened from 2014 to 2015. For example, among the 1,886 counties in the 28 states for which we had sufficiently reliable data on all plans offered on or off an exchange, we found that the range in silver plan premiums for a 30-year-old in 2015 was wider in 79 percent of the counties compared to 2014. The findings were similar when we conducted our analysis for plans offered on exchanges in 38 states. In the interactive graphic linked to below, we provide files showing the range of health insurance premiums, by county, that were available to selected categories of consumers for exchange plans and all plans (whether or not they were available on an exchange)—for both 2014 and 2015. Twenty-five states include complete sets of data for both years. Twenty-four states include partial data if certain data elements were not sufficiently reliable to report. For example, in a state where we had information for fewer than 70 percent of exchange plans in a given year, we do not report any values for exchange plans for that state in that year. We do not include any data for the state of Washington or the District of Columbia because the data were either not available or not sufficient in both years. See figure 1 for an illustration of premium information available via the interactive map available at the website. This graphic can be viewed by linking to the interactive map found at http://www.gao.gov/products/GAO-15-687.

Agency Comments

We received technical comments on a draft of this report from the Department of Health and Human Services and incorporated them as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: States Included and Excluded from GAO’s Analyses

[Table: for each state, whether plans offered on an exchange were included in the exchange analysis (Y or N), whether the state was included in the “all plans” analyses (Y or N), and whether the state operated a federally facilitated exchange (FFE) or state-based exchange (SBE).]

We excluded 2014 data for Virginia from our analyses because, even though the percentage of plans that reported premiums data was greater than 70 percent of the universe of plans in 2014, there were several outliers that made the data unreliable for this year.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, individuals making key contributions to this report include Gerardine Brennan, Assistant Director; Todd Anderson; George Bogart; Matthew Byer; and Laurie Pachter.
PPACA, as of 2014, changed how insurers determine health insurance premiums and how consumers shop for individual market health insurance plans. For example, PPACA prohibited insurers from denying coverage or varying premiums based on consumer health status or gender. At the same time, PPACA required health plans to be marketed based on their metal tiers (bronze, silver, gold, and platinum), which helps consumers compare the relative value of each plan; it also required the establishment of health insurance exchanges in each state, through which consumers can compare and select from among participating health plans. GAO was asked to examine variation in the health plan options and premiums available to individuals under PPACA, and how the options available in 2014 compared to those in 2015. GAO examined: (1) the numbers of health plans available to individuals and how they changed from 2014 to 2015, and (2) the range of health insurance premiums in 2014 and 2015, and how they changed for individuals in each state and county for selected consumers. GAO analyzed data from the Centers for Medicare & Medicaid Services (CMS); reviewed applicable statutes, regulations, guidance, and other documentation; and interviewed officials from CMS. Comparisons across years were conducted for states that had sufficiently reliable data in both years—including comparisons of plans offered either on or off an exchange in 28 states (1,886 counties) and comparisons of plans offered only on an exchange for 38 states (2,613 counties), although GAO is reporting some data on 49 states. As of 2014, key provisions of the Patient Protection and Affordable Care Act (PPACA) resulted in the establishment of health insurance exchanges in each state and changed how insurers determined health insurance premiums.
Individual market consumers generally had access to more health plans in 2015 compared to 2014, and in both years the lowest-cost plans were available through exchanges in most of the 1,886 counties GAO analyzed in the 28 states for which it had sufficiently reliable data for plans offered either on or off an exchange. In addition, consumers in most of the counties analyzed had six or more plans to choose from in three of the four health plan metal tiers (bronze, silver, and gold) in both 2014 and 2015, and the percentage of counties with six or more plans in those metal tiers increased from 2014 to 2015. Consumers had fewer options regarding platinum plans, although the availability of platinum plans generally also increased from 2014 to 2015. The lowest-cost plan available in a county was available on an exchange in most counties. For example, among the 1,886 counties analyzed, GAO found that the lowest-cost silver plan for a 30-year-old was available on an exchange in 63 percent of these counties in 2014 and in 81 percent of these counties in 2015—an increase of 18 percentage points. The range of premiums available to consumers in 2014 and 2015 varied among the states and counties GAO analyzed. For example, in Arizona the lowest-cost silver plan option for a 30-year-old was $147 per month in both years, but in Maine, the lowest-cost silver plan options for a 30-year-old were $252 in 2014 and $237 in 2015. In the 28 states included in GAO’s analysis, from 2014 to 2015 the minimum premiums for silver plans available to a 30-year-old increased in 18 states, decreased in 9 states, and remained unchanged in 1 state. At the county level, GAO found that premiums for the lowest-cost silver option available for a 30-year-old increased by 5 percent or more in 51 percent of the counties in the 28 states. GAO also found that the range of premiums—from the lowest to highest cost—differed considerably by state. 
For example, in Rhode Island, 2014 premiums for silver plans available to a 30-year-old either on or off an exchange ranged from a low of $241 per month to a high of $266 per month, a difference of 10 percent, and in 2015 ranged from a low of $217 per month to a high of $285 per month, a difference of 32 percent. By contrast, in Arizona, 2014 premiums for these plans ranged from a low of $147 per month to a high of $508 per month, a difference of 244 percent, and in 2015 ranged from a low of $147 per month to a high of $545 per month, a difference of 270 percent. An interactive graphic reporting by state and county the minimum, median, and maximum premium values for all individual market plans (either on or off the exchange) and for exchange-only plans, is available at http://www.gao.gov/products/GAO-15-687. It includes either data for both years, or partial data (e.g., data for one of the two years) for 49 states. GAO received technical comments on a draft of this report from the Department of Health and Human Services and incorporated them as appropriate.
Background

The Forest Service, through its 9 regional offices, and BLM, through its 12 state offices, award contracts to individuals or companies to harvest and remove timber from the federal lands under their jurisdiction. The contracts set forth specific terms and provisions of the sale, including the estimated volume of timber to be removed, the period for removal, the price to be paid to the government, and the environmental protection measures to be taken. For contracts valued at $2,000 or more, for fiscal years 1990 through 1995, the Forest Service reported that it had awarded almost 24,500 timber sale contracts valued at about $27 billion; for fiscal year 1996, data from the Forest Service showed that it had awarded over 8,000 timber sale contracts valued at more than $4 billion as of the end of April 1996. BLM had about 200 contracts valued at more than $173 million. Both the Forest Service’s and BLM’s regulations and procedures specify that the agencies can extend the time for completing a timber sale contract under certain circumstances and that they can modify, suspend, cancel, or partially cancel a timber sale contract for various reasons, including the need to protect threatened or endangered species and their habitat. The Forest Service’s and BLM’s procedures outline similar steps to take when deciding whether to suspend or cancel a timber sale contract. For example, within the Forest Service and BLM, the contracting officer can suspend a contract to protect threatened or endangered species. However, only the Chief of the Forest Service and BLM’s state directors are authorized to cancel contracts for environmental reasons. (App. I provides additional information on the Forest Service’s and BLM’s procedures.)
Various Actions That Occurred Around 1990 Have Resulted in Federal Liability

From October 1992 through June 1996, the Forest Service and BLM paid more than $6.6 million in claims for 49 contracts that were suspended or canceled to protect threatened or endangered species. The Forest Service had 48 of the claims; BLM had 1. The agencies have paid the purchasers for the value of replacement timber, interest, lost profits, and unrecovered costs incurred under the contracts. BLM settled its claim of almost $228,000 (plus interest) by modifying another contract held by the purchaser to reduce the amount paid to the government without changing the original volume to be harvested. Data from the Forest Service show that it settled 48 claims (totaling almost $6.5 million) from timber management appropriations. According to timber management officials, the Forest Service attempts to find replacement timber when it must cancel all or a portion of a sale. However, the data that the Forest Service provided to us do not indicate whether the agency took such action to settle past claims for contracts suspended or canceled to protect threatened or endangered species. According to Forest Service and BLM officials and attorneys representing some timber sale purchasers, the agencies rarely suspended or canceled timber sale contracts before the 1990s to protect threatened or endangered species. Before that time, public interest groups raised little opposition to timber sales, particularly as they affected threatened or endangered species. After that time, new scientific information about forest ecosystems came to light, environmental advocacy groups became more aggressive and effective, the public and the media focused greater attention on environmental issues, and the listing of new threatened or endangered species by the Fish and Wildlife Service under the Endangered Species Act led to the suspension or cancellation of timber sale contracts.
One central issue in the Pacific Northwest concerned whether and how much of the remaining old-growth forests should remain available for timber harvesting or be left undisturbed, in part to protect the habitat of the Northern spotted owl, marbled murrelet, various salmon populations, and other species. In addition, in the early 1990s, various environmental groups brought legal actions to suspend or cancel timber sale contracts. For example, in May 1991, the U.S. District Court for the Western District of Washington ordered the Forest Service to stop selling timber in much of the area inhabited by the Northern spotted owl until the agency had prepared a management plan and environmental impact statement for the species. Similarly, in June 1992, the U.S. District Court for the District of Oregon ordered BLM not to proceed with timber sale contracts because the agency had not prepared a supplemental environmental impact statement. The primary cause of the suspensions was that the Forest Service and BLM had failed to produce plans that satisfied the requirements of such laws as the National Forest Management Act of 1976, the Endangered Species Act of 1973, or the National Environmental Policy Act of 1969. Purchasers who disagree with a Forest Service or BLM decision to suspend or cancel a timber sale contract may submit a claim to the responsible agency’s contracting officer for a decision. Under the Contract Disputes Act, appeals from the contracting officer’s decision may be filed with the respective agency’s Board of Contract Appeals or the U.S. Court of Federal Claims. Either party may appeal a decision of one of these bodies to the U.S. Court of Appeals for the Federal Circuit. 
Generally, claims arising from the suspension or cancellation of timber sale contracts to protect threatened or endangered species have resulted from disagreements between the federal agencies and the purchasers over the types and amounts of compensation to which the purchasers are entitled. Once a court decides the merits of a case, the purchasers can seek reimbursement for their attorneys' fees under the Equal Access to Justice Act if the purchasers meet the criteria defined in the act. If the court awards attorneys' fees, the Forest Service or BLM generally has to pay the fees from appropriations.

Significant Uncertainty Exists About the Amount of Future Liability

Any estimate of future liability must be viewed with uncertainty. The outcome of ongoing and future litigation is unpredictable and could result in the award of more or less in damages than the purchasers claim. In September 1996, for example, USDA's Board of Contract Appeals awarded a purchaser over $4.2 million (plus interest) on a $10 million claim for five timber sale contracts that the Forest Service had suspended in September 1992. Also uncertain are the results of countersuits that could be filed by the Forest Service or BLM, the success of the agencies' offers to replace timber in lieu of paying damages, and the settlement of claims that have not yet been filed. Claims pending against the Forest Service and BLM for contracts suspended or canceled to protect threatened or endangered species totaled almost $61 million and about $2.2 million, respectively, as of October 1996. Purchasers have filed claims for such expenses as property taxes and insurance; the salaries of officers and watchmen; depreciation; idle equipment, including logging trucks, skidders, loaders, graders, and other assorted vehicles; interest; and the value of replacement timber.
When pending claims, the agencies' "best estimates" of potential future liability, and other information are considered, the Forest Service's potential future liability as of October 1996 could be at least $259 million; BLM officials estimate that the agency's potential future liability could be between $37 million and $42 million. Table 1 shows the number and amount of the pending claims and the "best estimates" of potential future liability.

Forest Service

According to timber management officials, purchasers had not, as of October 1996, filed claims for the additional $198 million. For example, $170 million of the $198 million potential future liability represents an estimate of potential claims that remain unresolved because of a recent settlement agreement. On September 17, 1996, the Justice Department and 15 timber companies (44 section 318 sales) agreed that the Forest Service would provide alternative timber to the companies between 1997 and 1999, after completing environmental analyses related to the replacement timber being offered. Under the agreement, the purchasers waive all rights to file claims for delays in providing replacement timber that occur after the date of the agreement. Given the above settlement agreement and a multiplicity of court cases, it is difficult, according to Forest Service officials, for the agency to determine a reasonable damage estimate. Forest Service officials added that the September 1996 settlement agreement does not preclude environmental groups from filing suits to prevent the sale of replacement timber that could be offered to the purchasers. This possibility adds further uncertainty to estimates of the Forest Service's future liability.

Funding for Pending and Future Settlements May Be a Problem

According to Forest Service officials, the agency may not have the funds to settle pending and future claims.
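The roll-up behind these liability figures is simple arithmetic. The following sketch combines the pending claims and "best estimates" reported above (an illustration using only the report's amounts, not the agencies' actual estimation method):

```python
# Liability figures as of October 1996, in millions of dollars.
# The amounts are the report's; the roll-up itself is only an illustration.
fs_pending_claims = 61      # claims already filed against the Forest Service
fs_best_estimate = 198      # additional potential claims not yet filed
fs_total = fs_pending_claims + fs_best_estimate
print(f"Forest Service: at least ${fs_total} million")  # at least $259 million

blm_pending_claim = 2.2     # the single claim pending against BLM
blm_low, blm_high = 35, 40  # if the other 22 purchasers filed and prevailed
print(f"BLM: roughly ${blm_pending_claim + blm_low:.0f} million "
      f"to ${blm_pending_claim + blm_high:.0f} million")  # about $37-$42 million
```

The totals match the "at least $259 million" and "$37 million to $42 million" figures cited in the report.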
In the past, the agency has not requested a specific appropriation to settle claims but has reprogrammed funds or used funds carried over from prior fiscal years. However, the amount of settled claims was significantly smaller than the amount of potential future claims. Timber management officials are concerned that a large judgment against one or several forests could cause the Forest Service to stop all or some programs in the forests or request supplemental appropriations to pay the damages. Under the Forest Service’s policies, the cost of administering a timber sale contract is a cost of the forest on which the sale occurs, and any costs associated with claims are to be covered by the forest’s funds. Therefore, the applicable forest would first pay the damages out of current appropriations from the account of the program responsible for administering the contract—for example, timber management or salvage sales. The policy also states that the involved forest and region may have to reprogram funds to cover these costs. The Forest Service has the discretion to reprogram funds within the National Forest System Appropriations Account. In fiscal year 1996, the Congress appropriated $1.3 billion for that account. The policy further notes that funds must be available to avoid violating the Anti-Deficiency Act and states that contracting officers should not issue a decision on a claim unless funds are available. The policy also states that the Forest Service does not have the authority to use either the Timber Sale Deposit Fund or the National Forest Fund accounts to pay for settling claims because federal law requires the deposit of all receipts from timber sales into a miscellaneous receipts account of the U.S. Treasury except in specific situations that do not include payments for contract claims. Officials from the Forest Service’s Northwest Region told us that they have considered other options to fund future damages. 
First, the Forest Service and the purchaser could mutually agree to the amount of the damages and establish credits that could be transferred to existing contracts or held in a "bank" for future contracts. Second, the agency could allow the forests to offset the damages when a purchaser has defaulted on other contracts. Timber management officials said they had discussed these two options with regional, headquarters, and USDA officials. They noted, however, that neither the legislation nor the regulations applicable to the Forest Service allow the agency to implement these options. Third, the Congress could enact legislation allowing the Forest Service to use funds, derived from the sale of timber under contracts that were subsequently suspended, that the agency had deposited in the Timber Sale Deposit Fund or the National Forest Fund. For claims that purchasers have already filed, Forest Service officials estimate that some of the damages may come due in fiscal year 1997. Because some claims have only recently been filed and because USDA's Board of Contract Appeals or the Court of Federal Claims can take between 2 and 5 years to issue a ruling, some of the liability may not be realized until beyond 1997. In addition, countersuits and appeals could further delay the date when the liability could become due.

As of October 1, 1996, BLM had one claim pending for almost $2.2 million. In 1992, the U.S. District Court for the District of Oregon ordered BLM to halt timber sale contracts because of concerns about the Northern spotted owl. As a result of this decision, BLM suspended 23 timber sale contracts. The purchaser of 1 of the 23 suspended contracts has filed a claim with the Department of the Interior's Board of Contract Appeals for about $2.2 million. Attorneys from the office of DOI's regional solicitor in Portland, Oregon, could not estimate when the Board would render a decision on this claim.
According to an attorney from the regional solicitor's office, if the Board's decision is favorable to the purchaser, some or all of the remaining 22 companies could file similar claims. The court lifted the injunction in January 1995, and the companies have been harvesting timber since that time. According to a BLM official and an attorney from the regional solicitor's office, the potential liability could be between $35 million and $40 million if the other 22 purchasers filed and were successful in their claims. These officials pointed out, however, that BLM has not conducted any analyses to support the estimated liability. A BLM official noted that the 22 purchasers have probably finished harvesting timber from the sales. Therefore, if the purchasers were successful in their claims, BLM would first determine whether the purchasers had other contracts with the agency and, if so, attempt to negotiate a settlement that would modify existing contracts to reduce the price by the damages awarded without changing the original amount of timber to be harvested. According to an official, if that action was not sufficient to settle the damages or if the purchasers did not agree to the settlement, BLM would have to fund the damages from appropriations.

The Forest Service Can Further Limit Its Future Liability; BLM Has Completed Its Action

Both the Forest Service and BLM have taken some actions to minimize the future potential liability arising from the suspension or cancellation of timber sale contracts to protect threatened or endangered species. Although BLM has completed the actions to limit its liability, the Forest Service has spent years drafting and redrafting proposed changes to its regulations and standard contract language. The Forest Service has not finalized either document.
In commenting on a draft of this report, officials from USDA's Office of General Counsel and the Forest Service's Timber Management staff pointed out that changes in environmental laws, the increase in the number of lawsuits, and the impact of the resulting court rulings contributed to the delay in finalizing draft regulations and a draft timber sale contract.

Forest Service

The existing Forest Service timber sale contract includes three provisions that can come into play when the agency suspends or cancels a timber sale contract to protect threatened or endangered species. The following summarizes the three provisions:

If the Forest Service suspends a contract to prevent serious environmental degradation or resource damage or to comply with a court order, the purchaser's claim is limited to the out-of-pocket expenses incurred as a direct result of the suspension. The contract specifies that such out-of-pocket expenses do not include lost profits, the cost of replacement timber, or any other anticipated losses.

If the Forest Service cancels a contract to be consistent with a forest plan, to comply with a court order, or to respond to a determination that continued timber harvesting would seriously degrade the environment, cause resource or cultural damage, and/or jeopardize sensitive, threatened, or endangered species, the purchaser is entitled, under provision CT9.5, to out-of-pocket expenses and to reasonable compensation for the cost of acquiring comparable timber to replace that lost through the cancellation.

If, for the same reasons, the Forest Service cancels a contract, the purchaser is entitled to out-of-pocket expenses but not, under provision CT9.52, to compensation for the value of replacement timber. However, the language of CT9.52 differs from the language in the Forest Service's regulations addressing the cancellation of contracts for environmental protection.
Those regulations state that the Forest Service will provide reasonable compensation to the purchaser for unrecovered costs and for the value of replacement timber. One purchaser has filed a suit with the U.S. Court of Federal Claims alleging, among other things, that CT9.52 is not consistent with the Forest Service's regulations. Officials from USDA's Office of General Counsel said that it is not atypical for a case in the U.S. Court of Federal Claims to take several years to be resolved.

Draft Cancellation Regulations Would Use a New Formula to Calculate the Value of Replacement Timber

In August 1990, the Forest Service published proposed regulations on canceling timber sale contracts. The Forest Service did not issue final regulations because it identified additional changes that were needed and litigation was occurring at that time. In 1992, the Forest Service again revised its regulations on cancellations. The 1992 revision incorporated, among other things, the contract provisions affecting (1) the protection of endangered species' habitat and (2) the settlement that will be provided when the agency cancels a timber sale contract to protect threatened or endangered species. The agency had expected to publish the proposed regulations for comment in January 1994. (App. II provides additional details on the proposed regulations.) Since that time, USDA has been reviewing the proposed regulations. According to Forest Service officials, on September 26, 1996, USDA gave its approval for the Forest Service to send proposed regulations to the Office of Management and Budget (OMB) for its review and approval. At an October 1996 meeting, according to officials from the Forest Service and USDA's Office of General Counsel, OMB asked the Forest Service to provide additional information on the economic impact of the proposed regulations.
The Deputy Director for Timber Management told us that the Forest Service expects to provide the required analysis to OMB by December 1996 but could not estimate when proposed regulations would be published for public comment.

Draft Timber Sale Contract Would Reduce the Government's Risk

In January 1988, the Forest Service completed its consolidated revision of the two most frequently used timber sale contracts, which had not been revised since the fall of 1973, and provided the draft to USDA for its review. In 1993, the Forest Service and USDA initiated a second effort to revise the timber sale contract, but neither draft has been published for comment or implemented because USDA is still reviewing it. USDA's Associate General Counsel, Natural Resources Division, and other officials from that office believe that, had the Forest Service finalized the contract, some of the current liability might have been greatly reduced because the proposed contract would give the government more flexibility to modify contracts and delete timber areas affected by threatened or endangered species. On September 27, 1996, Forest Service officials told us that the Chief had approved the proposed contract and was planning to meet with USDA's Under Secretary for Natural Resources and the Environment about it. They could not estimate when or whether USDA would approve releasing a proposed contract for public comment. (App. II provides additional details on the draft contract.)

BLM has not had many claims for contracts suspended or canceled to protect threatened or endangered species. BLM's contract provisions and regulations allowing extensions of the completion date have significantly limited cancellations. In addition, beginning in 1996, BLM's contracts have limited purchasers' damages to unrecovered costs when cancellations have resulted from the Endangered Species Act.
Since 1984, BLM’s timber sale contract has included a provision that allows the agency to suspend a contract to protect threatened or endangered species. If its subsequent analysis shows that mitigating actions can address the concerns or that the concerns no longer exist, BLM can extend the contract’s completion date for time equal to the operating time that has been lost because of the suspension. According to a BLM official, valid reasons for granting extensions include delays necessitated by the Endangered Species Act, court injunctions by parties outside of the contract, and reviews of cultural resources. From 1992 through June 1996, BLM extended the completion dates for 52 contracts, primarily to comply with the Endangered Species Act. If its subsequent analyses show a continuing problem, BLM attempts to negotiate a modification with the purchaser. If unsuccessful, BLM cancels the contract. According to BLM officials, the agency has not canceled any timber sale contract since at least fiscal year 1992. In 1991 and again in 1994, BLM revised the provision and expanded the circumstances under which the agency could suspend operations to consult or reinitiate consultation with the Fish and Wildlife Service and other agencies. For example, BLM could suspend operations while certain raptors and owls were nesting or upon discovering “survey and manage species” identified for protection in a resource management plan. Following the suspension, the purchasers can resume timber harvesting operations; BLM does not incur any liability. In addition, BLM’s regulations and standard timber sale contract provisions are consistent and do not require the agency to compensate purchasers for the value of replacement timber when contracts are canceled or partially canceled. However, until 1996, BLM’s contract was silent on the types of damages that purchasers could claim following a cancellation or partial cancellation. 
Rather, BLM relied on its contracting officers, DOI's Board of Contract Appeals, and the courts to determine whether payment for damages was warranted and how much was to be awarded. Since March 1996, BLM's contract has included a provision that limits the purchaser's damages to the actual costs that have not been recovered by the value of the timber removed from the contract area. For example, if the purchaser builds a road to harvest a sale, BLM would compensate the purchaser only for that portion of the road's construction costs applicable to the portion of the sale that had been canceled. BLM uses a formula to determine the government's liability. BLM compensates purchasers for such unrecovered costs only when contracts are canceled or partially canceled to protect species listed under the Endangered Species Act. According to a BLM official and an attorney from the office of DOI's regional solicitor in Portland, Oregon, BLM had considered adding provisions to reduce the federal liability, such as unilaterally canceling a timber sale contract for the convenience of the government, but they noted that such actions may not be in the agency's best interest, since the more restrictive a contract, the less purchasers are likely to bid for the timber. As of September 1996, BLM officials with whom we met said that they believe the agency has sufficient protection and has no plans to either expand its contract provision or take any other action that could further limit the agency's future liability. BLM's past success in negotiating a noncash settlement with a purchaser seems to support the agency's belief.

Conclusions

BLM has suspended or canceled significantly fewer timber sale contracts than the Forest Service, and BLM has consistently taken actions to protect itself from the damages that could arise from suspending or canceling timber sale contracts to protect threatened or endangered species.
In contrast, as evidenced by the current lawsuit alleging inconsistency in the language of its cancellation regulations and standard timber sale contract provisions, the Forest Service's actions have not fully protected the agency. Although complying with various environmental laws before offering a timber sale would help protect the agencies against future suspensions or cancellations and the damages that could follow, effective regulations and a contract that has the same suspension and cancellation language as the regulations would go a long way to further minimize the potential for future claims, lawsuits, and damages. Many years have passed since the Forest Service started to develop proposed regulations and a revised contract that would minimize the agency's future liability. However, the Forest Service has not yet released these documents for public comment, a process that would help it identify the issues that still need to be addressed. Although various circumstances contributed to the delay in issuing draft regulations and a contract for public comment, the dialogue that would result from the expeditious publication of both documents may lead to more consistent actions by the industry and a better understanding of the regulatory and contractual requirements by both the industry and the public.

Recommendation

We recommend that the Secretary of Agriculture direct the Chief of the Forest Service to expeditiously release for public comment proposed regulations for canceling timber sale contracts and a revised timber sale contract.

Agency Comments

We provided USDA, the Forest Service, DOI, and BLM with a draft of this report for comment.
We met with officials from these agencies, including attorneys in USDA’s Office of General Counsel; the Deputy Director, an assistant director, and other members of the Forest Service’s Timber Management staff; an assistant director from BLM’s Lands and Renewable Resources staff; a forester in BLM’s Oregon State Office; and attorneys from the office of DOI’s regional solicitor in Portland, Oregon. These officials agreed with the report’s findings, and USDA and the Forest Service agreed with the recommendation. USDA and Forest Service officials also provided us with additional reasons for the delay in issuing new cancellation regulations and a new timber sale contract and expressed concern about the specificity of the information provided on these documents. Officials from USDA, the Forest Service, and BLM and attorneys from the office of DOI’s regional solicitor in Portland, Oregon, suggested clarifications to our report that we incorporated as appropriate. Attorneys in USDA’s Office of General Counsel and the Deputy Director, Timber Management, noted that changing environmental circumstances, the tremendous increase in the number of lawsuits, and the decisions resulting from the numerous lawsuits filed by public interest groups and the timber industry over the last 8 years have resulted in draft cancellation regulations and a draft timber sale contract that differ significantly from the Forest Service’s original proposals. They also noted that these and other issues contributed to the delay in moving forward with the two proposals. We incorporated these views in the draft where appropriate. USDA and Forest Service officials were concerned about the specificity of the information that we had provided on the draft regulations and draft timber sale contract because the agency has not released the two documents for public comment. They explained that both proposals are very sensitive and would shift some of the risk from the government to the industry. 
We modified some information on the draft regulations and contract to provide a more general discussion of their expected impact. We performed our work from May 1996 through September 1996 in accordance with generally accepted government auditing standards. Appendix III contains details on the scope and methodology of our review. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the Secretaries of Agriculture and the Interior and the Director, Office of Management and Budget. We will make copies available to others upon request. This work was performed under the direction of James K. Meissner, Associate Director for Timber, who can be reached at (206) 287-4810 if you or your staff have any questions about this report. Other major contributors to this report were Mary Ann Kruslicky and John S. Kalmar, Jr.

Agencies' Procedures for Suspending or Canceling Timber Sale Contracts

The Bureau of Land Management's (BLM) and the Forest Service's procedures outline similar steps that the agencies should take when deciding to suspend or cancel a timber sale contract for environmental reasons, including concerns about threatened or endangered species. BLM has delegated the responsibility for suspending a contract to the contracting officer. The Forest Service has delegated the responsibility to suspend a contract to the forest supervisor, who can redelegate the authority to the contracting officer. Table I.1 summarizes the agencies' procedures for suspending a contract. The procedures for canceling a timber sale contract differ from those for suspending one. Within BLM, the state director is authorized to cancel a timber sale contract to prevent environmental degradation.
According to an official, BLM's canceling of a timber sale contract is a "rare occurrence." None of the information provided to us by BLM indicated that it had canceled a timber sale contract since fiscal year 1992 to protect threatened or endangered species. Within the Forest Service, only the Chief can cancel a timber sale contract upon determining that its continuation would cause serious environmental degradation. In addition, the courts have ordered or the Forest Service has agreed to voluntarily cancel timber sale contracts. Table I.2 summarizes BLM's and the Forest Service's procedures when canceling a timber sale contract.

Forest Service's Draft Cancellation Regulations and Timber Sale Contract

Draft Regulations Would Make Compensation Similar to That of Other Agencies

In an effort to limit the financial liability of the Forest Service when it must, for reasons of public policy or statutory direction, cancel a timber sale contract, in August 1990, the agency published proposed regulations concerning the cancellation of timber sale contracts. The Forest Service did not issue final regulations because it subsequently identified additional changes that should have been included in the 1990 proposed regulations and because litigation was occurring at that time. In 1992, the Forest Service again instituted an effort to incorporate two of its contract provisions—one to protect the habitat of endangered species and the other to specify the settlement that will be provided when the agency cancels a timber sale contract to protect threatened or endangered species—into its regulations. The Forest Service had expected to publish the proposed regulations for public comment in January 1994. According to Forest Service officials, on September 27, 1996, the U.S. Department of Agriculture (USDA) gave its approval for the Forest Service to send proposed regulations to the Office of Management and Budget (OMB) for its review and approval.
They also noted that OMB is required to complete its review within 30 days and said that USDA would then have to approve the Forest Service's publication of the regulations for public comment. The July 1996 version of the Forest Service's draft regulations that we reviewed would clarify when, why, and by whom contracts may be canceled; remove redundant provisions; provide a new formula for compensation when the government must cancel timber sale contracts; and limit the financial liability of the United States on certain contracts. The regulations would also include the language of the Forest Service's Settlement for Threatened and Endangered Species contract provision (CT9.52) and change the formula for calculating compensation for the value of replacement timber to be similar to that of other agencies. In the preamble to the draft regulations, the Forest Service states that assuming most of the risk is neither in the public interest nor fiscally feasible, given the increasing uncertainties surrounding national forest timber sales. The preamble notes that the draft regulations would give agency officials the flexibility to adjust management activities on National Forest System land. The preamble also notes that these and other changes are necessary because the agency cannot continue to bear most of the financial risk and burden of contract cancellations arising from its compliance with the increasingly complex and rigorously enforced environmental laws and regulations.

Draft Timber Sale Contract Would Reduce the Government's Risk

In January 1988, the Forest Service completed its draft consolidated revision of the two most frequently used timber sale contracts (2400-6 and 2400-6T), which had not been revised since the fall of 1973. The draft was not published for comment and was never implemented. In its April 1993 Timber Cost Efficiency Study—Final Report, the Forest Service indicated that it would revise its timber sale contracts.
In July 1993, at the direction of the Assistant Secretary of Agriculture for Natural Resources and Environment, USDA and the Forest Service began a second initiative to develop a revised timber sale contract. We reported in April 1994 that the Forest Service had sent the revised contract to the Secretary of Agriculture in January 1994, expecting it to be issued by October 1994. As of October 1996, the Forest Service had not published the revised contract for public comment. According to officials from the Forest Service and USDA’s Office of General Counsel, interested parties, including the industry, should have a chance to comment on the revision because they may be able to suggest changes that will improve the contract or identify aspects of it that will not work on the ground. On September 27, 1996, Forest Service officials told us that the Chief had approved a proposed contract and was planning to meet with USDA’s Under Secretary for Natural Resources and the Environment about it. Officials could not estimate when or whether USDA would approve the Forest Service’s release of a draft contract for public comment. In developing the draft contract, USDA and the Forest Service reviewed timber sale contracts used by several states, the Department of the Interior, and private parties to sell private timber and reviewed court decisions to identify specific ambiguities and weaknesses in the current timber sale contracts. According to the conclusion, the draft contract would distribute liability and risk “more equitably” between the parties and bring the contract into conformity with standard business practices. For example, the draft contract would permit the Forest Service to modify the contract to protect natural resources, including threatened or endangered species, and to adjust the contract’s terms to give the purchaser additional harvesting time or money in consideration of such modifications. 
Currently, the Chief of the Forest Service may cancel a contract to comply with a court order or upon determining that the contract’s continuation would degrade the environment, be inconsistent with land management plans, damage cultural resources, or jeopardize threatened or endangered species. Although the Forest Service’s regulations provide other circumstances under which the Chief may cancel, the existing contracts do not include the additional actions. The draft contract would incorporate the other circumstances specified in the regulations and permit the Chief to also cancel a contract (1) if continued operations would violate a federal law or conflict with the management of other forest resources, (2) upon a physical change in the sale area or damage that materially diminishes the value of the timber, and (3) upon a final determination that the purchaser had violated environmental quality regulations on a national forest. The Forest Service would reimburse the purchaser for any unrecovered out-of-pocket expenses. In the cost and benefit analysis supporting the draft contract, the Forest Service concluded that the agency and the timber industry would realize a net benefit of more than $7 million from the contract’s implementation. The Forest Service also estimated that the contract revision could affect about 3,000 timber sales each year; the costs for purchasers to administer the revised contract would increase about 10 percent over the costs of administering the current contracts; the government would receive about $26 million less for the timber sold; litigation costs to both parties would be reduced by about $330,000 annually; and damages would be reduced by between $20 million and $30 million over the next 4 to 5 years.
Although major differences exist between federal and state laws, regulations, and guidelines, we noted that the timber sale contract used by Oregon seems to restrict the types of damages more than the Forest Service’s contract, yet the state is able to market and sell large volumes of timber. For example, under Oregon’s timber sale contract, the state can terminate the contract in whole or in part whenever such action is in the state’s best interest. If Oregon terminates a part or all of a timber sale contract, the purchaser is not entitled to lost profits, the cost of replacement timber, or any other consequential damages. Also, any interest earned on moneys deposited by the purchaser remains with the state. The vice presidents of two industry organizations with whom we met told us that Oregon has only about 2 to 3 percent of the timber sales in the state (the Forest Service has about 15 percent), deals with only one or two forests, sells to a limited group of purchasers, and awards its contracts for short terms; therefore, purchasers are willing to accept more restrictive provisions. They also noted that since Oregon uses timber sale revenues for such activities as schools, the state has an incentive to resolve problems rather than suspend or cancel contracts and incur damages. Objectives, Scope, and Methodology The Chairman, Subcommittee on Forests and Public Land Management, Senate Committee on Energy and Natural Resources, asked us to determine (1) the amounts and types of damages awarded to purchasers whose timber sale contracts have been suspended or canceled and the ways the agencies have paid the damages, (2) the amounts and types of claims pending against the Forest Service and Bureau of Land Management (BLM) and the sources of funds from which the agencies expect to pay the claims, and (3) the actions that the Forest Service and BLM are taking to minimize the future liability arising from suspended or canceled timber sale contracts. 
As agreed with the Chairman’s office, we limited our work to timber sale contracts that have been suspended or canceled to protect threatened or endangered species and to claims settled or pending between October 1992 and June 1996. To provide the most current information, we updated the data on pending claims through October 1, 1996. To obtain the information in this report, we reviewed relevant Forest Service and BLM regulations, policies, and procedures related to awarding, suspending, and canceling timber sale contracts. We reviewed reports by GAO, the Congressional Research Service, and the U.S. Department of Agriculture’s (USDA) and the Department of the Interior’s (DOI) Offices of Inspector General on various aspects of the Forest Service’s and BLM’s timber programs. We also obtained legal briefs that had been submitted to the U.S. Court of Federal Claims on some of the pending lawsuits and reviewed rulings issued by that court as well as by USDA’s Board of Contract Appeals, the U.S. District Court for the Western District of Washington, and the U.S. District Court for the District of Oregon. We visited the Forest Service’s and BLM’s offices responsible for timber sale contracts in the Pacific Northwest. We selected this location because almost all of BLM’s timber sale contracts are awarded by its Oregon State Office and 35 percent of the Forest Service’s timber sale contracts over $2,000 are awarded by its Pacific Northwest Region. We also met with the Vice President of the Northwest Forestry Association, which represents timber companies located in Oregon and Washington State, and the Vice President of the Independent Forest Products Association, which represents timber companies throughout the United States. To determine the types and amounts of damages awarded to purchasers for suspended or canceled timber sale contracts, we reviewed the applicable provisions of each agency’s timber sale contract. 
We also met with timber management officials at the Forest Service to discuss the types and amounts of damages paid to timber purchasers. In addition, at our request the Forest Service gathered data from each forest on the claims paid and the source of the funds used to pay the claims. We also contacted all 12 BLM state offices to determine the damages that were paid since fiscal year 1992 for timber sale contracts that were suspended or canceled to protect threatened or endangered species. For the one claim paid by BLM, we obtained information from the state office and the office of DOI’s regional solicitor in Portland, Oregon. In documenting the types and amounts of claims pending against the Forest Service and BLM and the sources of funds from which the agencies expect to pay the claims, we relied on information provided by the forests and BLM’s state offices. A Forest Service timber management official requested each forest to gather information on the claims that were pending for timber sale contracts that had been suspended or canceled to protect threatened or endangered species. We reviewed the data and compared them with the data the Forest Service had provided to the Subcommittee. We discussed the pending claims with officials from USDA’s Office of General Counsel and gathered data from a local law firm that represents several timber purchasers to determine the rationale for their clients’ claims and the amounts they are seeking in damages. We also contacted attorneys in six of the Forest Service’s nine regional offices to determine the reasons that purchasers filed claims. For BLM’s one pending claim, we gathered supporting documentation and discussed the basis for the claim with a BLM state office official and an attorney in the office of DOI’s regional solicitor in Portland. We discussed with Forest Service and BLM officials the actions the agencies can take to minimize the future liability arising from suspended or canceled timber sale contracts. 
We obtained copies of the provisions in each agency’s timber sale contract that have been used to limit the agencies’ liability when timber sale contracts have been suspended or canceled to protect threatened or endangered species. We discussed the provisions’ merits with officials from USDA’s Office of General Counsel as well as with BLM state officials and regional solicitors. We reviewed drafts of the Forest Service’s July 1996 regulations and contract revisions. The draft regulations are aimed at reducing the Forest Service’s liability for canceled timber sales, and the draft timber sale contract would assign risk differently between the Forest Service and purchasers. We discussed the history and relevance of both proposals with officials from the Forest Service and USDA’s Office of General Counsel as well as with private attorneys who had drafted a proposed timber sale contract at the industry’s request. Finally, we discussed with BLM state officials and an attorney in the office of DOI’s regional solicitor in Portland the actions that BLM has taken or plans to take to limit its future liability for timber sale contracts that are suspended or canceled to protect threatened or endangered species.
Pursuant to a congressional request, GAO reviewed the federal government's liability when the Forest Service and the Bureau of Land Management (BLM) suspend or cancel timber sale contracts to protect threatened or endangered species, focusing on what: (1) amounts and types of damages have been awarded to purchasers and how the Forest Service and BLM paid for the damages; (2) amounts and types of claims are pending against the Forest Service and BLM, and how the agencies expect to pay these claims; and (3) actions the Forest Service and BLM are taking to minimize the future liability arising from suspended or cancelled timber sale contracts. GAO found that: (1) from October 1992 through June 1996, the Forest Service and BLM paid more than $6.6 million in claims for 49 contracts that were suspended or cancelled to protect threatened or endangered species; (2) the agencies have paid purchasers for the value of replacement timber, interest, lost profits, and unrecovered costs; (3) the Forest Service paid damages of almost $6.5 million from its appropriations and BLM settled its single claim by modifying another contract held by the purchaser to reduce the amount paid to the government for purchased timber without changing the original volume of timber to be harvested; (4) as of October 1996, the Forest Service had 73 pending claims with potential damages of about $61 million, but it could incur at least an additional $198 million in damages; (5) BLM had one pending claim for almost $2.2 million, but it could incur between $35 million and $40 million more in potential future liability; (6) uncertainty arises from the agencies' inability to predict the outcome of ongoing and future litigation that could result in the award of more or less in damages than the purchasers claim, the results of countersuits that could be filed by the Forest Service and BLM, or the success of the agencies' efforts to offer replacement timber or other settlements in lieu of paying damages; (7) 
Forest Service officials stated that the Service may not have the funds to pay for pending and future claims without additional congressional funding; (8) according to a BLM official, if purchasers sought and were awarded damages, the agency would first attempt to reduce the price of existing contracts to offset damages; and (9) BLM has repeatedly revised its timber sale contract to minimize its liability when it must suspend or cancel a timber sale contract to protect threatened and endangered species, but the Forest Service has not finalized either new regulations or a new timber sale contract that would limit the government's liability on cancelled timber sale contracts and redistribute the risk between the Forest Service and the purchaser.
Background Statutory Requirements for DOD’s Strategic Workforce Plan Section 115b of Title 10 of the United States Code requires that the Secretary of Defense submit to the congressional defense committees, in every even-numbered year, a strategic workforce plan to shape and improve the civilian employee workforce of DOD. The statute assigns overall responsibility for developing and implementing the plan to the Under Secretary of Defense for Personnel and Readiness. In turn, the Under Secretary has assigned responsibility for developing the plan to the Defense Civilian Personnel Advisory Service. The plan is required to include, among other things, assessments of DOD’s existing and projected civilian workforce, and a plan of action to address gaps in critical skills and competencies identified in those assessments. Section 115b further requires DOD to include in its strategic workforce plan specific information on its Civilian Senior Leader—senior management, functional, and technical—workforces. DOD’s senior management, functional, and technical workforces consist of five career Civilian Senior Leader workforces, which DOD relies on to operate and oversee nearly every activity in the department. These workforces include the following: Senior Executive Service workforce. Most of the department relies on these officials to fill positions with managerial, supervisory, or policy advisory responsibilities. Senior Level workforce. These officials fill positions that typically require less than 25 percent of their time to be spent on supervisory or related managerial responsibilities. Most Senior Level employees are in nonexecutive positions whose duties are broad and complex enough to be classified above the GS-15 level. Senior Technical workforce. These officials perform high-level research and development in the physical, biological, medical, and engineering science fields. Defense Intelligence Senior Executive Service workforce.
These officials fill positions with managerial, supervisory, or policy advisory responsibilities in the Intelligence functional community that falls within DOD. Defense Intelligence Senior Level workforce. These officials fill senior positions within DOD’s Intelligence community that require less than 25 percent of their time to be spent on managerial or supervisory responsibilities. Section 1053 of the National Defense Authorization Act for Fiscal Year 2012 added a requirement for DOD to include specific information on its Financial Management workforce in the department’s strategic workforce plan. That section requires that DOD include specific steps that the department has taken or plans to take to develop appropriate career paths for civilian employees in the financial-management field, and include a plan for funding improvements in the Financial Management workforce of the department through the period of the current Future Years Defense Program, including a description of any continuing shortfalls in funding available for that workforce. DOD’s 2013-2018 Strategic Workforce Plan is the department’s first plan that is to include information addressing this requirement. DOD’s financial-management workforce was added to our High-Risk List in 1995. Key Principles of Effective Strategic Workforce Planning Strategic workforce planning is an iterative, systematic process that addresses two critical needs: (1) aligning an organization’s human-capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. 
While agencies’ approaches to workforce planning vary, key principles of strategic workforce planning that should be addressed include the following: align workforce planning with strategic planning and budget formulation; involve top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan; determine the critical skills and competencies that will be needed to achieve current and future programmatic results; develop strategies that are tailored to address gaps in number, deployment, and alignment of human-capital approaches for enabling and sustaining the contributions of all critical skills and competencies; build the capability needed to address administrative, educational, and other requirements important to support workforce planning strategies; and monitor and evaluate the agency’s progress toward its human-capital goals and the contribution that human-capital results have made toward achieving programmatic results. With the exception of the first two principles, these principles are similar to elements of DOD’s statutory reporting requirements. DOD’s Approach to Strategic Workforce Planning To conduct strategic human-capital planning efforts for the department’s civilian workforce, DOD’s Strategic Human Capital Planning Office used functional community categories to group together employees who perform similar functions. In some cases, these communities include mission-critical occupations. For its 2013-2018 Strategic Workforce Plan, DOD identified 22 functional communities and provided information on 30 of its 32 mission-critical occupations. DOD’s approach to developing its strategic workforce plan includes meetings between functional community leadership and officials from the Defense Civilian Personnel Advisory Service, an environmental scan as part of a continuous information-gathering process, and quarterly recruitment and retention data updates provided to the functional communities.
According to DOD’s Strategic Workforce Plan, mission-critical occupations are occupations or occupational groups that set direction, directly impact, or execute performance of mission-critical functions or services. Further, mission-critical occupations are positions key to DOD’s current and future mission requirements, as well as those that present recruiting and retention challenges. Appendix II shows DOD’s functional communities and their associated mission-critical occupations. Prior GAO Work Evaluating DOD’s Strategic Workforce Plans We have conducted assessments of DOD’s strategic workforce plans since 2008, and our body of work has found that DOD’s efforts to address strategic workforce planning requirements have been mixed. In our February 2009 report, we recommended that the offices responsible for addressing DOD’s strategic workforce-planning requirements develop performance plans that include establishing implementation goals and time frames, measuring performance, and aligning activities with resources. DOD has since completed implementation of the actions it had underway at that time to develop performance plans.
In our September 2012 report on the department’s overall civilian strategic workforce plan, we recommended that DOD include in the guidance that it disseminates for developing future strategic workforce plans clearly defined terms and processes for conducting these assessments; conduct competency-gap analyses for DOD’s mission-critical occupations and report the results; establish and adhere to timelines that will ensure issuance of future strategic workforce plans in accordance with statutory time frames; provide guidance for developing future strategic workforce plans that clearly directs the functional communities to collect information that identifies not only the number or percentage of personnel in its military, civilian, and contractor workforces but also the capabilities of the appropriate mix of those three workforces; and enhance the department’s results-oriented performance measures by revising existing measures or developing additional measures that will more clearly align with DOD’s efforts to monitor progress in meeting the strategic workforce-planning requirements in section 115b of Title 10 of the United States Code. DOD either concurred or partially concurred with the recommendations in our September 2012 report, stating that, among other things, the department was deliberate in applying lessons learned from previous workforce plans and identifying specific challenges and the actions being taken to address those challenges to meet statutory planning requirements by 2015.
In our September 2012 report on the department’s Civilian Senior Leader strategic workforce plan, we recommended that DOD conduct assessments of the skills, competencies, and gaps within all five career Civilian Senior Leader workforces and report them in DOD’s future strategic workforce plans. DOD concurred with our recommendation and stated that the department fell short of conducting assessments of skills, competencies, and gaps within three of the five Civilian Senior Leader workforces as a result of their technical roles in the DOD leadership hierarchy, and that, as roles are refined, this work will be reflected in future plans as appropriate. The status of these and other prior recommendations may be found in appendix III. DOD’s Plan Partially Addresses Most Statutory Requirements and Does Not Provide Comprehensive Workforce Information for Decision Makers Our assessment of DOD’s Fiscal Years 2013-2018 Strategic Workforce Plan found that DOD’s plan addresses 8 and partially addresses 19 of the 32 statutory requirements. The plan does not address 5 of the 32 statutory requirements. As a result, DOD’s plan does not provide decision makers with comprehensive information on DOD’s workforce. The 2013-2018 plan represents an improvement since 2008, when we found that DOD’s 2006-2010 Civilian Human Capital Strategic Workforce Plan did not meet six of eight statutory reporting requirements. However, since 2008, Congress has expanded the number and scope of statutory requirements for DOD to include, among other things, details about its civilian senior-leader and financial-management workforces. Table 1 provides the results of our analysis of the extent to which DOD’s plan for the overall, civilian senior-leader, and financial-management workforces addressed the Section 115b statutory requirements.
Section 115b of Title 10 of the United States Code requires DOD to develop a strategic workforce plan to shape and improve the civilian employee workforce of the department, to include specific elements such as an assessment of the current critical skills and competencies of the civilian workforce; any gaps in the existing or projected workforce; and the appropriate mix of civilian, military, and contractor capabilities. DOD also was required to identify the specific funding needed to achieve recruiting and retention goals, and the funding needed to implement strategies for developing, training, deploying, compensating, and motivating the civilian employee workforce of the department. In addition to addressing the overall civilian workforce, the legislation also requires the department to include separate chapters to address the shaping and improvement of specific workforces, including: (1) the Civilian Senior Leader workforce and (2) the Financial Management workforce. We summarize the results of our assessment of DOD’s overall civilian, Civilian Senior Leader, and Financial Management workforce plans below. In addition, we provide detailed information in appendix IV on the extent to which the 2013-2018 Strategic Workforce Plan addressed the statutory requirements. Overall Civilian Workforce With regard to the overall civilian workforce, we found that DOD’s Fiscal Year 2013-2018 Strategic Workforce Plan addresses three of the statutory requirements, partially addresses six requirements, and does not address one requirement. For example, DOD’s plan addressed the requirement to provide an assessment, using results-oriented performance measures, of the progress the department has made in implementing the plan in the prior year. Specifically, DOD’s plan identified six performance measures the department is using to assess progress. 
These measures include, among others, (1) the percentage of workforce planning key milestones completed by each functional community, then aggregated across all communities; (2) the percentage difference between actual workforce levels and target workforce levels for the mission-critical occupations; and (3) the number of mission-critical occupations for which competency models will be developed and deployed by September 2014. By contrast, DOD’s overall civilian plan partially addresses, among others, the requirements to assess the department’s critical skills and competencies of both the existing and future workforces. We found that DOD assessed its critical skills by identifying and providing information on the mission-critical occupations. However, DOD’s plan did not include competency assessment information for most of the workforces. In subsequent discussions, DOD officials told us that, as of April 2014, the department had completed competency models for the current mission-critical occupations, but had not yet completed the assessments of the competencies associated with those occupations and, therefore, could not include the required assessments in the plan. Those officials also told us that they intend to include the results of these assessments in future strategic workforce plans. Finally, DOD’s overall civilian workforce plan does not address the requirement to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. According to the plan, in fiscal year 2012 the department initiated a pilot study of the process to assess capabilities being delivered by the federal government career civilian workforce, military personnel, and contract support for three mission-critical occupations. Although the plan discussed the existence of this pilot program, it did not provide the actual assessment of those occupational series’ mix of capabilities, as directed by the requirement.
At the time of our review, Defense Civilian Personnel Advisory Service officials told us that the plan did not include specific information on the results of the pilot because the program was being reevaluated during the fiscal year 2014-2019 Strategic Workforce Plan cycle and will be updated for future reporting. According to the plan, the department’s goal is to assess a broad range of mission-critical occupation capabilities by fiscal year 2016, but the department did not provide further details on these plans, such as interim milestones or the number of mission-critical occupations it intends to assess. Civilian Senior Leader Workforce With regard to the Civilian Senior Leader workforce, we found that DOD’s 2013-2018 Strategic Workforce Plan addresses four of the statutory requirements, partially addresses seven requirements, and does not address one requirement. For example, DOD’s plan addresses the requirement to include specific strategies for developing, training, deploying, compensating, motivating, and designing career paths and career opportunities. Specifically, DOD’s plan identified the department’s effort to revise the scope of its Defense Executive Advisory Board to help the department maintain the caliber of its Senior Executive Service leadership. In addition, the plan discussed efforts by the department to promote diversity and interest among civilian employees to join the executive ranks. However, we found that the Civilian Senior Leader plan partially addresses, among others, the requirements to assess the critical skills and competencies of the current and future workforces because the plan only provided competency information for the Senior Executive Service. More specifically, the plan provided information on the 16 Office of Personnel Management government-wide competencies and 2 DOD-specific competencies for the Senior Executive Service, which include, among others, strategic thinking, leveraging diversity, and developing others.
The department’s plan did not, however, provide an assessment of the critical skills needed by the Civilian Senior Leader workforce, which is also a part of the statutory requirement. In addition, DOD’s Civilian Senior Leader workforce plan partially addresses the requirement to assess critical skill and competency gaps because, while it did include information on the process used for assessing gaps in its Senior Executive Service workforce, it did not include similar gap information for the other Civilian Senior Leader workforces. Finally, the Civilian Senior Leader plan also does not address the requirement to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. Although the plan provided a breakdown of the projected requirements for each of the department’s Civilian Senior Leader workforce categories through fiscal year 2018, and also stated that the department may use military personnel, among others, to fill gaps in its Civilian Senior Leader workforce, it did not discuss any requirements for the department’s senior military personnel. The plan further stated that there is little to no need for a contractor workforce at the Civilian Senior Leader workforce level. Financial Management Workforce With regard to the Financial Management workforce, we found that DOD’s 2013-2018 Strategic Workforce Plan addresses one, partially addresses six, and does not address three of the statutory requirements. For example, DOD’s plan addresses the requirement to include specific steps that the department has taken or plans to take to develop appropriate career paths for its workforce, among other things. The plan outlined six major milestones for developing the enterprise-wide financial management career paths, beginning, for example, with collecting the component-specific career paths. 
Consistent with the other two plans, however, we found that DOD’s Financial Management workforce plan partially addresses the requirements to assess the critical skills and competencies of the current and future workforces. Although the plan identified 4 of the community’s 13 occupational series as mission-critical occupations, which DOD considers its critical skills, and identified the projected trends associated with those occupational series due to retirement, the plan did not provide an assessment of the overall Financial Management workforce’s 23 competencies. According to officials, this information was not included in the plan because the community did not begin to conduct its competency assessments until April 2014 and the results of the assessments are not expected to be available until July 2014. Additionally, the plan partially addresses the requirement to include specific strategies for developing, training, deploying, compensating, and motivating the workforce to address gaps in critical skills and competencies, as well as the program objectives and funding for those strategies. We found that, although the plan included a specific strategy for, among other things, developing and training the Financial Management workforce, the plan did not identify the funding needed for this strategy. The plan stated only that the community coordinated funding needs with DOD’s components and that these funding needs were included in the fiscal year 2014 President’s Budget. Although not included in the plan, officials from the Comptroller’s office told us that they realigned approximately $13 million to $14 million per year across the fiscal year 2014-2018 Future Years Defense Program to address training shortfalls and to provide funding for additional web-based course development. 
Officials from the Defense Civilian Personnel Advisory Service told us that they did not include specific cost information for a majority of the goals and strategies because strategy information comes from the functional communities and, in some cases, each service implements the identified goals and strategies, as needed, and builds any resultant costs into their individual budgets. However, the information we obtained separately from the Financial Management functional community, if included in the plan, would have contributed toward DOD’s ability to meet this statutory requirement. The Financial Management workforce plan also does not address the requirement to include an assessment of civilian, military, and contractor capabilities. As noted above, the department took preliminary steps to assess its workforce mix capabilities through a pilot program; however, the financial management mission-critical occupations were not included as part of that pilot program, and details of the program were not included as part of DOD’s plan. According to the plan, the community focused primarily on the civilian Financial Management workforce and, therefore, only provided demographic information on civilians in the mission-critical occupations. The plan did not, however, provide similar demographic and projected target data for the military portion of the Financial Management workforce, nor did it address the statutory requirement to provide an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. According to officials from the Defense Civilian Personnel Advisory Service responsible for the overall Fiscal Year 2013-2018 Strategic Workforce Plan, the department did not address all requirements because the plan comprises individual submissions from each of the 22 functional communities and the communities are at different stages in the planning process. 
In addition, officials told us that the department tried to provide a consistent level of information for each workforce and, in some cases, omitted available information—such as information on specific workforces—that would address some of the statutory requirements. DOD officials also stated that some requirements were not addressed in the Fiscal Year 2013-2018 Strategic Workforce Plan because additional efforts are ongoing to address those requirements and the requirements will be reported on in future iterations of the plan. The department provided its own self-assessment as part of the plan in which it rated its progress in addressing the statutory requirements, and found that further work was needed to assess its critical competencies and appropriate mix of military, civilian, and contractor workforce capabilities. Without ensuring that all statutorily required information is included in the strategic workforce plan, DOD risks producing a plan that does not effectively address the department’s needs and will not aid the decision-making process for total workforce management. Although DOD has not met all statutory requirements in its strategic workforce plan, we are not making recommendations at this time because we previously recommended that DOD include these requirements in the plan, as discussed in additional detail in appendix III. We will continue to monitor DOD’s progress in implementing these recommendations.

DOD’s Strategic Workforce Plan Is Not Consistent with Some Key Strategic Workforce-Planning Principles

DOD has not fully incorporated key strategic workforce-planning principles into the development of its strategic workforce plan.
Specifically, although DOD has begun to take steps that are consistent with these key principles, we found that the department’s plan was not fully aligned with the budget process or the department’s other strategic workforce-management initiatives—such as those to address recruiting, retention, and readiness issues—and did not involve stakeholders within its Intelligence functional community. Further, DOD has begun to conduct competency-gap assessments, which may help it implement the key principles. However, DOD’s plan did not include a complete assessment of the competencies that will be needed to achieve current and future programmatic results, nor strategies tailored to address gaps in the critical competencies needed to achieve those results.

DOD’s Strategic Workforce Plan Is Not Fully Aligned with the Budget Process or Other Strategic Workforce-Management Initiatives

DOD’s strategic workforce plan is not fully aligned with the budget process or the department’s other strategic workforce-management initiatives. While this practice is not part of DOD’s statutory reporting requirements, key practices in human-capital management identify strategic alignment—which occurs when an agency’s workforce strategies are linked with its mission and goals, and integrated into its strategic plan, performance plan, and budget formulation—as one of six leading principles for effective strategic workforce planning.
In addition, according to one model of strategic human-capital management for federal agencies to use when designing their human-capital management plans, the highest level of strategic integration and alignment occurs when an agency considers human-capital initiatives or refinements in light of both changing organizational needs and the demonstrated successes or shortcomings of its human-capital efforts. Further, according to that model, the human-capital needs of the organization and new initiatives or refinements to existing human-capital approaches should be reflected in strategic workforce-planning documents. DOD’s Fiscal Year 2013-2018 Strategic Workforce Plan is missing key funding information that would facilitate DOD’s budget decisions because it is not aligned to the budget process. DOD’s plan identified 31 strategies for addressing workforce gaps, but did not provide specific information on the funding required—an important element in budget planning decisions—to implement most of these strategies. Only 1 of the 31 strategies—developed by the Intelligence functional community and identified in the plan—provided detailed information on the amount of funding required to implement the strategy. For some strategies, DOD’s plan stated that funding was, among other things, at the discretion of the DOD components or was to be included in component budgets. For other strategies, DOD’s plan stated that additional funding may be necessary. In each case where funding information was not provided, the plan provided no additional detail on the amount of funding needed to implement the strategies.
Furthermore, the importance of DOD aligning its workforce plan with other strategic workforce initiatives is demonstrated in a December 2013 letter that DOD provided to the congressional defense committees. That letter outlined steps the department is taking to minimize any negative effect on the morale of its civilian workforce and long-term consequences on recruiting and retention due to DOD’s implementation of civilian furloughs conducted during the summer of 2013. DOD’s letter stated that the morale of the civilian workforce had been declining for a number of reasons, including the furloughs, and that the department expected future furloughs and budget uncertainties to further this downward trend. The department also stated that it anticipates that these issues will factor heavily into employees’ decisions about when to depart, as well as individuals’ decisions about whether to apply for positions within the department, which will have an effect on recruiting, retention, and readiness. We recognize that DOD’s letter to the congressional defense committees on this matter was sent after its strategic workforce plan was provided to Congress. We note, however, that the steps outlined in DOD’s letter are targeted at minimizing the declining morale of the department’s civilian workforce. The strategic workforce plan includes recruitment, retention, and development strategies aimed specifically at addressing gaps in critical skills within the department’s functional communities. However, without aligning the steps DOD describes in its letter to congressional defense committees with its strategic workforce plan, DOD may lack assurance that these efforts will support its overall strategic workforce-planning goals, including, for example, how it will address gaps.
To address these issues, DOD stated that the department continues to focus on its strategic workforce-planning efforts, which include strategies identified and carried out by the functional communities aimed at recruiting, retaining, motivating, and developing the present and future civilian workforce. However, DOD officials stated during this review that the recruiting, retention, and workforce development strategies outlined in the department’s strategic workforce plan are developed by the individual functional communities and are implemented and funded as appropriate primarily at the component level. Therefore, these strategies do not constitute a department-wide approach to addressing morale, recruitment, and retention issues related to past or future furloughs or budget uncertainties. Further, as our analysis of DOD’s current strategic workforce plan shows, DOD has not completed its competency or competency-gap assessments to know where it should target its efforts, nor did its plan include the funding required to carry out such strategies. DOD’s strategic workforce plan currently is not fully aligned with the budget process or other workforce initiatives because there is no requirement for DOD’s plan to be aligned with these other efforts. Additionally, there is not a common understanding within the Office of the Under Secretary of Defense for Personnel and Readiness of how the department’s strategic workforce plan will be or should be used. Officials from the Defense Civilian Personnel Advisory Service, who have overall responsibility for DOD’s strategic workforce plan, told us they do not envision the plan being linked to other strategic workforce efforts, but added that they believe the workforce-planning process itself has value for those involved in the development process.
In contrast, officials from the Total Force Requirements and Sourcing Policies Directorate within the Office of the Under Secretary of Defense for Personnel and Readiness told us that the strategic workforce plan in its current form is more useful as a source of demographic information on the department’s civilian workforce than as a strategic document intended to guide workforce decisions. These officials further stated that the plan would be more useful as a strategic document if certain actions were taken, such as aligning the strategic workforce plan with the DOD budget process, among other things. In addition, although the plan states that DOD intends to align it with the DOD budget and total force–management processes, it did not include specific details or time frames for doing so. According to officials with the Defense Civilian Personnel Advisory Service, they have been unable to make progress on these efforts because of a lack of information and challenges in coordinating with other offices within DOD. These officials further stated that they do not foresee these challenges being resolved before the department issues its next strategic workforce plan. Our analysis of the Fiscal Year 2013-2018 Strategic Workforce Plan in particular indicates that without fully aligning DOD’s strategic workforce plan with the budget process and management workforce initiatives, such as those to address recruiting, retention, and readiness issues associated with declining morale, the department will not be in the best position to make informed management and resource decisions about its workforce. What we found in our analysis of the Fiscal Year 2013-2018 Strategic Workforce Plan is in line with a body of work we have conducted in recent years examining other DOD strategic workforce efforts, which found that not aligning the plan with other strategic workforce initiatives is a long- standing issue within DOD. 
These include workforce-sizing decisions and total force–management efforts. In July 2012, we testified on our observations of DOD’s planning for its civilian workforce requirements and reported on DOD’s efforts since the 1990s to reduce its civilian workforce. Specifically, we found that these efforts did not focus on taking a strategic approach to its workforce downsizing and reshaping efforts, which resulted in imbalances to the shape, skills, and retirement eligibility of its workforce. We found that this was especially true of the civilian acquisition workforce, which from September 1989 to September 1999 was reduced by almost 47 percent. This rate of reduction substantially exceeded that of the rest of the DOD workforce. We concluded that 11 consecutive years of downsizing produced serious imbalances in the skills and experience of the highly talented and specialized civilian acquisition workforce, putting DOD on the verge of a retirement-driven talent drain. Moreover, we found that the lack of an adequate number of trained acquisition and contract-oversight personnel has, at times, contributed to unmet expectations and placed DOD at risk of potentially paying more than necessary. In January 2013, we reported on DOD’s efforts to implement its self- imposed cap on its civilian workforce levels, and found that the department had not completed its competency-gap assessments— identifying gaps in the existing or projected civilian workforce that should be addressed to ensure the department has continued access to needed critical skills and competencies. As a result, we concluded that information on competency gaps was unavailable to help inform decision making about DOD’s civilian workforce when implementing the cap. We also concluded that a fully developed workforce plan, with all completed gap assessments, would help DOD make informed decisions about reducing its workforce and develop strategies to mitigate skill shortages that affect achieving the mission. 
As a result, we recommended that DOD involve functional community managers and use information from its critical-skill and competency-gap assessments as they are completed to make informed decisions for future changes to the workforce and that it document its strategies. DOD partially concurred with our recommendation and stated that it aligns its workforce, both in size and structure, to mission requirements and justifies the current size and possible reductions or increases to the workforce based on mission workload rather than competency or skill gaps needed to deliver capabilities. However, we concluded in that report that DOD is not in a position to justify the size of its workforce until it has fully addressed its statutory requirement to identify areas of critical-skill and competency gaps within the civilian workforce. At the time of our current review, DOD maintained its civilian workforce cap, and, in addition, section 955 of the National Defense Authorization Act for Fiscal Year 2013 required the Secretary of Defense to ensure that the civilian and service contractor workforces are appropriately sized to support and execute the National Military Strategy, taking into account military personnel and force-structure levels, and to develop an efficiencies plan for those workforces. Section 955 further requires that the efficiencies plan achieve savings in total funding of the civilian and service contractor workforces that are not less than certain savings achieved from reductions in military end strength over a 6-year period, subject to certain exceptions. In May 2013, we reported on DOD’s total force–management efforts and found that DOD had taken some steps to improve its understanding and management of its total workforce, but it had not assessed the appropriate mix of its military, civilian, and contractor personnel capabilities in its strategic workforce plan as required by law.
We concluded that the department was hampered in its ability to make more-informed strategic workforce mix decisions, which are crucial to meeting DOD’s statutory responsibility to manage its total workforce. We recommended that DOD revise its existing workforce policies and procedures according to current statutory requirements in Section 129a of Title 10 of the United States Code as well as regulatory requirements set forth in the Office of Federal Procurement Policy’s September 2011 policy letter to address the (1) determination of the appropriate workforce mix, and (2) identification of critical functions. DOD partially concurred and stated that the department’s strategic workforce plan is an integral tool in informing policies and procedures for retention, recruitment, and accession planning and helps inform the demographic makeup of its civilian personnel inventory. DOD also stated that the department justifies its workforce size based on mission workload, rather than competency or skills gaps. However, we concluded in that report that DOD is required by law to establish policies and procedures that require the use of the strategic workforce plan when making determinations of the appropriate mix of total workforce personnel necessary to perform its mission, and to include in the strategic workforce plan an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. As of May 2014, DOD has not taken action to implement these recommendations.

DOD’s Approach to Developing Its Plan Did Not Fully Involve Stakeholders within Its Intelligence Functional Community

DOD’s approach to developing its Fiscal Year 2013-2018 Strategic Workforce Plan did not fully involve stakeholders within the Intelligence functional community to help ensure statutorily required information is included.
While this kind of stakeholder involvement is not statutorily required, a key principle of effective strategic workforce planning is to involve top management, employees, and other stakeholders in developing, communicating, and implementing strategic workforce plans. In addition, federal internal-control standards state that decision makers need complete and relevant information to manage risk associated with achieving objectives, such as those outlined in strategic plans. In developing its Fiscal Year 2013-2018 Strategic Workforce Plan, DOD did not fully involve some stakeholders within the Intelligence functional community, which resulted in the omission of key stakeholder input. Specifically, during the strategic workforce planning process, the Intelligence functional community compiled workforce information on its one mission-critical occupation—the 0132 Intelligence series—and the two senior-leader workforces over which the Office of the Under Secretary of Defense for Intelligence has oversight—the Defense Intelligence Senior Executive Service and the Defense Intelligence Senior Level workforces. DOD relies on these two senior-leader workforces to oversee key activities within the defense intelligence community. The Defense Intelligence functional community includes personnel in intelligence occupations from the military departments as well as the National Security Agency, the Defense Intelligence Agency, the National Reconnaissance Office, the National Geospatial-Intelligence Agency, and the department’s Defense Intelligence civilian senior-leader workforces—Defense Intelligence Senior Executive Service and Defense Intelligence Senior Level. DOD is required to include information on the two Intelligence Civilian Senior Leader workforces in its strategic workforce plan. According to Intelligence functional community managers, they provided information on these two workforces to the Defense Civilian Personnel Advisory Service.
However, our review found disparities between the level of detail of the information provided and the information that was incorporated in the strategic workforce plan about these two workforces. For example, the strategic workforce plan included one paragraph of descriptive information on both categories of defense intelligence senior-leader workforces in the Senior Executive Service workforce plan. In contrast, the Intelligence functional community provided detailed information on the Defense Intelligence Senior Executive Service and Defense Intelligence Senior Level workforces to the Defense Civilian Personnel Advisory Service for inclusion in the strategic workforce plan. This information included details that would have supported an assessment of the critical skills and competencies, among other things, of the defense intelligence civilian senior-leader workforces, which is required by law to be included in DOD’s strategic workforce plan. However, the Defense Civilian Personnel Advisory Service did not fully incorporate this information on the workforces’ critical skills and competencies into the final version of DOD’s strategic workforce plan. During the course of our review, we discussed these disparities with officials from both the Intelligence functional community and the Defense Civilian Personnel Advisory Service to try to determine the cause of the missing information. According to officials from the Defense Civilian Personnel Advisory Service, the office revised and omitted select information provided by the Intelligence functional community on defense intelligence civilian senior-leader workforces, but they believed the level of detail provided was sufficient to address the statutory requirements. 
However, our assessment of the information raises questions as to whether the officials responsible for the Civilian Senior Leader strategic workforce plan checked these omissions against the statutory reporting requirements and, if they did not, whether they as a result excluded statutorily required key information on defense intelligence civilian senior-leader workforces from the strategic workforce plan. In our discussions, officials within the Intelligence functional community responsible for the initial development of this required information and officials from the Defense Civilian Personnel Advisory Service were unclear as to why, in the final coordination of the plan, all stakeholders from the Intelligence functional community were not involved in helping to ensure the inclusion of key statutorily required information. In subsequent discussions, officials from the Defense Civilian Personnel Advisory Service stated that they would be vigilant of such omissions during the development of future plans to help ensure all available, statutorily required information is included and that all appropriate stakeholders are involved in the development and review processes.

DOD’s Plan Did Not Include Complete Competency-Gap Assessments or Strategies for Addressing Critical Competency Gaps

DOD’s Fiscal Year 2013-2018 Strategic Workforce Plan did not include completed competency-gap assessments or strategies for addressing critical competency gaps. A leading principle of effective strategic workforce planning is that agencies should determine the critical skills and competencies that will be needed to achieve current and future programmatic results and develop strategies tailored to address gaps and human-capital conditions in critical skills and competencies that need attention.
Although we found that DOD has taken steps to determine the critical skills needed by the department through the identification of its mission-critical occupations, our evaluation of DOD’s Fiscal Year 2013-2018 Strategic Workforce Plan found that the department has not completed its competency and competency-gap assessments or included strategies to address competency gaps based on such assessments. In 2004, we recommended, and DOD partially concurred, that the department develop workforce strategies to fill the identified skills and competency gaps. Additionally, in 2012, we recommended DOD conduct competency-gap assessments for its mission-critical occupations and report the results. DOD concurred with our recommendation and stated in its agency comments that competency gaps would be assessed in the future. To date, however, the department’s strategic workforce plan has only included strategies that address staffing gaps—which are aimed at addressing gaps in the critical skills, but not the critical competencies, needed in a workforce—within its mission-critical occupations. Furthermore, our analysis of three functional communities’ strategic workforce plans found that the Intelligence and Financial Management functional communities have not completed their competency-gap assessments or developed strategies for addressing competency gaps in their workforces. In regard to the Civilian Senior Leader functional community, DOD’s strategic workforce plan provided information on competencies for one of the five categories of Civilian Senior Leader workforces—the Senior Executive Service. Our analysis of these three functional communities’ strategic workforce plans suggests that additional attention is needed in DOD’s approach to strategic workforce planning to fully address our prior recommendations with regard to critical skill and competency assessments.
Intelligence Functional Community

The Intelligence functional community has identified broad competencies for its one mission-critical occupation—the 0132 Intelligence series—but further progress in competency development and assessment for its workforce cannot be made until the community develops a common standard for titling occupational specialties (e.g., Human Intelligence and Signals Intelligence) within its 0132 Intelligence series workforce. According to Intelligence functional community officials and the Intelligence community’s own workforce plan, eight broad competencies have been identified for its mission-critical occupation. However, competency assessments and gap assessments cannot be completed until the community completes development of standard civilian intelligence job titles for occupational specialties within the Intelligence series mission-critical occupation. Officials from the Intelligence functional community told us they are coordinating with relevant stakeholders within the defense intelligence community and the Defense Civilian Personnel Advisory Service on an occupational titling standard to address this issue. Those officials also stated that efforts are underway and that it may take up to 2 years to complete the development of the occupational titling standard. Once the standard is developed, competency models for each occupational specialty within the Intelligence series will be further developed for the workforce, according to officials.

Financial Management Functional Community

The Financial Management functional community is currently in the process of developing a strategy that, in part, addresses its workforce’s skills and competencies and is scheduled to assess the competencies of its mission-critical occupations.
The strategy being developed—the Financial Management Certification Program—is expected to consolidate multiple Financial Management workforce development efforts across the department into a mandatory program to educate, train, and certify both civilian and military financial management personnel. In addition, competency assessments for the Financial Management functional community’s mission-critical occupations were scheduled to be conducted using an enterprise-wide competency assessment tool from April 7 to April 25, 2014; the assessment period was subsequently extended through May 9, 2014. The results of this assessment were not yet available to the functional community, and officials stated that they would receive the results from the Defense Civilian Personnel Advisory Service by July 2014. At the time of our review, officials from the Office of the Secretary of Defense (Comptroller) were waiting for the Defense Civilian Personnel Advisory Service to analyze the competency-assessment data. According to DOD, this effort, which included key financial-management and leadership competencies, will enable the community to assess and close the gaps between current capabilities and the competencies required by DOD’s future Financial Management workforce.

Civilian Senior Leader Functional Community

The strategic workforce plan for DOD’s Civilian Senior Leader functional community—a cross-cutting workforce that includes personnel from the Senior Executive Service and Senior Level workforces, among others—identifies the competencies of its Senior Executive Service workforce but does not include information on the competencies and competency assessments of its other Civilian Senior Leader workforces. The department’s Civilian Senior Leader workforce plan provides information on Office of Personnel Management and DOD-specific competencies for DOD’s Civilian Senior Leader workforce.
However, the plan does not include information on the competencies and competency assessments for its Senior Level and Senior Technical workforces. According to the plan and officials responsible for the development of the Civilian Senior Leader strategic workforce plan, these senior-leader workforces are specialized in nature and are evaluated primarily on the technical or functional competencies found within their specific occupational series. The plan also does not provide information on the competencies and competency assessments for the Defense Intelligence Senior Executive Service and Defense Intelligence Senior Level workforces. Furthermore, the plan does not identify any specific strategies for addressing competency gaps for the Civilian Senior Leader workforces, though it does identify some minimal competency gaps for its Senior Executive Service workforce. Because we previously recommended that DOD conduct competency-gap analyses as part of our 2004 and 2012 reports, we are not making new recommendations to address this issue in this report. We continue to believe that our prior recommendations have merit and would improve DOD’s strategic workforce plan. We will continue to monitor DOD’s efforts to address our prior recommendations.

Conclusions

In this time of budgetary and fiscal constraint, a strategic workforce plan that includes relevant and complete workforce analyses and an associated plan of action is crucial for DOD to effectively and efficiently manage all of its civilian workforces. Since our first report in 2008 on DOD’s strategic workforce plan, we have consistently reported that DOD has made progress in addressing strategic workforce-planning requirements. In that time, however, we have also identified consistent issues in DOD’s efforts to develop its plan, such as the department not assessing and reporting on mission-critical competencies and gaps.
We recognize the effort that DOD is putting forth to include this information in its recurring strategic workforce plans and address all mandated strategic workforce planning requirements. However, without ensuring that all statutorily required information is included in the strategic workforce plan, DOD risks producing a plan that does not effectively address the department’s needs and will not aid the decision-making process for total workforce management. As a result, we continue to believe that DOD should fully implement our past recommendations on including all required information. While this is the last of DOD’s strategic workforce plans we are mandated to evaluate, we believe that, moving forward, it is important for DOD to continue its efforts to complete development of its strategic workforce plans to help ensure that the department has the right people, in the right place, at the right time. This type of workforce planning can also help DOD focus limited resources on those human-capital programs that most affect its ability to accomplish the department’s wide array of missions. However, until DOD aligns its strategic workforce plan with the budget process and other strategic management initiatives, such as those to address recruitment, retention, and readiness issues, both DOD and congressional decision makers may not have visibility over the areas most in need of attention. Through greater alignment, DOD can help to better ensure that its strategic workforce plan appropriately addresses the human-capital challenges of the future and better contributes to the agency’s efforts to meet its missions and goals.
Recommendation for Executive Action To help ensure that decision makers and Congress have the necessary information to provide effective oversight of DOD’s civilian workforce and that the strategic workforce plan can be used effectively, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to align DOD’s strategic workforce plan with the budget and management workforce initiatives, such as those to address recruiting, retention, and readiness issues associated with declining morale among its civilian workforces. Agency Comments and Our Evaluation We provided a draft of this report to DOD for comment. In written comments, DOD concurred with our recommendation. DOD’s comments are reprinted in their entirety in appendix V. DOD concurred with our recommendation to align the department’s strategic workforce plan with the budget and management workforce initiatives, such as those to address recruiting, retention, and readiness issues associated with declining morale among its civilian workforces. We are encouraged that DOD recognizes the importance of linking its strategic workforce plan with the budget and other management workforce initiatives. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Under Secretary of Defense for Personnel and Readiness. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.
Appendix I: Scope and Methodology To determine the extent to which the Department of Defense’s (DOD) Fiscal Year 2013-2018 Strategic Workforce Plan addressed the statutory requirements of Section 115b of Title 10 of the United States Code, we evaluated DOD’s 2013-2018 Strategic Workforce Plan and supporting documentation to determine the degree to which the plan addresses, partially addresses, or does not address each of the required elements. Using a scorecard methodology, we assigned a rating of “addresses” if all elements of a legislative requirement were cited, even if specificity and details could be improved upon. We assigned a rating of “partially addresses” if an assessment or plan of action did not include all of the elements of a legislative requirement. A rating of “does not address” was assigned when elements of a characteristic were not explicitly cited or discussed or any implicit references were either too vague or too general to be useful. To make this determination, two GAO analysts independently evaluated and scored each element in DOD’s plan. When the two analysts gave different initial ratings, they met to discuss and resolve the differences in their respective scorecards. In addition, our Office of General Counsel reviewed the team’s completed analysis. We then discussed the results of our analysis with officials from the Strategic Human Capital Planning Office and the Defense Civilian Personnel Advisory Service, within the Office of the Under Secretary of Defense for Personnel and Readiness, who are responsible for developing the plan.
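The scorecard methodology above can be sketched in a few lines of code. This is purely an illustrative sketch of the rating-and-reconciliation logic described in the text; the function names, element names, and example ratings below are our own inventions, not GAO's actual tooling or results.

```python
# Illustrative sketch of the two-analyst scorecard methodology:
# each statutory element is rated independently by two analysts;
# agreement stands, and disagreements must be resolved by discussion.
# All names and sample ratings here are hypothetical.

RATINGS = ("addresses", "partially addresses", "does not address")

def reconcile(rating_a, rating_b, discussed_rating=None):
    """Return the final rating for one statutory element.

    If the two independent ratings agree, that rating stands;
    otherwise a rating agreed upon in discussion is required.
    """
    for r in (rating_a, rating_b):
        if r not in RATINGS:
            raise ValueError(f"unknown rating: {r}")
    if rating_a == rating_b:
        return rating_a
    if discussed_rating is None:
        raise ValueError("differing ratings must be resolved by discussion")
    return discussed_rating

# Hypothetical scorecard for a handful of requirements.
scorecard = {
    "critical skills and competencies": reconcile("addresses", "addresses"),
    "gap assessment": reconcile("partially addresses", "partially addresses"),
    "civilian/military/contractor mix": reconcile(
        "does not address", "partially addresses",
        discussed_rating="does not address"),
}

print(scorecard["civilian/military/contractor mix"])  # does not address
```

The sketch captures the key safeguard of the method: a final rating can never be produced from conflicting independent scores without an explicit reconciliation step.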
We also discussed the results of our assessment with officials from the Office of the Under Secretary of Defense for Intelligence and the Office of the Under Secretary of Defense (Comptroller), who serve as the functional community managers for the Intelligence functional community and the Financial Management functional community, respectively. Officials from the Defense Civilian Personnel Advisory Service have responsibility for the Civilian Senior Leader workforce plan. To determine the extent to which DOD’s strategic workforce plan is consistent with key strategic workforce planning principles, we compared DOD’s 2013-2018 Strategic Workforce Plan to key principles of effective strategic workforce planning. We selected two principles—aligning workforce planning with strategic planning and budget formulation, and involving stakeholders, among others, in developing, communicating, and implementing the strategic workforce plan—that do not overlap with the statutory requirements. Additionally, we selected two other key principles—determining the critical skills and competencies needed to achieve current and future programmatic results and developing strategies that are tailored to address gaps—because they are integral to developing a comprehensive strategic workforce plan and we have consistently found that DOD has not yet completed actions to address issues related to them (see GAO, Human Capital: DOD Needs Complete Assessments to Improve Future Civilian Strategic Workforce Plans, GAO-12-1014 (Washington, D.C.: Sept. 27, 2012)). The selection of these three functional communities does not constitute a representative sample of DOD’s 22 functional communities. Thus, while the results cannot be projected to all functional communities, they did provide us with important insights.
For each of the three functional communities selected, we reviewed GAO and Office of Personnel Management guidance regarding strategic workforce planning, and compared DOD’s current approach to developing and integrating functional community–specific information into its Strategic Workforce Plan against that guidance. Functional community managers are responsible for monitoring the strategic human-capital planning efforts for their respective communities, including workforce forecasting, competency assessment, and strategy development. We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: DOD’s Functional Communities and Mission-Critical Occupations The Department of Defense (DOD) uses a functional-community construct and functional-community categories to group together employees who perform similar functions. Mission-critical occupations are those occupations that are critical to the success of meeting the DOD mission within its functional communities. For the Fiscal Year 2013-2018 Strategic Workforce Plan, the department expanded its functional community construct from 12 functional communities to 22 functional communities and from 24 mission-critical occupations to 32 mission-critical occupations. Table 2 identifies the department’s functional communities and mission-critical occupations. Appendix III: Status of Prior GAO Recommendations Related to Strategic Workforce Planning In prior reports on the Department of Defense’s (DOD) strategic workforce-planning efforts, we made several recommendations for executive action to improve DOD’s strategic workforce plan and the department’s approach to strategic workforce planning. We consider a recommendation open if DOD has not taken or has not completed actions to address a recommendation.
We consider a recommendation closed–implemented once DOD has taken action to satisfy the intent of the recommendation. We consider a recommendation closed–not implemented if the intent of the recommendation has not been satisfied or circumstances have rendered the recommendation invalid. We are continuing to monitor DOD’s efforts to address these recommendations. Table 3 summarizes our recommendations, DOD’s response to our recommendations, and what actions, if any, DOD has taken to address them. Appendix IV: GAO’s Assessment of the Extent to Which DOD’s Civilian Workforce Plan Addresses Statutory Requirements This appendix provides our assessment of the extent to which the Department of Defense’s (DOD) Fiscal Year 2013-2018 Strategic Workforce Plan for its overall civilian workforce, civilian senior-leader workforce, and financial-management workforce address statutory requirements in Section 115b of Title 10 of the United States Code in the following three tables. Appendix V: Comments from the Department of Defense Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the above-named contact, David Moser, Assistant Director; James Kernen; Steven Lozano; Brian Pegram; Michael Pose; Terry Richardson; Jennifer Weber; Erik Wilkins-McKee; and Michael Willems made key contributions to this report. Related GAO Products Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Human Capital: Critical Skills and Competency Assessments Should Help Guide DOD Civilian Workforce Decisions. GAO-13-188. Washington, D.C.: January 17, 2013. Human Capital: DOD Needs Complete Assessments to Improve Future Civilian Strategic Workforce Plans. GAO-12-1014. Washington, D.C.: September 27, 2012.
Human Capital: Complete Information and More Analyses Needed to Enhance DOD’s Civilian Senior Leader Strategic Workforce Plan. GAO-12-990R. Washington, D.C.: September 19, 2012. DOD Civilian Workforce: Observations on DOD’s Efforts to Plan for Civilian Workforce Requirements. GAO-12-962T. Washington, D.C.: July 26, 2012. Defense Acquisition Workforce: Improved Processes, Guidance, and Planning Needed to Enhance Use of Workforce Funds. GAO-12-747R. Washington, D.C.: June 20, 2012. Defense Acquisitions: Further Actions Needed to Improve Accountability for DOD’s Inventory of Contracted Services. GAO-12-357. Washington, D.C.: April 6, 2012. Defense Workforce: DOD Needs to Better Oversee In-sourcing Data and Align In-sourcing Efforts with Strategic Workforce Plans. GAO-12-319. Washington, D.C.: February 9, 2012. DOD Civilian Personnel: Competency Gap Analyses and Other Actions Needed to Enhance DOD’s Strategic Workforce Plans. GAO-11-827T. Washington, D.C.: July 14, 2011. Human Capital: Opportunities Exist for DOD to Enhance Its Approach for Determining Civilian Senior Leader Workforce Needs. GAO-11-136. Washington, D.C.: November 4, 2010. Human Capital: Further Actions Needed to Enhance DOD’s Civilian Strategic Workforce Plan. GAO-10-814R. Washington, D.C.: September 27, 2010. Human Capital: Opportunities Exist to Build on Recent Progress to Strengthen DOD’s Civilian Human Capital Strategic Plan. GAO-09-235. Washington, D.C.: February 10, 2009. Human Capital: The Department of Defense’s Civilian Human Capital Strategic Plan Does Not Meet Most Statutory Requirements. GAO-08-439R. Washington, D.C.: February 6, 2008. DOD Civilian Personnel: Comprehensive Strategic Workforce Plans Needed. GAO-04-753. Washington, D.C.: June 30, 2004.
Strategic workforce planning can help DOD determine whether it has the civilian personnel with the necessary skills and competencies to perform a wide variety of duties and responsibilities, including mission-essential combat-support functions, such as logistics and maintenance, that traditionally have been performed by uniformed military personnel. In 2006, Congress enacted a requirement for DOD to produce strategic workforce plans, and GAO first reported on DOD's plans in 2008. The National Defense Authorization Act for Fiscal Year 2010 mandates that GAO report to Congress on these plans. GAO evaluated the extent to which (1) DOD's Fiscal Year 2013-2018 Strategic Workforce Plan addressed statutory requirements; and (2) DOD's plan is consistent with key strategic workforce-planning principles. GAO examined DOD's Fiscal Year 2013-2018 Strategic Workforce Plan and associated documents, relevant legislation, and key strategic workforce-planning principles, and interviewed officials from across the department involved in producing the plan. The Department of Defense's (DOD) Fiscal Year 2013-2018 Strategic Workforce Plan addressed or partially addressed 27 of the 32 statutory reporting requirements and did not address 5 of the requirements. The statute requires DOD, for example, to conduct assessments of critical skills and competencies, to assess gaps in the workforce, and to assess the appropriate mix of civilian, military, and contractor capabilities. DOD has taken steps to address many of its reporting requirements since 2008. However, DOD has not yet addressed the requirement to assess the appropriate mix of civilian, military, and contractor capabilities in its plan, as shown in the table below. GAO previously has made 10 recommendations regarding statutory compliance covering a range of issues. 
In addition to recommending that DOD conduct the required assessments, GAO also has recommended providing clearer guidance for developing the plan and enhancing performance measures. GAO is not making further recommendations related to statutory compliance at this time. DOD's strategic workforce plan does not fully incorporate key strategic workforce-planning principles. There are six key strategic workforce-planning principles, and most are similar to elements of the statutory reporting requirements, such as assessing critical skills and competencies. A key principle that is not addressed in the statutory requirements is strategic alignment, which links workforce strategies to an agency's mission and goals, and aligns them with, among other things, budget formulation. DOD's 2013-2018 plan noted the need to integrate the department's plan with the budget process but did not include specific details and, according to officials, DOD does not have actions underway to do so. Further, the plan identified strategies addressing some critical-skill staffing gaps, but did not provide specific information on the funding required to implement most of these strategies. The plan also did not align with recent recruiting, retention, and readiness initiatives to improve the morale of DOD's civilian workforce as reported to congressional defense committees. Without aligning its workforce plan with the budget process and management workforce initiatives, such as those to address recruiting and retention issues associated with declining morale, the department will not be in the best position to make informed management and resource decisions about its workforce.
Background The EFOG-M is being designed to engage armored combat vehicles, other high value ground targets (such as command, control, and communication centers), and helicopters beyond the line of sight at ranges up to 15 kilometers. The system will consist of a gunner’s station and eight missiles mounted on a High Mobility Multipurpose Wheeled Vehicle. The missiles are launched toward a target area based upon forward intelligence information. After missile launch, the gunner can intervene at any time to lock on and engage detected targets. The gunner views the flight path and the target via a seeker (located in the missile) that is linked to the gunner’s video console by fiber optic cable. Figures 1 and 2 show the EFOG-M fire unit and missile and the potential EFOG-M deployment concept, respectively. According to an Army official, the EFOG-M uses the same concept and some of the same technology as three previously terminated efforts costing more than $440 million—the Fiber Optic Guided Missile (FOG-M), the Non-Line-of-Sight Missile (NLOS), and the NLOS-Combined Arms (CA). The Army began development work in 1978 to demonstrate fiber optics guidance and conducted flight tests in 1984 to demonstrate the technology as an antitank missile (FOG-M). However, in late 1986, the Office of the Secretary of Defense (OSD) approved development not primarily as an antitank weapon but to provide defense against helicopters (NLOS). Although the Army had planned to produce NLOS, OSD decided that the program would be terminated once its development was completed because other programs had higher priority and other systems could accomplish the intended mission. In the event, the Army terminated the program in January 1991, before completing development, because of excessive cost growth. The Army restarted the effort as NLOS-CA in mid-1991, performed concept analyses, explored alternative acquisition strategies, and sought approval for engineering and manufacturing development.
But its development was not approved. The Army is now developing the EFOG-M and plans to acquire limited quantities under an advanced technology demonstration program designed to demonstrate potential technology enhancements; the Army will then provide the system and support it for the RFPI ACTD. RFPI is exploring new approaches to provide an early entry force that is significantly more capable against a heavy armored threat. The primary objective of an ACTD is to accelerate the application of new technology to solve military problems. ACTDs are to (1) evaluate military utility before committing to acquisition, (2) develop operational concepts, and (3) rapidly provide operational capability. During this process, ACTD programs require much more early user involvement than expected during normal acquisition program phases. Department of Defense (DOD) officials believe ACTD programs will shorten the acquisition process. Under the demonstration program, the Army plans to procure 12 fire units, 3 platoon vehicles, 300 missiles, and associated equipment at an estimated cost of about $280 million. According to Army officials, the development, demonstrations, and evaluations could result in one of the following actions: terminating the effort before building the system hardware (not a likely option); purchasing only the limited quantities and making a decision as to whether to leave the residual quantities in the field; procuring much larger quantities of the EFOG-M currently being developed (3,126 missiles and 120 fire units are being examined from an affordability standpoint); or substantially modifying the system and procuring larger quantities.
The Army plans to demonstrate EFOG-M performance and military utility through (1) simulations, (2) contractor-conducted missile performance tests, (3) a force-on-force demonstration along with other early entry systems and potential systems, (4) government check-out missile firings, and (5) a 2-year user fielding and evaluation of a residual force. Table 1 shows the schedule for these events. The Requirement for EFOG-M Has Not Been Established The Army does not have an agreed-upon requirement for the EFOG-M. It has not completed the documentation or analyses for the EFOG-M program required for most acquisition programs. For example, the Army has not (1) prepared a mission need statement documenting the mission deficiency, (2) analyzed other alternatives to satisfy the need, (3) defined the system’s operational and performance requirements, or (4) comprehensively compared EFOG-M’s cost and operational effectiveness to other existing or developmental systems. According to Army officials, that type of documentation, analysis, and evaluation is not required for ACTD programs. They said these changes resulted from defense acquisition reform efforts. However, at the current time, U.S. Army Training and Doctrine Command (responsible for determining requirements) officials state (1) the system is needed for use with early entry forces and (2) the requirement will be defined during the ACTD. “NLOS-CA has struggled in budget competition within the Army because it is such a revolutionary concept. It simply doesn’t fit well anywhere within the Army’s branch structure and has been passed around among air defense (anti-helicopter version), artillery, and infantry branches.” Because requirements and/or support for predecessor systems have disappeared after considerable effort and expenditure of funds, we believe that the EFOG-M requirement should be agreed upon and formally documented.
In addition, we believe the system’s cost and operational effectiveness should be comprehensively compared to other alternatives for satisfying that requirement. In its report (104-131, June 1, 1995) on the National Defense Authorization Act for Fiscal Year 1996, the House National Security Committee expressed concern that the Army is pursuing a weapon system that provides questionable value and possesses known fiscal risk. The committee recommended a provision (sec. 215) that would (1) require the Secretary of the Army to certify by December 1, 1995, that a requirement exists for the EFOG-M and whether there is a cost-effectiveness analysis supporting such requirement and (2) limit the expenditure of funds for the EFOG-M program to that identified in the current program plan only ($280 million, based on fiscal year 1995 constant dollars) and deny continuation of the program beyond fiscal year 1998 if contract obligations are not met. Some Criteria for Evaluating Performance Are Not Specific Army guidance for advanced technology demonstration programs requires establishment of criteria to be met, and the RFPI ACTD management plan recognizes those criteria as the technical goals for the system. A DOD instruction states that, to be effective, the criteria must be specific and quantitative. Since the ACTD’s objective is to judge the military value of the system, it appears reasonable and prudent to establish specific measurable standards as a basis for making the judgment. The Army’s EFOG-M Advanced Technology Demonstration Plan establishes exit criteria for evaluating EFOG-M performance (see app. I). Some of these criteria are specific and easily measurable. For example, the plan establishes specific minimum criteria that must be accomplished by mid-1996 for missile reload time, the number of missiles mounted on each fire unit, and the system response time for missile launch.
It also provides specific minimum criteria that must be accomplished by mid-1999 for missile range and set-up time for system operation. However, the criteria for some other operational issues that project officials consider critical do not provide the specific values to be attained—a standard to measure against to determine success. For example, to demonstrate successful identification of targets, the minimum criterion to be accomplished in 1996 is “gunner recognition without diverting the missile and obtain in-flight intelligence.” However, the plan does not identify the minimum required probabilities of correctly identifying the target—a performance issue very critical to the effectiveness of the weapon system—either in 1996 or at the end of the technology demonstration. Another criterion extremely important to the basic role and need for the system is demonstrating that targets can be engaged even though they are not within the gunner’s view. The criterion states that the Army is to demonstrate engaging targets not in the line of sight by mid-1996. But the criterion does not address the required probability for engaging each target correctly identified—a key determinant of the success of the system—either in 1996 or at the end of advanced technology demonstration in 1999. In addition, the minimum criterion for warhead lethality is to “defeat existing threat tanks and helicopters.” But it does not establish and provide for measuring specific minimum required probabilities of defeating the tanks or helicopters with a single shot. However, the probability of killing a target with a single shot is critical to determining whether the system is cost-effective and, consequently, whether it should be procured. We believe that in order to accomplish an evaluation of the system, the criteria for determining a success must be (1) specific and measurable and (2) representative of the capability needed rather than the capability available.
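The distinction drawn above—between exit criteria with specific, quantitative thresholds and goals that cannot be scored—can be sketched as a simple check. This is purely an illustrative sketch; the criterion names and every numeric value below are invented for the example and are not taken from the Army's demonstration plan.

```python
# Illustrative only: hypothetical exit criteria showing why a
# demonstrated value can be judged against a numeric threshold,
# while a goal with no threshold cannot be scored objectively.
# All names and numbers here are invented, not the Army's.

criteria = {
    # Specific, measurable criteria: demonstrated value vs. minimum.
    "missile_range_km": {"minimum": 15.0, "demonstrated": 15.5},
    "missiles_per_fire_unit": {"minimum": 8, "demonstrated": 8},
    # Vague criterion: no threshold, so success is a judgment call.
    "target_identification": {
        "goal": "gunner recognition without diverting the missile"},
}

def evaluate(criteria):
    """Score each criterion as met / not met, or flag it as unmeasurable."""
    results = {}
    for name, c in criteria.items():
        if "minimum" in c:
            results[name] = ("met" if c["demonstrated"] >= c["minimum"]
                             else "not met")
        else:
            # No standard to measure against: only subjective judgment.
            results[name] = "unmeasurable"
    return results

print(evaluate(criteria))
```

The point of the sketch is that a criterion without a threshold falls through to the "unmeasurable" branch: no amount of test data can turn it into an objective pass/fail result, which is the report's argument for setting specific values before the demonstrations begin.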
In our opinion, if the military value of the program is to be judged, the criteria for measuring that value, including specific performance of the missile, should be established in advance of the tests rather than relying on subjective judgment of success afterward. Future EFOG-M Acquisition Could Be Shortened ACTD programs are designed to shorten the time required to obtain operating capability. But, when asked where EFOG-M would enter the acquisition process if a larger procurement is desired, the Under Secretary of Defense for Advanced Technology said that it depends upon the quality of the ACTD—it could enter at production or it could go back to the beginning of engineering and manufacturing development. However, since the ACTD is scheduled for 6 years, it appears to us that, unless engineering and manufacturing development is greatly abbreviated, entering the process at that phase would accomplish little toward shortening the acquisition process. One shortening strategy could involve conducting tests and evaluations during the limited acquisition in such a fashion as to prevent duplication during a larger procurement. For normal Army acquisition programs, development testers (Army Test and Evaluation Command) plan and conduct developmental testing and provide safety release of all systems; independent evaluators or assessors (Army Materiel Systems Analysis Activity or Test and Evaluation Command) determine the degree to which the technical parameters of the system have been achieved; and operational testers and independent operational evaluators (Army Operational Test and Evaluation Command) conduct operational tests and address the operational effectiveness and suitability of the system. However, the roles of development testers, independent evaluators, and operational testers and evaluators in the RFPI demonstration and EFOG-M tests and evaluations are not well defined at this time.
The RFPI ACTD Management Plan is endorsed by the Test and Evaluation Command, but the plan does not specify the Command’s role or the role of other independent testers in the demonstration. More detailed draft plans for conducting EFOG-M tests, conducting the demonstrations, and acquiring the EFOG-M limited quantities also do not identify the specific roles. Moreover, discussions with independent testers and evaluators and with EFOG-M management officials provided little additional definitive information about the role of the independent testers and evaluators. According to EFOG-M management officials, the contractor has prepared a draft master test plan for the limited acquisition, and the contractor will be responsible for the tests. Project test officials have sent the plan to the independent testers and evaluators for comment, but their approval is not required. The project manager will approve the test plan, and will consider the independent comments. Project management officials said that the testers and evaluators would be invited to observe the tests, but not control them. However, there are no formal agreements with independent testers and evaluators as to (1) their role in the testing and evaluation of EFOG-M or (2) the amount of testing and independent tester and evaluator involvement required to prevent retesting and reevaluating the system if a larger quantity is desired. All acknowledge receiving the contractor’s master test plan. However, the Army Materiel Systems Analysis Activity, for example, is only currently attempting to define its role in ACTD programs. Its representatives have participated in RFPI and EFOG-M discussions, and they plan to provide some informal evaluation. Army Test and Evaluation Command representatives have been informed they will be responsible for safety tests, and they are actively attempting to define their involvement.
Operational Test and Evaluation Command officials are aware of the RFPI and EFOG-M programs, but they have not yet defined their role in the programs. They believe they will be involved at the appropriate time. One RFPI ACTD manager has begun efforts to provide coordinated evaluation for the virtual prototype evaluation. If, in order to accomplish the ACTD objective, the Army initiates strategies to ensure that the ACTD reduces the time required to acquire a larger quantity of systems, we believe there should be assurances that required tests and evaluations of the system are conducted in such a fashion during the ACTD program to preclude the need to repeat the tests and evaluations to support a larger procurement. Resources for Fielding System Beyond the ACTD Are Not Ensured Because of the early stage of the ACTD program, the Army has not yet planned for the personnel and funds to support, operate, and maintain the EFOG-M beyond the ACTD program. In addition, the Army has not yet determined whether a deployment of the residual equipment would be cost-effective. According to Army officials, the ACTD could result in (1) leaving the EFOG-M residual equipment deployed with a combat unit but not purchasing additional systems or (2) purchasing a much larger quantity of EFOG-Ms—possibly to equip the entire early entry force. Before making decisions regarding retaining the residual deployment or a larger deployment, the Army should ensure that it has the force structure and funding needed to operate, support, and maintain EFOG-M beyond the ACTD program and that the deployment is cost-effective. For the extended user evaluation, the EFOG-M will be assigned to a company consisting of 3 platoons with a total of 58 personnel. Each platoon will have 1 platoon leader vehicle and 4 EFOG-M fire units (12 per company), and the company will be assigned support vehicles for resupply of ammunition and fuel. The EFOG-M contractor will support and maintain the system during the period.
Training and Doctrine Command officials informed us that the company will perform its normal activities during the evaluation. For example, if the unit went to training, it would train with the EFOG-M. If the unit were deployed for a military contingency, it would deploy with the EFOG-M as a part of the force. The Army Forces Command will provide the personnel to operate and support the systems during the user evaluation, and the RFPI program management office will fund the supporting contractor. However, Training and Doctrine Command officials informed us that funding or support beyond the 2-year extended user evaluation period has not been planned for the residual quantity or for a larger procurement. They said such plans would be premature since decisions have not been made regarding retaining the residual quantity or procuring a larger amount. In addition, although retaining the residual quantity without a larger procurement is an option, at this time the Army has not examined the cost-effectiveness of such a deployment. For example, we found no evidence the Army has compared (1) the cost of personnel to operate the system and the cost to establish or contract for maintenance and logistics support with (2) the cost to accomplish the mission with other alternatives. An Army official said the Army plans to make these comparisons during the ACTD. We believe the Army should ensure that such cost-effectiveness studies are performed as well as ensure that a supporting/operating force is available before making decisions regarding retaining the residual deployment. In addition, before making decisions regarding a larger deployment, DOD should ensure that the Army has the force structure and funding planned to operate, support, and maintain the larger procurement. 
Recommendations We recommend that, before deciding to either acquire more EFOG-Ms or retain the limited quantity beyond the user evaluation, the Secretary of Defense require the Army to prepare (1) a formal EFOG-M requirements document and (2) analyses comparing EFOG-M’s cost and operational effectiveness with other alternatives for satisfying the requirement, including the weapons of other services if appropriate. We recommend that the Secretary of Defense establish measurable exit criteria regarding the most critical EFOG-M performance issues before beginning the tests, demonstrations, and evaluations. We also recommend that the Secretary of Defense evaluate the feasibility and costs of performing the tests and evaluations to be conducted during the limited procurement in such a fashion to preclude the need to repeat them if a larger procurement is desired. We further recommend that, before requesting appropriations to support and operate the EFOG-M equipment beyond the extended user evaluation period, the Secretary of Defense require the Army to provide evidence that such a deployment would be cost-effective. In addition, before requesting funds for a larger procurement, we recommend that the Secretary of Defense ensure that the Army has planned sufficient funding and personnel to support, operate, and maintain the larger procurement. Agency Comments and Our Evaluation In commenting on a draft of this report, DOD said the report contained many useful comments and observations and it partially agreed with the recommendations. However, it did not agree with the findings because it believes the report treats EFOG-M as a normal acquisition program instead of as part of the RFPI ACTD. We disagree. The report is directed toward improving DOD’s management of acquiring EFOG-M for the RFPI ACTD, demonstrating EFOG-M’s utility, and evaluating its military value. 
DOD partially agreed with our draft recommendation to prepare a formal requirements document and conduct analyses comparing EFOG-M cost- and operational effectiveness with other alternatives by the end of the force-on-force demonstration. DOD stated that it would prepare a formal cost- and operational effectiveness analysis and statement of requirement if the results of the ACTD indicate that a larger quantity of EFOG-M should be acquired. However, it believed that the timing should be keyed to the transition decision. Based on DOD’s comments, we modified the recommendation to provide more flexibility in the timing of establishing requirements and conducting a cost- and operational effectiveness analysis. DOD agreed with the modified recommendation. DOD did not agree with our draft recommendation to establish measurable exit criteria regarding the most critical EFOG-M performance issues. DOD stated that exit criteria are not appropriate for use with an ACTD. It further stated that appropriate testing would be performed to characterize performance and that required levels of performance would be established at the conclusion of the ACTD. We disagree with DOD. The Army has already established exit criteria for EFOG-M, and the RFPI ACTD management plan recognizes that most of the systems (including the EFOG-M) have approved exit criteria that describe the technical goals for each system. Our recommendation is directed toward making some of these technical goals more specific and measurable. We continue to believe that measurable critical levels of performance should be established before beginning the tests, demonstrations, and evaluations. Because of a misinterpretation, DOD partially agreed with our draft recommendation to evaluate the feasibility and costs of performing sufficient tests and evaluations during the limited procurement to preclude the need to duplicate them during a larger procurement. DOD concluded that we wanted them to expand the testing program.
However, our intent was to preclude the need to repeat tests to support a larger procurement. Therefore, we modified the recommendation to prevent any misunderstanding. DOD agreed to provide evidence that the deployment of EFOG-M would be cost-effective before requesting appropriations to support and operate the EFOG-M equipment beyond the extended user evaluation period. DOD stated that the results of the RFPI ACTD would include an analysis of the cost-effectiveness of limited fielding with the inventory procured for the ACTD as well as for an expanded deployment and that any decision to procure additional units would include full consideration of funding and personnel levels required to operate and support the expanded deployment. The DOD response and our comments are included in appendix III. We are sending copies of the report to the Secretaries of Defense and the Army and the Director, Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. EFOG-M Exit Criteria (continued) Engage targets not in line of sight; sling transportable by CH-47D helicopter in a march order configuration; sling transportable by UH-60 helicopter (2 lifts). Scope and Methodology We obtained information regarding the purposes of the Rapid Force Projection Initiative (RFPI) Advanced Concept Technology Demonstration (ACTD) by (1) reviewing the RFPI ACTD management plan and (2) discussing the matter with the Deputy Under Secretary of Defense for Advanced Technology; the Director of Technology, Office of the Assistant Secretary of the Army for Research, Development, and Acquisition; and officials from the RFPI Program Office, U.S. Army Missile Command.
We obtained information regarding the Enhanced Fiber Optic Guided Missile (EFOG-M) system’s exit criteria by reviewing the EFOG-M Advanced Technology Plan and interviewing officials from the Non-Line of Sight Project Office (responsible for managing the EFOG-M program), Program Executive Office for Tactical Missiles. In addition, we obtained information regarding demonstration, test, and evaluation plans from discussions with RFPI and EFOG-M project officials and officials from the (1) Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, Maryland; (2) Army Test and Evaluation Command, Aberdeen Proving Ground and Redstone Arsenal, Alabama; and (3) Operational Test and Evaluation Command, Alexandria, Virginia. We also obtained information regarding EFOG-M system requirements, force structure requirements, and fielding plans from the U.S. Army Training and Doctrine Command’s System Manager for Antitank Missiles and the Dismounted Battlespace Battle Laboratory, Fort Benning, Georgia, and the Early Entry Lethality and Survivability Battle Laboratory, Fort Monroe, Virginia. We conducted our review from September 1994 through July 1995 in accordance with generally accepted government auditing standards. Comments From the Department of Defense The following are GAO’s comments on the Department of Defense’s (DOD) letter dated September 15, 1995. GAO Comments 1. The report does not focus on the EFOG-M program as a normal acquisition program. The report is directed toward improving DOD’s management of acquiring a limited number of EFOG-Ms for the RFPI ACTD. For example, we believe that the recommendation regarding the formal agreed-upon requirement is appropriate because requirements and/or support for three EFOG-M predecessors have disappeared after considerable effort and expenditure of funds. 2. The report does not ignore the primary thrust of ACTDs.
The draft recommendation was directed at establishing an EFOG-M requirement by the end of the force-on-force demonstration in mid-1998, or nearly 4 years into the ACTD program, not at its inception. Our intent was to ensure that the Army validated its requirement for EFOG-M before deciding whether to either acquire a larger quantity of EFOG-Ms or retain the residual ACTD quantity after the 2-year evaluation. Based on DOD’s comments, we modified our recommendation to permit more flexibility in the timing and to allow an even greater user evaluation. 3. We disagree that requirements, exit criteria, and cost-effectiveness analyses must be products of an ACTD. We addressed the importance of exit criteria in the agency comments and evaluation section of the report and the importance of requirements in comment 1. A cost-effectiveness analysis can be performed at any time, not just at the end of the ACTD. 4. The report does not recommend force structure planning at this time; however, it does recommend that such planning occur before a decision is made to either acquire a larger quantity or retain the limited quantity after the 2-year evaluation. DOD agreed with the recommendation. 5. The report neither addresses changes in threat nor prohibits exploring EFOG-M’s effectiveness under early entry conditions. However, as modified, it recommends an agreed-upon requirement before making a decision to either procure a larger quantity or retain the limited quantity. 6. We do not judge EFOG-M because of its history; but, at the same time, we believe that history should be used to assist in making good management decisions. 7. Our review was not designed to evaluate the ACTD process, but rather to examine selected aspects of the acquisition of the Army’s EFOG-M system. Therefore, we cannot comment on the benefits of ACTD programs. 8.
Regarding critical decisions, we modified our recommendations to permit more flexibility in establishing the requirement; however, we still believe that a requirement should be established before decisions are made regarding a larger procurement or retaining a limited quantity. We also believe that specific measurable exit criteria, or standards for performance, should be established before tests, evaluations, and demonstrations. 9. DOD’s comments and our evaluation are included in the body of the report. Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. Atlanta Field Office Thomas W. Gilliam, Evaluator-in-Charge Erin B. Baker, Evaluator
GAO reviewed the Army's plans for acquiring the Enhanced Fiber Optic Guided Missile (EFOG-M) system. GAO found that: (1) the Army lacks a formal requirement for EFOG-M and has not prepared comprehensive comparative cost studies because requirements documents and analyses are not normally required for Advanced Concept Technology Demonstration (ACTD) programs; (2) the Army should develop the requirement documents because of its prior difficulty in justifying the systems; (3) Congress has required the Army to certify that the requirement and analyses exist by December 1, 1995; (4) the Army has not fully defined EFOG-M performance criteria to evaluate the system's military value; (5) the ACTD program may not shorten the EFOG-M acquisition process unless innovative strategies are devised and formal testing agreements are reached; and (6) resources are not available to support limited fielding of EFOG-M after the 2-year ACTD evaluation period.
Background EVM is a project management tool that, when properly used, can provide accurate assessments of project progress, produce early warning signs of impending schedule delays and cost overruns, and provide unbiased estimates of anticipated costs at completion. Pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion is a challenge for most projects. Without such information, managers can have a distorted view of a project’s status and risks. EVM also allows individuals outside the project to see a standardized metric describing the cost and schedule performance of that particular project and compare it consistently with other projects. EVM measures the value of work accomplished in a given period and compares it with the planned value of work scheduled for that period and with the actual cost of work accomplished. Differences in these values are measured in both cost and schedule variances. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a $1.7 million negative cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed with the value of work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month but was budgeted to complete $10 million worth of work, there would be a $5 million negative schedule variance. Earned value provides information that is necessary for understanding the health of a project and an objective view of project status. Cost and schedule variances can also be used in estimating the cost and time needed to complete the project. 
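The variance arithmetic described above is simple enough to sketch directly. The minimal example below is illustrative only (it is not drawn from any NASA or contractor system) and reuses the report's own dollar figures, in millions:

```python
# Earned value variance calculations, as described in the text.
# All figures are in millions of dollars.

def cost_variance(earned_value, actual_cost):
    """Earned value of work performed minus what that work actually cost."""
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    """Earned value of work performed minus the value of work scheduled."""
    return earned_value - planned_value

# $5 million of work completed that actually cost $6.7 million:
cv = cost_variance(5.0, 6.7)
print(round(cv, 1))   # -1.7 => $1.7 million negative cost variance

# $5 million of work completed against $10 million budgeted for the period:
sv = schedule_variance(5.0, 10.0)
print(sv)             # -5.0 => $5 million negative schedule variance
```

Negative results flag cost overruns or schedule slippage worth management attention; positive results indicate work done under cost or ahead of schedule.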
While some data that an EVM system produces are retrospective and indicate performance to date, EVM data can also be used to predict future performance. For example, estimates at completion for a project can be calculated by using efficiency indices, which are based on a project’s past cost and schedule performance. See appendix II for additional information on the importance of EVM. A project’s EVM data comes from multiple sources. For example, each contractor that supports a project will produce and deliver EVM reports to the project for the work they and their subcontractors perform, if the contract so requires. In addition, the project will collect EVM information for the work that it performs in-house at a NASA center. All of this lower level EVM data can then be consolidated at the project level to produce a project level EVM report. Pulling together EVM data from multiple levels into a project level report gives the project a comprehensive outlook of its cost and schedule, and provides the project manager with early warning of potential cost and schedule overruns. EVM has evolved from an industrial engineering tool to a government and industry best practice, providing improved information to conduct oversight of acquisition programs. As such, it is guided by industry best practices and standards, and is required by regulations and requirements at the federal government and the agency level at NASA. These requirements and standards are summarized in table 1 below. Almost two decades of NASA’s past efforts to improve its use of earned value management have had uneven success. An EVM Focal Point Council was created in 1996 and focal points were designated at each NASA center and the Office of Procurement and the Office of the Chief Financial Officer to provide an open forum to share experiences and develop a network of support within the NASA EVM community.
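The forward-looking use of efficiency indices mentioned above can be illustrated with the common index-based formulas. The formulation shown (EAC = AC + (BAC - EV)/CPI) is one standard variant among several, and the dollar figures are hypothetical:

```python
# Projecting an estimate at completion (EAC) from past performance,
# using cost and schedule efficiency indices. Figures in millions; all
# values here are hypothetical, for illustration only.

def cpi(ev, ac):
    """Cost performance index: value earned per dollar actually spent."""
    return ev / ac

def spi(ev, pv):
    """Schedule performance index: work accomplished vs. work planned."""
    return ev / pv

def eac_cpi(bac, ev, ac):
    """EAC assuming the cost efficiency to date continues for the rest of the work."""
    return ac + (bac - ev) / cpi(ev, ac)

# Hypothetical project: $100M budget at completion, $40M earned, $50M spent,
# $80M of work planned to date.
print(round(cpi(40, 50), 2))        # 0.8 -- earning 80 cents per dollar spent
print(round(spi(40, 80), 2))        # 0.5 -- half the planned work accomplished
print(round(eac_cpi(100, 40, 50)))  # 125 -- a projected $25M overrun
```

Because the index extrapolates past efficiency, a CPI below 1.0 grows the remaining work's cost proportionally, which is why the projected total here exceeds the $100M budget.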
In 1997, the agency issued NASA Policy Directive 9501.3, “Earned Value Management Performance,” which established the basis for applying EVM to NASA contracts. Prior to this policy, centers used their own individual policies on performance measurement systems. However, in 1999, the NASA Inspector General reported that EVM policy was not consolidated as an overall program management responsibility and that the Agency Program Management Council did not receive comprehensive EVM information. As a result, in 2003, NASA shifted responsibility for the EVM policy from the Office of the Chief Financial Officer to the Office of the Chief Engineer to emphasize EVM as a project management tool, rather than a financial management tool. In 2004, GAO reported that only 2 of 10 NASA projects reviewed used EVM consistently and appropriately. Several barriers to EVM implementation were identified, such as lack of reliable financial data, trained EVM staff, data analysis tools, and incentives. Among other things, GAO recommended that NASA take action to ensure that a true EVM system is used as an organizational management tool to bring cost to the forefront in NASA’s decision-making process and that acquisition and EVM management policies and procedures be enforced. In response to our recommendations, NASA stated that it was updating NASA Procedural Requirements 7120.5, its program and project management processes and requirements policy, to improve its cost estimating and ensure that its cost estimate and earned value analyses were effectively used, and the updated policy was issued in 2005. Similar to other agencies in our 2009 report on governmentwide use of EVM, we reported on weaknesses in NASA’s EVM policies and practices and recommended that the agency modify its policies governing EVM to ensure that they are consistent with best practices.
In particular, we found problems with the EVM training requirements for personnel responsible for investment oversight and management responsibilities. In addition, NASA’s policy for revising project cost and schedule baselines did not have adequately defined criteria on the acceptable reasons for permitting a rebaselining. We also found weaknesses in how the NASA projects we reviewed implemented EVM and managed their negative performance trends. NASA acknowledged the identified weaknesses and stated that it was revising its NASA procedural requirements for programs and projects to include expanded and strengthened policies governing EVM application and processes, and revised policies for rebaselining projects. In January 2010, the Agency Program Management Council approved the funding for the EVM capability project to develop an EVM system that complies with the guidelines in ANSI/EIA-748. Another goal of the project was to determine whether it was feasible to implement a single EVM system that would integrate the scope, schedule, and budget of EVM data for NASA’s in-house managed efforts and contractor data across the agency. According to agency officials, NASA invested about $2 million into the capability project to pilot the EVM system through two projects: the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) and the Constellation Extra-Vehicular Activity project. Through the pilots, the capability project demonstrated that an agency-wide EVM system was feasible. As a result, a finalized set of processes, tools, guidance, and training products that compose NASA’s new EVM system was developed. This new system was peer reviewed and approved by a panel of EVM experts. 
NASA Projects Have Not Consistently Implemented Key EVM Practices and Most Did Not Have Access to Reliable EVM Data Our assessment of 10 major spaceflight projects showed that NASA has not yet fully implemented EVM and thus is not taking full advantage of an important tool that could help reduce acquisition risk. GAO found that the projects had shortfalls in two of the three fundamental practices that we assessed. Specifically, we found that half of the projects did not use an EVM system that was certified as compliant with the ANSI/EIA-748 standard. Most of the projects conducted an integrated baseline review (IBR), a practice that ensures the performance measurement baseline reflects all requirements and that resources are adequate to complete the work. Specifically, 9 projects conducted an IBR for their overall efforts; however, the Stratospheric Observatory for Infrared Astronomy (SOFIA) project office only conducted an IBR of one of its major contractors. In addition, we found that only 4 of the 10 projects had established formal independent surveillance reviews to ensure that key elements of the EVM process were maintained over time so that the data produced by the system provided timely indications of actual or potential problems. For the 6 projects that did not have formal independent surveillance in place, each provided evidence that they instituted monthly EVM data reviews, which according to project officials, helps them to continually monitor cost and schedule performance. However, the rigor of both the formal and informal surveillance reviews is questionable given the numerous EVM data anomalies we found in the monthly EVM reports. Specifically, we found many unexplained anomalies, such as the presence of negative numbers or missing data, which caused us to question the reliability of the data. Out of the 10 projects we reviewed, we found that just 3 projects had reliable EVM data while the remaining 7 had only partially reliable data. 
Overall, the projects are using EVM, but NASA has not consistently implemented EVM across these projects. For example, we found that several projects were not implementing EVM at the project level, which is considered a best practice. Table 2 summarizes the performance of each of the 10 projects in meeting the three fundamental EVM practices and the reliability of the data. Of the 10 projects we reviewed, 4 projects had a certified EVM system, 3 did not, and 3 had a mixture in which some contractors and subcontractors had certified systems and some did not. When an EVM system is certified, the agency has assurance that the implemented system was validated for compliance with the ANSI/EIA-748 standard by independent and qualified staff and therefore can be considered to provide reliable and valid data from which to manage a project. The Global Precipitation Measurement (GPM), Tracking and Data Relay Satellite System (TDRS), Landsat Data Continuity Mission (LDCM), and James Webb Space Telescope (JWST) were the only projects that provided evidence that the contract performance reports provided came from EVM systems that were certified as compliant with the ANSI/EIA-748 standard. The Lunar Atmosphere and Dust Environment Explorer (LADEE), Magnetospheric Multiscale (MMS), and Radiation Belt Storm Probes (RBSP) projects did not have EVM systems that were certified to be compliant with the ANSI/EIA-748 standard. Finally, the Jet Propulsion Laboratory, a federally funded research and development center that the California Institute of Technology manages under a contract with NASA, was the only NASA Center with a certified EVM system. The Jet Propulsion Laboratory is responsible for managing the Orbiting Carbon Observatory 2 (OCO-2) project. The Mars Atmosphere and Volatile Evolution Mission (MAVEN) and SOFIA prime contractors also had certified systems; however, their project offices did not. NASA does not require a certified EVM system for its in-house work.
Using the project’s integrated master schedule and contract performance reports, we assessed the EVM data provided by the projects against selected fundamental ANSI/EIA guidelines to determine the extent to which each project’s EVM system, whether certified or not, was meeting them. The guidelines we reviewed included whether the work breakdown structure (WBS)—which provides the basis of the project schedule—was consistent between the EVM report and the schedule, whether the schedule identified significant task interdependencies, and whether the project had identified a time-phased budget baseline for tracking cost and schedule variances. As shown in figure 1, a work breakdown structure breaks down product-oriented elements into a hierarchical structure that shows how elements relate to one another as well as to the overall end product. By subdividing a project into smaller elements, management can more easily plan and schedule the program’s activities and assign responsibility for the work. We found that even for the projects that had certified systems, there were problems with consistency between the WBS and the EVM report and the schedule. For example, we found discrepancies in the hierarchical structure and numbering of WBS elements for JWST, an $8.8 billion project. Specifically, the project’s WBS dictionary showed mission assurance efforts numbered differently than contractor reports for two contractors, each of which had mission assurance labeled with different WBS numbers. NASA officials explained that neither the spacecraft nor the near infrared camera contractor was required to follow the project-level WBS structure or numbering scheme. They added that while it is not a requirement for the project and contractor WBSs to be the same, it is recommended that the prime contractor lower-level WBS numbering scheme be consistent with the overall project WBS numbering format.
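The numbering consistency NASA officials recommend could be checked mechanically. The sketch below uses made-up WBS numbers and element names, not JWST's actual structure:

```python
# Checking that a contractor's EVM report uses the same WBS numbering
# as the project-level WBS dictionary. All numbers/names are invented.

project_wbs = {
    "1.0": "Spacecraft",
    "1.1": "Structure",
    "1.2": "Mission Assurance",
}

# Hypothetical contractor report that numbers mission assurance differently.
contractor_report = {
    "1.1": "Structure",
    "1.5": "Mission Assurance",
}

def wbs_mismatches(project, report):
    """Return report elements whose number/name pair disagrees with the project WBS."""
    return {num: name for num, name in report.items()
            if project.get(num) != name}

print(wbs_mismatches(project_wbs, contractor_report))  # {'1.5': 'Mission Assurance'}
```

In practice such a check would run over the full project WBS dictionary and each contractor's report, flagging every element whose numbering must be reconciled before cost and EVM data can be rolled up to the project level.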
Consistent numbering allows easier total project integration of cost and EVM data for project reporting. Consistency of the WBS element between the cost estimate and the schedule facilitates updating the cost estimate with actual costs and ensures there is correlation between the cost estimate and schedule. Our review of the project schedules also revealed that about half of the schedules were missing predecessor and/or successor dependencies and had constraints that prevented the schedule from responding properly to updates. Since the schedule is the foundation for the EVM baseline, it must be properly sequenced. This means knowing how one activity (the predecessor) affects another (the successor) and how each affects the critical path. When the schedule is not sequenced correctly, the reliability of the EVM data is called into question. Our review found that the MMS project was missing dependencies for 31 percent of the remaining activities on its instrument suite contract and that 36 percent of the remaining activities for the instrument suite were constrained. Due to the major sequencing issues in the MMS instrument schedule, we questioned the reliability of the overall network and the schedule’s ability to correctly calculate float values and the critical path. MMS project officials said they believed many of the constrained activities we found were not valid because they reside in another schedule. In addition, officials said some of the constraints found are in the Harness area, and if removed, these constraints would have no effect on the overall schedule. Furthermore, MMS officials said some of the sequencing issues may be caused by manual integration because some instrument provider schedules are in Microsoft Project and others are in Primavera, and therefore it is not possible to ensure all tasks have been linked properly.
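A dependency and constraint screen of the sort applied in this analysis can be sketched as follows. The activity fields and counting rules are simplifying assumptions, not the MMS project's actual health-check procedure:

```python
# Flag remaining (incomplete) schedule activities that lack predecessor or
# successor logic, or that carry date constraints. Illustrative sketch only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Activity:
    name: str
    complete: bool = False
    predecessors: List[str] = field(default_factory=list)
    successors: List[str] = field(default_factory=list)
    constraint: Optional[str] = None   # e.g. "start-no-earlier-than"

def health_check(activities):
    """Percent of remaining activities with missing logic or constraints."""
    remaining = [a for a in activities if not a.complete]
    missing = [a for a in remaining if not a.predecessors or not a.successors]
    constrained = [a for a in remaining if a.constraint]
    n = len(remaining) or 1
    return {"pct_missing_logic": 100 * len(missing) // n,
            "pct_constrained": 100 * len(constrained) // n}

acts = [
    Activity("A", predecessors=["Start"], successors=["B"]),
    Activity("B", predecessors=["A"], successors=[]),     # missing successor
    Activity("C", predecessors=[], successors=["D"],
             constraint="start-no-earlier-than"),          # missing pred, constrained
    Activity("D", complete=True),                          # excluded: complete
]

print(health_check(acts))  # {'pct_missing_logic': 66, 'pct_constrained': 33}
```

A real screen would also exclude level-of-effort activities from the counts, as was done in the analysis described in the text, since such activities legitimately lack network logic.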
As part of their schedule health check process, the MMS project scheduler tests the schedule for missing dependencies, logic errors, and reasonable durations, and the results are shared with the project office and the contractor so that appropriate action can be taken. However, when we removed the 15 level-of-effort type activities from the missing dependencies count, the schedule still showed 28 percent of the remaining activities missing dependencies. Similarly, when we removed the 14 level-of-effort type activities and 4 Harness activities from the constraint count, the schedule still showed that 33 percent of the remaining activities were constrained. MMS program officials said that they have a process in place to manage float and the critical path; however, the schedule we reviewed still showed significant sequencing issues. Finally, we found that 4 of the 14 schedules we analyzed were not resource loaded. This means that the schedule did not have the required labor, materials, equipment, and other resources assigned to the appropriate activities. When the schedule is not resource loaded, costs need to be spread over time using some other method that may not be as straightforward as having the costs integrated directly within the schedule. Having a resource-loaded schedule is a best practice for developing the time-phased budget baseline. The time-phased budget baseline represents the plan that performance is measured against for the life of a project. It takes into account that program activities occur in a sequenced order, based on finite resources, with budgets representing those resources spread over time. Deviations from the baseline identify areas where management should focus their attention. Majority of Projects Conducted an Integrated Baseline Review In keeping with best practices, 9 of the 10 projects conducted integrated baseline reviews.
An IBR is an evaluation of the performance measurement baseline—the foundation for an EVM system—to determine whether all project requirements have been addressed, risks have been identified, mitigation plans are in place, and available and planned resources are sufficient to complete the work. Conducting an IBR increases confidence that the performance measurement baseline provides reliable cost and schedule data for managing the project and that it projects accurate estimated costs at completion. OMB has endorsed the IBR as a critical process for risk management on major investments and requires agencies to conduct IBRs for all contracts that require EVM. Since an IBR’s goal is to verify that the technical baseline’s budget and schedule are adequate for performing the work, it offers many key benefits such as laying a solid foundation for successfully executing the project and enabling better understanding of the risks. Officials for the SOFIA project did not conduct an integrated baseline review at the project level; however, its prime contractor for the engineering and modification of the airborne observatory platform did conduct an integrated baseline review. According to project officials, the lack of a project-level IBR is related to the EVM system being implemented “on the fly” late in the development phase for SOFIA, as a result of an audit recommendation in 2010. However, project officials noted that the EVM baseline was established concurrently with an agency approved re-plan and Joint Cost and Schedule Confidence Level analysis in 2009 and 2010 and was reviewed by a Standing Review Board as part of that process. Majority of Projects Do Not Have a Comprehensive Surveillance System in Place Four of the 10 projects we assessed had a comprehensive EVM surveillance system in place. 
Of the remaining 6 projects, 1 had formal surveillance at the project level but its contractor did not, and 2 projects did not have a formal surveillance system at the project level, only their prime contractors did. The remaining 3 projects did not have any formal surveillance, but provided evidence that EVM data, such as cost and schedule variances, were being reviewed during their monthly status reviews. Beyond reviewing cost and schedule variances and variances at completion, formal surveillance reviews ensure that the processes and procedures continue to satisfy the ANSI/EIA EVM guidelines. A formal surveillance plan involves establishing an independent surveillance organization with members who have practical experience using EVM. This organization then conducts periodic surveillance reviews to ensure the integrity of the contractor’s EVM system and, where necessary, discusses corrective actions to mitigate risks and manage cost and schedule performance. Effective surveillance ensures that the key elements of the EVM process are maintained over time and on subsequent applications. NASA delegates surveillance of contractor EVM systems to the Defense Contract Management Agency (DCMA); however, NASA has no entity to perform independent surveillance reviews to ensure that the ANSI/EIA-748 standard is being met for EVM efforts performed in-house or by nonprofit organizations. Without an independent surveillance function, an organization’s ability to use EVM as intended may be hampered since surveillance monitors problems with the performance measurement baseline and EVM data. If the kinds of problems that formal surveillance can identify go undetected, EVM data may be distorted and may not be meaningful for decision making. Unreliable EVM Data Limit NASA’s Ability to Measure Project Performance Only 3 of the 10 projects we reviewed, MAVEN, RBSP, and OCO-2, produced fully reliable data for managing the project and reporting status.
The other projects only partially met the criterion to have an EVM system in place that produces reliable data. If done correctly, EVM data can provide an objective means for measuring project status and forecasting potential project cost overruns and schedule slippages so that timely action can be taken to minimize their impact. To do so, however, the underlying EVM data must be reliable, meaning that they are complete and accurate and all data anomalies are explained. In our analysis, we found multiple cases of data anomalies that caused us to question the reliability of the EVM data. For example, we found several EVM reports where a contractor reported that no work was planned or accomplished, but actual costs were incurred without an explanation in the variance analysis report to say why this happened. Additionally, we found cases where a contractor reported that work was planned and actual costs were incurred, but a negative amount of work was performed—work that was previously reported as completed was now reported as not completed. Further, we also found several instances where a project reported an estimate at completion but no budget at completion. Finally, we found instances of negative values in the EVM reports. When explanations were provided in the variance analysis reports, the reasons for these anomalies included the use of estimated rather than actual costs, or adjustments from prior periods due to mistakes or errors with the underlying EVM systems. For example, the SOFIA project said that many of the negative values in its EVM reports were due to over-reporting of earlier progress, mischarges by employees, delayed cost postings, and inappropriate use of charge codes. When there are data anomalies such as those we identified for SOFIA, the EVM data can become skewed and can distort true performance.
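The anomaly checks described above can be sketched in a few lines of code. The following is a minimal illustration, not NASA's or GAO's actual tooling; the record fields (bcws for planned work, bcwp for earned work, acwp for actual costs, plus bac and eac) are assumed names drawn from standard EVM terminology, and the figures are invented:

```python
# Minimal sketch of the anomaly checks described above, applied to a
# hypothetical monthly EVM record. Field names follow common EVM usage
# (BCWS, BCWP, ACWP, BAC, EAC) and are assumptions, not NASA's actual
# report format.

def flag_anomalies(record):
    """Return a list of anomaly descriptions for one monthly EVM record."""
    flags = []
    # Actual costs incurred although no work was planned or performed.
    if record["bcws"] == 0 and record["bcwp"] == 0 and record["acwp"] > 0:
        flags.append("actuals with no planned or earned work")
    # Negative earned value: previously completed work reported as undone.
    if record["bcwp"] < 0:
        flags.append("negative earned value")
    # Estimate at completion reported without a budget at completion.
    if record.get("eac") is not None and not record.get("bac"):
        flags.append("EAC reported without BAC")
    # Any other negative values in the report.
    for field in ("bcws", "acwp", "bac", "eac"):
        value = record.get(field)
        if value is not None and value < 0:
            flags.append(f"negative {field}")
    return flags

report = {"bcws": 0, "bcwp": 0, "acwp": 125_000, "bac": None, "eac": 900_000}
print(flag_anomalies(report))
```

Running checks like these over every monthly report, and requiring a variance analysis explanation whenever a flag is raised, is essentially the discipline that the reports we reviewed lacked.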
Variance thresholds try to quantify an acceptable range of deviation; variances that do not exceed a threshold are usually not a concern, while those that do are worthy of further inspection to determine the best course of action to minimize any negative impacts to the cost and schedule objectives. Data anomalies should be minimized, and the reason for each should be fully explained in the monthly EVM variance analysis reports. To do less limits the completeness and accuracy of the EVM data and makes the resulting variance determinations unreliable. While an industry standard for what constitutes an acceptable volume of anomalies does not exist, EVM experts in the public and private sectors stated that the occurrence of EVM data anomalies should be rare. For the four projects that provided subcontractor EVM data, we tried to map EVM data at the project level to lower-level EVM data at the subcontractor level. However, we were only able to successfully map the data for one of the projects. This mapping allows project managers to track cost and schedule by defined deliverables to more precisely identify which components are causing cost or schedule overruns and to more effectively mitigate the root cause of the overruns. When the reports do not allow for traceability, project managers are not able to effectively measure progress, use the reports to monitor and control costs based on the original baseline, or track where and why there were differences. For example, when we attempted to map the EVM data in the lower-level reports for MAVEN’s spacecraft, science operations center, remote sensing instrument, and Langmuir Probes and Waves instrument efforts to the overall MAVEN project EVM report, we were not able to see how the costs tracked from one report to another and therefore could not reconcile the costs between the reports. Such issues raise the question of which reports contain the true costs for these efforts.
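As an illustration of how such thresholds operate, the sketch below computes the standard EVM variances and flags any whose magnitude exceeds a percentage threshold. The 10 percent figure and the input values are hypothetical; actual thresholds are set per project or contract:

```python
# Hypothetical variance-threshold screen using standard EVM formulas:
# SV = BCWP - BCWS (schedule variance), CV = BCWP - ACWP (cost
# variance), VAC = BAC - EAC (variance at completion).

def variances(bcws, bcwp, acwp, bac, eac):
    return {"SV": bcwp - bcws, "CV": bcwp - acwp, "VAC": bac - eac}

def breaches(bcws, bcwp, acwp, bac, eac, threshold=0.10):
    """Return variances whose magnitude exceeds the threshold fraction
    of the usual base (BCWS for SV, BCWP for CV, BAC for VAC)."""
    v = variances(bcws, bcwp, acwp, bac, eac)
    bases = {"SV": bcws, "CV": bcwp, "VAC": bac}
    return {
        name: value
        for name, value in v.items()
        if bases[name] and abs(value) / abs(bases[name]) > threshold
    }

# A month with a cost overrun large enough to warrant explanation:
print(breaches(bcws=100.0, bcwp=90.0, acwp=110.0, bac=1000.0, eac=1050.0))
```

In this invented example, only the cost variance breaches the threshold, so it alone would require a variance analysis narrative for the month.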
However, MAVEN officials walked us through the process they use to ensure that lower-level reports map to the project-level reports. Furthermore, MAVEN officials said they do not mandate that their contractors follow a certain reporting format; instead, any adjustments necessary to ensure that the lower-level reports map to the project-level reports are made by the project office. Though the MAVEN project does not prescribe a standard reporting format, attempting to manually resolve incompatible pieces of data can become time-consuming and expensive and can lead to data reliability issues. We also had similar difficulty mapping various levels of EVM data for the MMS project. The MMS project was able to demonstrate how the Southwest Research Institute (SwRI) budget at completion in the lower-level report mapped to the SwRI budget at completion in the MMS project report, but because of the way the contractor submits its data, project officials said that the two reports will never match. Although the project was able to explain how the data tracked, again, attempting to manually resolve incompatible pieces of data can become time-consuming and can lead to data reliability issues. Since our review, MMS officials said the project is working to capture data at lower WBS levels, which will allow for a closer tie between the cost and schedule data.

Projects Have Not Consistently Applied EVM

All the projects we reviewed were using some EVM to manage their work. However, the extent to which EVM is implemented across NASA’s in-house projects and their contractors varies by project and center. For example, 3 of the 10 projects we reviewed did not report project-level EVM data. Implementing EVM at the project level rather than just for the contract is considered a best practice.
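The traceability problem discussed above amounts to checking that child WBS elements roll up to the totals in the project-level report. A minimal sketch of such a reconciliation, with invented WBS numbers and amounts, might look like this:

```python
# Sketch of a WBS rollup reconciliation: sum lower-level WBS costs
# (e.g., element "1.2.3") into parent totals ("1.2") and compare them
# against the project-level report. All identifiers and amounts here
# are hypothetical.

from collections import defaultdict

def rollup(lower_level):
    """Sum child WBS costs into their parent WBS elements."""
    totals = defaultdict(float)
    for wbs, cost in lower_level.items():
        parent = wbs.rsplit(".", 1)[0]
        totals[parent] += cost
    return dict(totals)

def reconcile(project_level, lower_level, tolerance=0.01):
    """Return parent elements whose children do not sum to the
    project-level figure, as (reported, rolled-up) pairs."""
    rolled = rollup(lower_level)
    return {
        wbs: (reported, rolled.get(wbs, 0.0))
        for wbs, reported in project_level.items()
        if abs(reported - rolled.get(wbs, 0.0)) > tolerance
    }

lower = {"1.1.1": 40.0, "1.1.2": 60.0, "1.2.1": 30.0}
project = {"1.1": 100.0, "1.2": 45.0}   # element 1.2 does not reconcile
print(reconcile(project, lower))
```

A check of this kind only works when both reports use the same WBS structure, which is why a common WBS between project-level and lower-level reports matters for traceability.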
In addition, OMB policy requires the use of an EVM system for both in-house and contractor work, and when there is both government and contractor work, the data from the two EVM systems must be consolidated at the reporting level for total program management and visibility. Integrating government and contractor cost, schedule, and performance status at the project level should result in better project execution through more effective management. In addition, some of the in-house projects we analyzed were only required to meet EVM principles and gather “EVM-like” data. Further, some contractors, such as the nonprofit organization managing the MMS instrument suite, do not report their EVM data in the standard contract performance report format because they are only required to meet the intent of the standard. Other contractors for the JWST and MAVEN projects are required to provide EVM reports that show cost and schedule data by WBS elements.

Cultural, Technical, and Other Challenges Seen as Impediments to EVM Implementation

NASA EVM focal points, headquarters officials, project representatives, and program executives cited cultural and technical challenges, as well as other challenges, as impediments to the effective use of EVM at the agency. NASA’s culture traditionally has focused on solving science and engineering challenges and not on monitoring cost and schedule data such as the data produced by an effective EVM system. Technical challenges were also cited as an impediment to effective EVM use, but opinions differed within NASA on the extent of their impact. The technical challenges cited involved difficulty in gathering sufficiently detailed data for timely inclusion and analysis in an EVM system. In addition, though NASA has not conducted an EVM skills gap analysis, NASA representatives said it is a challenge for the agency to implement EVM effectively due to a lack of sufficient staff with the skills and experience to analyze EVM data.
NASA Culture Seen as Not Valuing EVM

Almost half of the more than 30 NASA representatives we interviewed, including a large number of those charged with implementing EVM agencywide—known as EVM focal points—and officials from the Chief Engineer’s office, cited NASA’s culture as a challenge to the effective use of EVM at the agency. Specifically, several NASA representatives said that historically, NASA’s culture has not focused on or valued the kind of information that EVM can highlight. For example, a NASA EVM analyst said the culture has been focused on science and engineering and that accomplishment of that work has been the first priority for managers. Discussion of the cost of the work has been a secondary concern. A NASA EVM focal point described overcoming a culture that is based on applied and basic research rather than discrete tasks with discrete deliverables, which are among the requirements for effective EVM implementation. Further, a senior official at NASA headquarters told us that in project reviews, discussion of cost and schedule information, like EVM data and analysis, tends to be pushed to the very end of the review meeting and generally is not discussed in detail. Because EVM data are not universally valued within NASA, in some cases the data generated to satisfy a project’s EVM requirement may be of limited use. For example, one NASA EVM focal point said some managers are just “checking the box” with respect to using EVM. The task is performed, but the requirement to collect EVM data is viewed as a nuisance that ultimately does not provide worthwhile information. Nonetheless, several of those we interviewed said persistent inquiries about EVM data from senior management at headquarters, especially over the last couple of years, are having a positive impact on the culture and forcing projects to pay more attention to the data.
Technical and Other Challenges Cited as Making EVM Use More Difficult

The NASA representatives we interviewed also cited technical challenges as having an impact on the effective use of EVM, although their views varied on the extent of these challenges. For example, about half of the focal points we interviewed reported that a challenge to using an EVM system at NASA was aligning it with the agency’s accounting system, the SAP Core Financial System. One of the problems cited was that EVM data collection may require more detailed data than a project has collected for the agency’s accounting system, and this may require the use of estimated costs instead of actual costs. Estimating these costs can create additional work for the project, delay the production of EVM data, and limit the reliability of the EVM data that is produced. Nonetheless, a NASA manager said, projects could more effectively plan their work to better accommodate the accounting system. For example, NASA’s accounting system is set up to measure and report on labor in terms of full-time equivalents. A project, however, may have set up its earned value management system with a different measure for labor, such as productive hours. As a result, the accounting system cannot fill in the proper numbers for an earned value analysis, potentially causing more work for the project. Further, the EVM data could be less accurate due to the use of estimates rather than the actual figures. However, if the project had planned from the outset to have the same measure of labor as the accounting system, there would not be a problem having this data fit the EVM system. An Office of the Chief Financial Officer representative did not believe that the projects have consistently demonstrated that the accounting system is a problem, but nonetheless agreed that potential work-arounds and slight changes to processes are potential solutions for these issues.
The Office of the Chief Financial Officer has started an initiative to address the level of detail of the data and to improve the monitoring of contractor cost performance at levels that may be lower than the levels at which obligations are made and costs are reported in the financial system. A report on a NASA EVM pilot project noted that the greatest impediment to implementing EVM is cultural resistance, not technical challenges. Specifically, it noted that “it’s not the EVM Process. It’s not the EVM Tools. It’s not the SAP Accounting System. It is the NASA culture.” NASA’s use of contracts with one entity to provide goods or services to several different NASA projects was also cited as a challenge to the use of EVM. For example, NASA centers may have a contract with one firm to provide engineering support services. Multiple projects may seek services using a single task order on this contract. Because of the way the NASA accounting system is configured, this approach can create artificial variances when looking at EVM data on a month-to-month basis. According to a NASA representative, contractual requirements can correct these issues and allow for a closer accounting of the funds for EVM purposes, and in fact, one NASA center has already instituted such requirements in a new contract for services. NASA plans to address this issue as current service contracts expire, but it will take time for the new requirements that would provide the desired data to be implemented. Other challenges cited include the difficulty of gathering sound EVM data from nonprofit subcontractors, such as universities. One project, for example, reported that EVM data from nonprofit subcontractors were immature or nonexistent. The nonprofits may be doing a significant amount of work for a center but are not equipped to collect earned value data at the level of detail needed, Office of the Chief Financial Officer and center representatives reported.
EVM focal points said the problem of collecting EVM data from nonprofits to feed into a larger project-level EVM system could be mitigated through contract language that clearly specifies what data are required from contractors. According to these officials, this kind of language has been included in Jet Propulsion Laboratory contracts and has proven successful. Furthermore, NASA is concerned that if nonprofits and small businesses are required to have a fully compliant or certified EVM system, they may not be able to bid on the work. However, the Federal Acquisition Regulation is clear that no offeror can be eliminated from consideration for a contract award because the entity does not have a compliant or certified EVM system.

Understanding of EVM Varies Widely Across NASA

NASA representatives we interviewed said there was a need for improved abilities across the agency to analyze EVM data and implement EVM systems. Specifically, several focal points said the challenge for NASA is not so much in obtaining EVM data, because most of the information comes from the private contractors responsible for much of NASA’s work, but in having a staff that can analyze the data and integrate it at the project level. A senior NASA official also noted that the career civil servants who typically are the first level of review for EVM data do not have background or training in EVM and cannot conduct a sound analysis of the data. A project representative echoed this comment and noted a general awareness of EVM within projects but a shortage of the in-depth knowledge needed to understand EVM fundamentals and how to interpret the data it produces. For example, some projects seek to reset EVM baselines to match funding allocations, which thwarts efforts to examine cost and schedule trends. One EVM focal point told us it has been difficult to convince project managers that EVM can predict what will happen in their projects given the highly technical nature of their work.
For example, a senior manager of a program that experienced significant schedule delays and cost overruns stated that he is an “EVM skeptic” and that he does not see EVM data as helpful in tracking the performance of a project. Additionally, the employee skill sets available to analyze and implement EVM vary widely from center to center, headquarters officials said. In recent years, NASA has provided EVM training to a large number of employees; however, the agency has not conducted a skills gap analysis, which could help to determine the number of staff with EVM expertise and the extent of that expertise. NASA centers may have staff skill levels reflective of the level of EVM use at the center. Some centers have many projects producing EVM data while others may only rarely work on a project that uses EVM. Without a sufficient number of trained staff to analyze contractor data and implement in-house EVM efforts, NASA will likely continue to struggle to use EVM effectively as a valuable project management tool.

NASA Policy Is in Line with Best Practices but Implementation Remains the Challenge

NASA has undertaken several initiatives aimed at improving the agency’s use of EVM. For example, NASA strengthened its spaceflight management policy to require projects to comply with the 32 ANSI/EIA-748 guidelines and has developed the processes and tools for projects to meet this requirement through its new EVM system. While these are positive steps, the policy continues to lack a requirement for rigorous oversight or surveillance of how projects are implementing EVM, and NASA does not require projects to use the new EVM system to implement the EVM requirement of the revised policy. In addition, the issues that have impeded NASA’s ability to effectively implement EVM, such as its culture, are longstanding and, as a result, NASA has not had much success implementing EVM in the past.
The agency’s recent revision of NASA Procedural Requirements 7120.5—the policy that governs NASA’s spaceflight projects and contains project EVM requirements—strengthened the EVM requirements over prior versions of the policy. For example, the revised policy requires all spaceflight projects to demonstrate compliance with each of the 32 EVM guidelines as set forth in ANSI/EIA-748, whereas the prior policy only required projects to comply with seven high-level EVM principles. The new requirements took effect through the release of an interim directive on September 28, 2011 and have since been made final in NASA’s most recent update to 7120.5. As a result, projects meeting EVM reporting thresholds that enter implementation after that date are required to comply with the new requirement. According to an agency official, the Office of the Chief Engineer and the mission directorates will determine which projects that began development under the prior policy must comply with the new EVM requirements. At major milestones, Office of the Chief Engineer representatives will review whether the projects have implemented the 32 EVM guidelines. However, the new policy still only contains the minimum requirements for earned value management, such as the thresholds for implementing EVM and the requirement to comply with ANSI/EIA-748 guidelines. The policy does not require projects to implement formal independent surveillance of their EVM systems. Without effective surveillance, agencies cannot ensure they are meeting the ANSI/EIA-748 guidelines because internal management systems are not being reviewed to determine if they are providing reliable cost, schedule and technical performance data. In addition, effective surveillance helps pinpoint problems, and is useful for verifying the effectiveness of corrective action plans used to mitigate EVM system deficiencies. 
While projects are not required to implement formal independent surveillance, NASA does plan to conduct periodic surveillance of project EVM systems. For example, NASA plans to conduct EVM assessments at two key decision point life cycle reviews and through the Office of the Chief Engineer compliance surveys. While these methods will increase the agency’s surveillance efforts, best practices call for project-level surveillance to be an ongoing, continuous process conducted by an independent surveillance function. The policy also does not require projects to use NASA’s new EVM system, although the system was designed to help projects meet the ANSI/EIA-748 guidelines. We found that the system meets the intent of the ANSI/EIA-748 guidelines. Examples of how NASA’s EVM system will satisfy three key ANSI/EIA-748 guidelines are summarized in table 3 below. For the projects required to comply with the new policy, use of the agency-developed EVM system would meet the ANSI/EIA guidelines; however, some projects will be permitted to continue using their individual EVM systems as long as the 32 guidelines are met. According to agency officials, while future revisions to the policy may require use of the standardized agency-developed EVM system by all projects, at this time the agency does not plan to require projects to use the agency-developed system in order to meet the guidelines. Instead, senior managers will determine on a case-by-case basis whether a project will use the agency’s new EVM system. Currently, only the Space Launch System and ICESat-2 projects have been selected to implement the new EVM system. According to a senior agency official, the Agency Program Management Council approved a phased rollout of the new system because NASA does not have the resources to implement it agencywide.
For example, there are not enough NASA subject matter experts to provide the support needed by the projects when applying the new EVM system and to build the institutional capability at the centers. This approach aims to incrementally build the capacity to do EVM and to increase acceptance of EVM as the requirement for its use expands.

Strong Leadership Needed to Fully Implement EVM

Over the years, NASA has attempted to address its EVM shortcomings through a series of policy changes, but these efforts have failed to adequately address the cultural resistance to implementing EVM highlighted by many of the NASA officials we interviewed. NASA has made uneven progress since we reported in 2004 that the agency needed to improve its use of EVM as a project management tool. Furthermore, a 2008 NASA internal study noted that projects needed to be educated on the value of and approaches for using EVM and to be supported in setting up EVM within the projects early. Also, an internal agency briefing on EVM stated that a change management initiative would be necessary in order to successfully implement EVM at NASA centers. Our work has also shown that implementing a large-scale initiative, such as EVM, requires more than just policy changes. To see real change and, in effect, a cultural shift at NASA, top leadership must give employees a succinct and compelling reason why effective implementation of EVM is important. Articulating a compelling reason for implementing EVM enables employees and other stakeholders to understand the expected outcome of the management initiative and engenders not only their cooperation but also their ownership of the outcome; our work has shown that such an initiative can take at least 5 to 7 years to fully implement.
By having a policy that is not comprehensive, allowing projects to opt out of using the new EVM system, and not committing resources to adequately train staff, NASA continues to limit progress toward the cultural change needed to implement EVM. Without breaking through the cultural resistance to EVM and committing to efforts intended to strengthen the use of EVM, NASA is missing an opportunity to make full use of a key tool that could help it to manage its projects more effectively.

Conclusions

Implementing an effective earned value management system and using it across a large federal agency with well-established processes is without doubt a challenging task. However, NASA has made uneven progress to date. NASA acknowledges that EVM can be a valuable tool for monitoring project development and has initiated an effort to implement an agencywide system. Currently, only a few of the 10 major spaceflight projects we reviewed were able to produce reliable EVM data, raising concern that they cannot produce reliable estimates of cost at completion. Moreover, until the data are sufficiently reliable, NASA, as well as external stakeholders, loses valuable insights into project performance that EVM provides. A sound EVM system is not merely an accounting tool; it can alert managers to developing problems so that they can be proactive in reducing the project’s cost and schedule overruns. However, NASA is not making full use of a key tool that could help it address the cost and schedule issues that have kept NASA acquisition management on GAO’s high-risk list for more than 20 years. Though NASA’s recent efforts to improve its EVM capability and strengthen its policy are steps in the right direction, implementation—fully integrating EVM into management processes—has been the biggest challenge and remains so today.
NASA faces cultural and technical challenges that it must overcome to successfully implement an earned value system and to use the data it produces on a regular basis to inform decision-making. Managing change will be key if NASA’s latest effort to overcome these challenges and implement an agencywide EVM plan is to succeed. To accomplish effective earned value management, strong leadership is required to set an expectation that reliable and credible data are necessary to manage a successful project. This should be buttressed by a required, sound EVM policy and system and by a commitment of resources to enable staff. Without sustained momentum and commitment, NASA’s current efforts could suffer the same consequences as those in the past.

Recommendations for Executive Action

To improve NASA management and oversight of its spaceflight projects, we recommend that the NASA Administrator direct the appropriate offices to take the following four actions:

Establish a time frame by which all new spaceflight projects will be required to implement NASA’s newly developed EVM system, unless the project is proposing to use a certified system, to ensure that in-house efforts are compliant with ANSI/EIA-748. The time frame selected should take into account the need to increase NASA’s institutional capability for conducting EVM and analyzing and reporting the data.

Conduct an EVM skills gap analysis to identify areas requiring augmented capability across the agency. Based on the results of the assessment, develop a workforce training plan to address any deficiencies.

Develop an EVM change management plan to assist managers and employees throughout the agency with accepting and embracing earned value techniques while reducing the operational impact on the agency. The plan should include a strategy for having the agency’s senior leadership communicate their commitment to implementation of EVM.
To improve the reliability of project EVM data, NASA Procedural Requirements (NPR) 7120.5 should be modified to require projects to implement a formal surveillance program that:

Ensures anomalies in contractor-delivered and in-house monthly earned value management reports are identified and explained, and reports periodically to center and mission directorate leadership on relevant trends in the number of unexplained anomalies.

Ensures consistent use of WBSs for both the EVM report and the schedule.

Ensures that lower-level EVM data reconcile to project-level EVM data using the same WBS structure.

Improves underlying schedules so that they are properly sequenced using predecessor and successor dependencies and are free of constraints to the extent practicable so that the EVM baseline is reliable.

Agency Comments and Our Evaluation

We provided a draft of this report to NASA for comment. In its written comments, reproduced in appendix IV, NASA’s Chief Engineer stated that the agency concurred with two recommendations and partially concurred with two other recommendations. In particular, the agency agreed with our recommendation to perform an EVM skills gap analysis and develop a workforce training plan to address any deficiencies identified. To that end, NASA plans to conduct a skills gap assessment and to augment its EVM training program to address the gaps identified. In addition, the agency also concurred with our recommendation to develop an EVM change management plan and is planning to develop a strategy targeted at all levels of the workforce, from project team members to the agency’s leadership. The agency partially concurred with our recommendation that NASA establish a time frame by which all new spaceflight projects will be required to implement NASA’s newly developed EVM system, stating that it already requires projects to perform EVM with an ANSI/EIA-748-compliant system.
NASA stated that its phased rollout approach for implementing the agency’s EVM system is based on available resources, budgetary constraints, and institutional and project needs. However, NASA’s approach does not include a time frame for when projects will be required to use the new system. We recommended that NASA establish a time frame for rolling out the system to all projects because a large number of projects are not in compliance with NASA’s requirement, and very few are implementing the new EVM system. Using the newly developed EVM system could help ensure that NASA’s projects are using a system that is compliant with the ANSI/EIA standard. The agency also noted its disagreement with the notion that all projects, in particular those that have a skilled EVM workforce and a compliant system in place, should be forced to use the agency’s new system. Accordingly, we acknowledged in our report that there may be situations where a project should not be required to use the agency’s EVM system, such as when a project already uses a certified system or for current, ongoing projects. Furthermore, we reported that NASA lacks the appropriate level of surveillance of its projects’ EVM systems to monitor project adherence to the EVM standard; in addition, the extent to which EVM has been effectively implemented across NASA’s projects varies. If NASA chooses not to require projects to use its new system, it should take steps to ensure that it monitors their compliance with the standard. Finally, while we appreciate that NASA must balance its resources with its needs, the benefits that an effective EVM system can provide, such as allowing project managers to identify cost growth and take actions to stem further growth, warrant prioritization of resources to ensure earlier widespread implementation of NASA’s EVM system.
The agency also partially concurred with our recommendation that NPR 7120.5 be modified to require projects to implement a formal EVM surveillance program. Citing resource constraints, NASA commented that it does not plan to implement a formal surveillance program, but agreed that the reliability and utility of the EVM data need to be improved. As a result, the agency plans to establish a surveillance process, expand the workforce’s EVM skills, and provide analytical tools, including developing an EVM System Acceptance and Surveillance Guide. Furthermore, NASA said that it was not appropriate to incorporate the surveillance requirement in NPR 7120.5 because of the level of detail associated with requirements in that policy. The most important part of our recommendation is that EVM surveillance should be required to ensure better quality data. We reported that only 4 of the 10 projects we assessed had a comprehensive EVM surveillance system in place, that the others had limited or no surveillance being performed, and that only 3 of the 10 projects had fully reliable data. Without an effective surveillance program, NASA cannot ensure its projects are meeting the ANSI/EIA-748 standard because internal management systems are not being reviewed to determine if they are providing reliable cost, schedule, and technical performance data. In its response, NASA also noted that the project data we used in our report are over a year old and do not take into account progress made over the past year. We disagree and note that in the report we discuss progress the agency has made over the past year, such as strengthening the EVM requirements in its policy and developing its new EVM system. Furthermore, we did not solely rely on project EVM data to develop our findings. For example, interviews with project officials and additional documentation they provided further validated our findings.
Finally, it is important to note that NASA Acquisition Management has been on GAO’s High Risk list for many years due to the agency’s cost and schedule performance on its major projects. EVM is an important project management tool that can assist project managers in managing and assessing performance. Not addressing key issues that impact the availability of accurate and reliable data could lessen the usefulness of this key project management tool. NASA also provided technical comments, which have been addressed in the report, as appropriate. We are sending copies of this report to interested congressional committees, NASA’s Administrator, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Scope and Methodology

To determine to what extent NASA’s major spaceflight projects are using earned value management (EVM) to manage acquisitions, we reviewed all NASA major spaceflight projects with a life cycle cost of over $250 million that were in the implementation phase and thus required to report EVM. There were 13 projects that met these criteria. Of these, 2 projects had recently launched and the launch of a 3rd was imminent. These 3 were excluded from our assessment because the work on these projects was nearly complete. Collectively, the 10 projects we reviewed will cost over $14 billion to develop. Our review looked at EVM data for the period of August 2010 to August 2011. While the majority of the 10 projects we reviewed had at least 6 months of EVM data, a few did not because the project had only recently entered the implementation phase.
Additionally, some projects were undergoing a replan and, therefore, were not required to provide EVM data for certain periods of time. In particular, the Tracking and Data Relay Satellite Sustainment contract had 5 months of data, the Mars Atmosphere Volatile Evolution Mission project had only 3 months of data, and the Orbiting Carbon Observatory 2 Orbital contract had just 4 months of data. The James Webb Space Telescope contract had EVM data for the whole period; however, the contractor underwent a major replan in which all EVM data except for the reporting of actual costs were suspended from January 2011 to April 2011. Although the Global Precipitation Measurement (GPM) project provided EVM reports for the entire project, we did not conduct an analysis of the project EVM data, because the performance reports did not contain the detailed data we needed for our analysis. However, we were able to assess the performance data for GPM’s Microwave Imager Instrument. To determine cost and schedule performance for the selected major projects based on an evaluation of the earned value data, we analyzed project and contractor data and documentation including contract performance reports; project work breakdown structures; project schedules; integrated baseline review briefings; the extent to which surveillance of the EVM system was occurring; and monthly management briefings for the 10 major spaceflight projects. Specifically, we compared project documentation with EVM and scheduling best practices as identified in GAO’s Cost Estimating and Assessment Guide and Schedule Assessment Guide. To the extent practicable, we assessed how each of the 10 projects’ EVM data adhered to 3 of the American National Standards Institute’s (ANSI) and Electronic Industries Alliance’s (EIA) 32 guidelines. In addition, we assessed the projects against 3 fundamental EVM practices that we believe are necessary for maintaining a reliable EVM system, as identified in our cost guide.
We also analyzed the contract performance reports for each project to determine the level of data reliability. Specifically, we identified instances of the following: (1) negative planned value, earned value, or actual cost; (2) planned value and earned value without actual cost; (3) earned value and actual cost without planned value; (4) actual cost without planned value or earned value; (5) earned value without planned value and actual cost; (6) inconsistencies between the estimated cost at completion and the planned cost at completion; (7) actual cost exceeding estimated cost at completion; and (8) planned or earned values exceeding planned cost at completion. For the contracts that had more than 6 months of data, we used contract performance report data to generate our estimated overrun or underrun of the contract cost at completion, using formulas accepted by the EVM community and printed in the GAO Cost Estimating and Assessment Guide. To perform this analysis, we examined contractor performance reports over the period for which we had data to show trends in cost and schedule performance. We generated multiple projections of the contract cost at completion based on how much of the contract had been completed up to August 2011, or earlier for some projects. The ranges in the estimates at completion are driven by using different efficiency indices based on the project’s past cost and schedule performance to forecast the cost of the remaining work and adding that cost to the actual costs to date. The efficiency indices capture how the project has performed in the past and can be useful in predicting how it will perform in the future. We also analyzed monthly project management review briefings to support our analysis. Finally, we analyzed the earned value data contained in EVM performance reports obtained from the projects.
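The index-based estimates at completion described above can be illustrated with a short sketch. The function and all dollar figures below are hypothetical assumptions for illustration only, not data from the projects we reviewed; the formulas follow the generic index-based approach, EAC = actual cost + (budget at completion − earned value) / efficiency index.

```python
# Hedged sketch of index-based estimates at completion (EAC). All figures
# are hypothetical and in millions of dollars; nothing here is project data.

def eac_range(bac, pv, ev, ac):
    """Project a range of EACs using different efficiency indices."""
    cpi = ev / ac          # cost performance index (cost efficiency to date)
    spi = ev / pv          # schedule performance index
    remaining = bac - ev   # budgeted cost of work remaining
    return {
        "cpi": ac + remaining / cpi,                # past cost efficiency continues
        "cpi_x_spi": ac + remaining / (cpi * spi),  # schedule pressure compounds cost
        "weighted": ac + remaining / (0.8 * cpi + 0.2 * spi),
    }

projections = eac_range(bac=500.0, pv=260.0, ev=240.0, ac=275.0)
for name, eac in projections.items():
    print(f"{name:>10}: EAC ${eac:6.1f}M, variance at completion ${500.0 - eac:+6.1f}M")
```

Because each index assumes a different mix of past cost and schedule efficiency carrying forward into the remaining work, the projections bracket a range of likely outcomes rather than a single point estimate.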
To perform this analysis, we compared the cost of work completed with budgeted costs for scheduled work to show trends in cost and schedule performance. To assess the reliability of the cost data, in addition to electronically testing the data for anomalies, we also reviewed relevant project documentation and interviewed agency and project officials about the data. We then followed up on these anomalies with the project offices that manage each of the spaceflight projects by sharing our preliminary analysis for each of the 10 projects. When warranted, we updated our analyses based on the agency’s response and additional documentation provided to us. The data that we used were sufficiently reliable for how we portrayed them in our report, and we are making recommendations to the agency to improve NASA’s data reliability based on the findings discussed in our report. We did not test the adequacy of the agency or contractor accounting systems. To support and clarify information in our documentation reviews, we interviewed agency officials at NASA headquarters and EVM Focal Point Working Group members—the agency officials that are responsible for developing an integrated, consistent approach for implementing EVM throughout NASA, as well as addressing EVM review and surveillance issues and activities—at each center and the Human Exploration and Operations and Science mission directorates to discuss their roles as well as the extent to which EVM data is used to inform decision making.
We interviewed officials at NASA headquarters in Washington, D.C.; and officials from Ames Research Center in Moffett Field, California; Dryden Flight Research Center in Edwards, California; Glenn Research Center in Cleveland, Ohio; Goddard Space Flight Center in Greenbelt, Maryland; Johnson Space Center in Houston, Texas; the Jet Propulsion Laboratory in Pasadena, California; Kennedy Space Center in Florida; Langley Research Center in Hampton, Virginia; Marshall Space Flight Center in Huntsville, Alabama; and Stennis Space Center in Mississippi. Additionally, we received responses to questions regarding the EVM data from each of the 10 projects we selected for review. These questions addressed how EVM practices are implemented at the project level and how the project utilizes EVM data. To determine the challenges that NASA has faced in implementing an effective EVM system, we interviewed NASA headquarters personnel to discuss the status and plans for implementing the agency-wide EVM system. We developed a standard set of questions and interviewed EVM Focal Point Working Group members at each center and the Human Exploration and Operations and Science mission directorates to assess the challenges of implementing EVM at individual centers and across the agency. We also interviewed a selection of senior officials and program executives at NASA headquarters that represent projects from each mission directorate and NASA center included in our engagement to obtain their perspective on the challenges of implementing and using EVM on their projects. We also reviewed prior GAO and NASA Inspector General reports that discuss the agency’s prior efforts to implement EVM. We examined GAO and NASA Inspector General reports that discuss the importance of effective organizational change. Additionally, we received written responses to a standard set of questions regarding the challenges associated with implementing EVM from each of the 10 projects we selected for review. 
To determine the steps that NASA is taking to improve its use of earned value management, we examined the results of NASA’s EVM capability pilot projects and draft policies and guidance and compared these with best practices in EVM as discussed in GAO’s Cost Estimating and Assessment Guide, the ANSI/EIA-748 standard, and OMB Circular A-11, Preparation, Submission, and Execution of the Budget and the Capital Programming Guide. In addition, we interviewed NASA headquarters personnel and EVM Focal Point Working Group members at each center and the Human Exploration and Operations and Science mission directorates to discuss and obtain information on ongoing initiatives the agency has undertaken. We conducted this performance audit from June 2011 to November 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Importance of Earned Value Management

Pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion is always a challenge for any project. Without this information, management of the project will be fragmented, presenting a distorted view of project status. For several decades, the Department of Defense (DOD) has utilized a tool called earned value management (EVM) to compare the value of work performed to the work’s actual cost. Earned value management goes beyond the two-dimensional approach of comparing budgeted costs to actual costs. It attempts to compare the value of work accomplished during a given period with the work scheduled for that period.
By using the value of completed work as a basis for estimating the cost and time needed to complete the project, the earned value concept should alert project and senior managers to potential problems early in the project. In 1996, DOD adopted 32 criteria for evaluating the quality of earned value management systems. These 32 criteria are organized into 5 basic categories: organization, planning and budgeting, accounting considerations, analysis and management reports, and revisions and data maintenance. In general terms, the criteria require contractors to define the contractual scope of work using a work breakdown structure; identify organizational responsibility for the work; integrate internal management subsystems; schedule and budget authorized work; measure the progress of work based on objective indicators; collect the cost of labor and materials associated with the work performed; analyze any variances from planned costs and schedules; forecast costs at contract completion; and control changes. The criteria have evolved to become an American National Standards Institute (ANSI) and Electronic Industries Alliance (EIA) standard for EVM, which has been adopted by major U.S. government agencies, industry, and the governments of Canada and Australia. The full application of EVM system criteria is appropriate for large-cost reimbursable contracts where the government bears the cost risk. For such contracts, the management discipline described by the criteria is essential. In addition, data from an EVM system have been proven to provide objective reports of contract status, allowing numerous indices and performance measures to be calculated. These can then be used to develop accurate estimates of anticipated costs at completion, providing early warning of impending schedule delays and cost overruns. The standard format for tracking earned value is through a Contract Performance Report (CPR).
The CPR is a monthly compilation of cost, schedule, and technical data that displays the performance measurement baseline, any cost and schedule variances from that baseline, the amount of management reserve used to date, the portion of the contract that is authorized unpriced work, and the contractor’s latest revised estimate to complete the project. As a result, the CPR can be used as an effective management tool because it provides the project manager with early warning of potential cost and schedule overruns. Using data from the CPR, a project manager can assess trends in cost and schedule performance. This information is useful because trends can be difficult to reverse. Studies have shown that once projects are 15 percent complete, the performance indicators are indicative of the final outcome. For example, a CPR showing a negative trend for schedule status would indicate that the project is behind schedule. By analyzing the CPR, one could determine the cause of the schedule problem, such as delayed flight tests, changes in requirements, or test problems, because the CPR contains a section that describes the reasons for the negative status. A negative schedule condition is a cause for concern because it can be a predictor of later cost problems, since additional spending is often necessary to resolve problems. For instance, if a project finishes 6 months later than planned, additional costs will be expended to cover the salaries of personnel and their overhead beyond what was originally expected. CPR data provide the basis for independent assessments of a project’s cost and schedule status and can be used to project final costs at completion as well as to determine when a project should be completed. Examining a project’s management reserve is another way that a project can use a CPR to identify potential issues early on. Management reserves, which are funds that may be used as needed, provide flexibility to cope with problems or unexpected events.
EVM experts agree that transfers of management reserve should be tracked and reported because they are often problem indicators. An alarming situation arises if the CPR shows that the management reserve is being used at a faster pace than the project is progressing toward completion. For example, a problem would be indicated if a project has used 80 percent of its management reserve but only completed 40 percent of its work. A project’s management reserve should contain at least 10 percent of the cost to complete a project so that funds will always be available to cover future unexpected problems that are more likely to surface as the project moves into the testing and evaluation phase. An Integrated Baseline Review (IBR) is conducted to ensure the reliability of the EVM data and that the performance measurement baseline accurately captures all the work to be accomplished. Data from the CPR can then be used to assess project status—typically, monthly. Cost and schedule variances are examined and various estimates at completion are developed and compared to available funding. The results are shared with management for evaluating contractor performance. Finally, because EVM requires detailed planning for near-term work, as time progresses, planning packages are converted into detailed work packages. This cycle continues until all work has been planned and the project is complete. An IBR is an evaluation of the performance measurement baseline to determine whether all project requirements have been addressed, risks identified, and mitigation plans put in place and all available and planned resources are sufficient to complete the work. Too often, projects overrun because estimates fail to account for the full technical definition, unexpected changes, and risks. Using poor estimates to develop the performance measurement baseline will result in an unrealistic baseline for performance measurement. 
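The management reserve check described above reduces to a simple comparison of burn rate against progress. The following minimal sketch uses the 80 percent spent / 40 percent complete figures from the example in the text; the function name and all values are illustrative, not drawn from any project.

```python
# Minimal sketch: flag when management reserve (MR) is being consumed faster
# than work is being completed. Figures mirror the illustrative 80/40 example
# in the text; none of this is actual project data.

def reserve_outpaces_progress(mr_used, mr_total, ev, bac):
    """True when the share of reserve spent exceeds the share of work earned."""
    return (mr_used / mr_total) > (ev / bac)

# 80 percent of reserve spent against 40 percent of work completed -> warning
print(reserve_outpaces_progress(mr_used=8.0, mr_total=10.0, ev=40.0, bac=100.0))
```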
After the CPRs start being delivered to the government, it is important to oversee the project by conducting surveillance of the EVM system. Surveillance is reviewing a contractor’s EVM system as it is applied to one or more projects. Its purpose is to focus on how well a contractor is using its EVM system to manage cost, schedule, and technical performance. For instance, surveillance checks whether the contractor’s EVM system summarizes timely and reliable cost, schedule, and technical performance information directly from its internal management system; complies with the contractor’s implementation of ANSI/EIA-748 guidelines; provides timely indications of actual or potential problems by performing spot checks, sample data traces, and random interviews; maintains baseline integrity; gives information that depicts actual conditions and trends; and provides comprehensive variance analyses at the appropriate levels, including corrections for cost, schedule, technical, and other problem areas. Effective surveillance ensures that the key elements of the EVM process are maintained over time and on subsequent applications. EVM system surveillance ensures that the contractor is following its own corporate processes and procedures and confirms that the contractor’s processes and procedures continue to satisfy the ANSI guidelines. The surveillance team designated to perform project reviews should consist of a few experienced staff who fully understand the contractor’s EVM system and the processes being reviewed. The surveillance organization should appoint the team leader and ensure that all surveillance team members are independent. This means that they should not be responsible for any part of the projects they assess. Key activities on the surveillance team’s agenda include reviewing documents, addressing government project office concerns, and discussing prior surveillance findings and any open issues. 
Sufficient time should be allocated to all these activities to complete them. The documents for review should give the team an overview of the project’s implementation of the EVM process. Successful surveillance is predicated on access to objective information that verifies that the project team is using EVM effectively to manage the contract and complies with company EVM procedures. Objective information includes project documentation created in the normal conduct of business. Besides collecting documentation, the surveillance team should interview control account managers and other project staff to see if they can describe how they comply with EVM policies, procedures, or processes. During interviews, the surveillance team should ask them to verify their responses with objective project documentation such as work authorizations, cost and schedule status data, variance analysis reports, and back-up data for any estimates at completion.

Appendix III: Case Studies of Selected Projects’ Implementation of Earned Value Management

We conducted case studies of 10 major spaceflight system acquisition projects. This appendix provides a brief description of each project, including an analysis of the project’s earned value data and trends. As part of our analysis, we assessed the projects’ implementation of three fundamental earned value management (EVM) practices that we believe are necessary for maintaining a reliable EVM system—using a certified American National Standards Institute (ANSI) and Electronic Industries Alliance (EIA) compliant system, performing surveillance, and conducting integrated baseline reviews. We also assessed the projects’ EVM data against three ANSI and EIA guidelines. These guidelines state that the authorized work elements for the project should be defined typically using a work breakdown structure (WBS) that has been tailored to the project and that the WBS is the same for the cost estimate, schedule, and EVM.
The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. As mentioned above, this appendix includes an analysis of each project’s earned value trends from August 2010 to August 2011. These data and trends are often described in terms of cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. Schedule variances are also measured in dollars, but they compare the earned value of the completed work with the value of the work that was expected to be completed. Positive variances are good—they indicate that activities are costing less than expected or are completed ahead of schedule. Negative variances are bad—they indicate activities are costing more than expected or are falling behind schedule. Variances are merely measures that indicate that work is not being performed according to plan and that the work must be assessed further to understand why. Although our EVM cost projections may show that a project is experiencing negative cost variances and schedule slippages, this does not mean that a project has exceeded its agency baseline commitment and will require additional funds to complete the project. These estimates use a project’s EVM baseline, which represents only a portion of the agency baseline commitment for a project. The EVM baseline contains the cost and schedule contained in a project’s management agreement minus unallocated future expenses and schedule margin held by the project and others above the project. As of August 2011, the budget at completion for the 10 projects was estimated to be $6.4 billion.
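The cost and schedule variance definitions above can be stated as a small sketch; the dollar amounts are hypothetical and chosen only to show how negative values arise.

```python
# Illustration of the variance definitions: CV = EV - AC, SV = EV - PV.
# Negative values signal cost overruns or schedule slips. Figures are
# hypothetical, not project data.

def variances(pv, ev, ac):
    """Return (cost variance, schedule variance) in dollars."""
    return ev - ac, ev - pv

cv, sv = variances(pv=120.0, ev=100.0, ac=110.0)
print(f"CV = {cv:+.1f} (work cost more than the value earned)")
print(f"SV = {sv:+.1f} (less work completed than planned)")
```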
To estimate the project variance at completion, we examined the trends in the earned value data for the entire project, if data were collected at that level, or for elements of the project. Table 4 provides a summary of the projects’ implementation of each EVM best practice we assessed, along with projected costs. In October 2011, the Global Precipitation Measurement mission was approved for a replan. The contract for the Science Operations Center, Remote Sensing Package, Langmuir Probe and Waves Instrument does not exceed $50 million. Therefore, the supplier is not required to have a certified system for this contract. Orbiting Carbon Observatory 2 is in the process of being rebaselined due to a change in the launch vehicle. With timely and effective action taken by project and executive management, it is possible to reverse negative performance trends so that the projected negative cost variances at completion may be reduced. To get such results, management needs to obtain reliable EVM data from EVM systems that adhere to the ANSI/EIA-748 standard for informed decision making. Until project offices undertake a rigorous validation of their EVM data, NASA faces an increased risk that managers may not be receiving the information they need to effectively manage their projects. The following information describes the key that we used in tables 5 through 14 to convey the results of our assessment of the 10 case study projects’ implementation of EVM practices.

Global Precipitation Measurement

The Global Precipitation Measurement (GPM) mission, a joint NASA and Japan Aerospace Exploration Agency (JAXA) project, seeks to improve the scientific understanding of the global water cycle and the accuracy of precipitation forecasts. The GPM is composed of a core spacecraft carrying two main instruments: a Dual-frequency Precipitation Radar and a GPM Microwave Imager (GMI).
GPM builds on the work of the Tropical Rainfall Measuring Mission and will provide an opportunity to calibrate measurements of global precipitation when it launches in 2014. This analysis focuses only on the GMI-1 effort. Ball Aerospace and Technology Company is the prime contractor for GMI. GMI’s current contract value is $217 million, which represents approximately 23 percent of the total GPM project budget of $932.8 million. The GMI instrument was delivered to Goddard Space Flight Center in February 2012 for integration into NASA’s upcoming Earth science spacecraft. All remaining effort for GMI-1 is post-delivery support, all of which is level of effort. The GPM project provided EVM reports for the entire project, but we did not conduct an analysis of the project EVM data because the performance reports did not contain the detailed data we needed for our analysis.

GPM Microwave Imager Contractor Uses a Certified EVM System Compliant with the ANSI/EIA Standard

The GMI-1 contractor met the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. The Defense Contract Management Agency (DCMA) certified that the GMI-1 contractor’s EVM system complied with the ANSI/EIA standard in April 2008. Though the contractor has a certified system, the implementation of that system is questionable based on our findings below. We assessed GMI-1’s EVM data against three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined typically using a WBS that has been tailored to the project and that the WBS is the same for the cost estimate, schedule, and EVM. Our review found that the WBS in the GMI-1 schedule did not match the WBS used for the EVM data.
Project officials said that the WBS for the project schedule did not match the WBS used in the GMI contractor’s EVM reports because GMI-1 is only one element of the total project and the project-level schedule has a simplified summary of the GMI-1 schedule that was used for completeness. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review of the GMI-1 schedule found some sequencing issues. For example, about 5 percent of the remaining activities were missing predecessor and successor links, which are necessary for properly sequencing work so that the schedule will update in response to changes. We also found that 19 percent of the remaining activities had date constraints, which also hinder the schedule’s ability to respond dynamically to status updates, resulting in an artificial or unrealistic view of the project plan. These sequencing issues and constraint dates within the schedule affect the reliability of the overall network and the schedule’s ability to correctly calculate float values and the critical path. Project officials agreed with our findings but said that these issues are corrected as they are discovered, so there is no impact to the project. Finally, though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. Because we found that the GPM schedule was not resource loaded, it is GAO’s assessment that the project did not show evidence that it had established and maintained a time-phased budget baseline.

Project Conducted an Integrated Baseline Review

The project conducted an integrated baseline review (IBR) in January 2006.
Officials believed the results of that review were less than satisfactory due to the contractor’s inability to demonstrate the integration of contract schedule and cost in accordance with their EVM system description. In particular, 64 areas of concern were identified during the IBR, including major concerns with data continuity, cost/schedule risk, and EVM processes. As a result of these issues and changes to the contract, another IBR was performed in January 2011.

EVM Surveillance Is Being Performed

Joint surveillance reviews of the EVM data are being performed by DCMA and the contractor. According to DCMA, although they have found some deficiencies in the contractor’s EVM data, the contractor has responded with acceptable corrective action plans.

Data Resulting from the EVM System Are Somewhat Reliable

We reviewed contract performance reports from August 2010 to July 2011. Our review of the GMI-1 EVM data found various data anomalies that call into question the reliability of the data. For example, we found negative values for EVM data without any explanation in three monthly reports. Project officials responded that the negative values all fell within the contract threshold and therefore the GMI-1 contractor did not need to provide an explanation in the variance analysis report. In addition, there were many instances of costs and performance being recorded when no work had been scheduled. In response to this finding, project officials explained that this work had been accounted for in previous months, which explains the missing values. Anomalous EVM data prevent the project from gaining meaningful and proactive insight into potential cost and schedule performance shortfalls and from taking corrective action to avoid shortfalls in the future. Figure 2 below illustrates that as of July 2011, the project was reporting a negative cumulative cost variance of $13 million and a negative cumulative schedule variance of $3 million.
GPM project officials said the cumulative negative cost and schedule variances were due to slips caused by suppliers not delivering flight hardware as planned, which pushed uncompleted work into the future. The negative variances were also caused by tasks being worked that were not included in the baseline. Officials also said the project experienced unfavorable variances in labor costs across all integrated project teams, which were further affected by the unfavorable 2010 year-end indirect rate adjustment. To address the negative schedule variance, officials said the contractor increased staffing and added extra shifts, which increased labor costs, thereby increasing the contract value. Officials noted, however, that the increase in contract value did not translate into an increase in the baseline, just an increase in the project funding. Due to both the negative cost and schedule variances, we are forecasting a negative variance ranging from $14 million to $22 million at contract completion. According to NASA, the project is not overrunning its approved baseline commitment, because the EVM baseline does not include unallocated future expenses held at the project and headquarters level.

James Webb Space Telescope

The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope that is designed to find the first galaxies that formed in the early universe. Its focus will include searching for first light, assembly of galaxies, origins of stars and planetary systems, and origins of the elements necessary for life. Scheduled to launch in October 2018, JWST’s instruments will be designed to work primarily in the infrared range of the electromagnetic spectrum, with some capability in the visible range. JWST will have a large primary mirror composed of 18 smaller mirrors, measuring 6.5 meters (21.3 feet) in diameter, and a sunshield that is the size of a tennis court.
A successor to the Hubble Space Telescope and the Spitzer Space Telescope, JWST will reside in an orbit about 1 million miles from the Earth. NASA rebaselined JWST in September 2011 and made changes in the project's management in response to cost and schedule performance issues and the recommendations of the Independent Comprehensive Review Panel report. As part of the rebaseline, NASA took the lead role for systems engineering from the prime contractor. The telescope, along with a segmented primary mirror, will deliver infrared light to the Fine Guidance Sensor and fine pointing updates to the Observatory and four scientific instruments, including the Near-Infrared Camera (NIRCam), the Near-Infrared Spectrograph, the Mid-Infrared Instrument, and the Fine Guidance Sensor/Near InfraRed Imager and Slitless Spectrograph. For work being performed by its international partners, such as the Near-Infrared Spectrograph, EVM data is not collected. At the time of our review, there was no EVM data for the overall JWST project or for the work done in-house on the Integrated Science Instrument Module. According to project officials, the Integrated Science Instrument Module effort began over a decade ago, and significant parts of the project, particularly those undertaken in-house at the Goddard Space Flight Center (GSFC), were not structured to enable EVM to be implemented easily. However, the JWST project office is collecting EVM data from Northrop Grumman Aerospace Systems and obtains copies of EVM data from Lockheed Martin Space Systems-Advanced Technology, the University of Arizona's prime contractor. Northrop Grumman Aerospace Systems is responsible for developing and launching the JWST Observatory, which comprises the spacecraft, sunshield, and optical telescope element, as well as systems integration and test, observatory verification, observatory commissioning, and ground and launch support equipment. The University of Arizona is responsible for developing the Near-Infrared Camera science instrument.
Observatory Contractor Has a Certified EVM System Compliant with ANSI/EIA Standard The observatory contractor met all three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. For the observatory portion of the JWST project, Northrop Grumman Aerospace Systems has a certified EVM System, which it uses to fulfill the earned value reporting requirement. Though the contractor has a certified system, the implementation of that system is questionable based on our findings below. We assessed the observatory contractor's EVM data against three fundamental ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and the WBS is the same for the cost estimate, schedule, and EVM. We found that while consistent naming conventions existed between the WBS and contract performance reports, there were discrepancies in the hierarchical structure and numbering of the WBS elements. For example, the WBS dictionary shows Mission Assurance listed as 3.0, while the contract performance report provided by the observatory contractor has Mission Assurance listed as 2.0. NASA officials responded that the observatory contractor is not required to follow the project level WBS hierarchical structure or the WBS numbering scheme. They further stated that the project and the observatory WBS structures are not identical because procurement of the observatory is only one element of the overall JWST project. NASA did not provide us with a schedule in a format that would allow us to determine if the schedule had the proper sequencing in place. As a result, we were unable to determine if significant task interdependencies such as predecessor and successor links were in place to ensure that the schedule would update in response to changes.
In addition, because we could not view the schedule in its native software, we were unable to determine if the schedule was resource loaded, which is a best practice for establishing and maintaining a time-phased budget baseline. Project Conducted an Integrated Baseline Review An Integrated Baseline Review was conducted in February 2010; however, because of the September 2011 rebaseline, according to the project, an additional IBR was held for the observatory in October 2012. EVM Surveillance Is Being Performed Surveillance is being performed by DCMA. In addition, EVM data is reviewed monthly by multiple individuals in the project office as well as at higher levels of NASA headquarters and the Goddard Space Flight Center. Data Resulting from the EVM System Are Somewhat Reliable We reviewed contract performance reports from August 2010 to July 2011. Our review of the Observatory EVM data found various data anomalies that call into question the reliability of the data. For example, we found actual costs recorded without any work being performed, inconsistencies between the reported estimate at completion and budget at completion, large month-to-month performance swings, and unexplained variances. NASA officials explained that during this time period the observatory contractor was engaged in re-planning efforts, so NASA did not want it to expend resources reporting performance management data to an outdated performance measurement baseline that did not reflect the new rebaseline assumptions. Further, while variance analyses were provided in the variance analysis reports for WBS elements that exceeded contractual thresholds, there was no explanation for the anomalies we found. A variance analysis report provides a detailed, narrative report explaining significant cost and schedule variances and other contract problems and topics. Without this information, management cannot understand the reasons for the variances and the contractor's plan for fixing them.
When information is missing in a variance analysis report, the EVM data will not be meaningful or useful as a management tool. As of July 2011, the observatory portion of the JWST project was 51 percent complete with a positive cumulative cost variance of $2.4 million. For the same period the project was also experiencing a positive cumulative schedule variance of $0.9 million, as seen in figure 3 below. In January 2011, the observatory contractor began replanning the remaining effort in order to meet the October 2018 launch readiness date. As a result, NASA suspended performance measurement reporting during the period of January 2011 through April 2011. However, the observatory contractor was still required to submit contract performance reports depicting actual cost and estimate at completion data. Since there were only 3 months of EVM data after the rebaseline, we were not able to forecast a variance at completion. Since our assessment, JWST project officials said they have made some significant improvements in the implementation and use of EVM, including an EVM approach for the in-house work that will provide EVM metrics to measure progress. The officials said the project is also doing managerial analysis on its contracts and project components and producing an independent estimate at completion each month based on the EVM data. Near-Infrared Camera Contractor Has a Certified EVM System Compliant with ANSI/EIA Standard The Near-Infrared Camera (NIRCam) contractor, Lockheed Martin, met one of the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. The contractor has a certified EVM System that it is using to report EVM data for NIRCam. Our review found similar problems with the WBS hierarchical structure and numbering of the elements. For example, the WBS dictionary shows Mission Assurance listed as 3.0 while the NIRCam contract performance report shows this effort under element 5.5.
NASA officials explained that the NIRCam contractor is not required to follow the project level WBS hierarchical structure or numbering of elements. As stated above, we did not receive a schedule in its native software, so we were unable to confirm whether the schedule was sequenced using predecessor and successor links or whether it was resource loaded, which is necessary for establishing the time-phased budget baseline. Project Conducted an Integrated Baseline Review As stated above, while the JWST project conducted an IBR in February 2010, the rebaseline necessitated a new IBR, which occurred in March 2012. EVM Surveillance Is Not Being Performed While formal surveillance is not occurring for the Lockheed Martin EVM data, monthly reviews of the EVM data are performed by both the project and program offices and by independent groups. As part of these reviews, trending metrics are prepared and presented to management as part of the internal project reviews and monthly status reviews. Data Resulting from the EVM System Are Somewhat Reliable We reviewed contract performance reports from August 2010 through July 2011 and found various data anomalies that call into question the reliability of the data. For example, we found EVM data with negative values, no work scheduled but work performed, and actual costs being incurred without any work being performed. NASA officials stated that variances that exceeded contractual thresholds should be reflected in the variance analysis reports; however, many of these anomalies did not breach a variance threshold, so the reports provided no explanation. In addition, NASA officials explained that many of these anomalies occurred during the project replan, which was formally approved in September 2011, when the project was rebaselined. As a result of the replan, NASA suspended EVM data reporting, which resulted in many of the anomalies we found. As of July 2011, the NIRCam portion of the JWST project was 98 percent complete.
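In EVM terms, a percent complete figure like this one is the cumulative earned value divided by the budget at completion (BAC). A one-line sketch with hypothetical numbers:

```python
# Percent complete in EVM terms: cumulative earned value (BCWP) over
# budget at completion (BAC). The values used here are hypothetical.

def percent_complete(bcwp_cum, bac):
    return 100.0 * bcwp_cum / bac

print(percent_complete(98.0, 100.0))  # 98.0
```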
In July 2011, there was a negative cumulative cost variance of $33 million and a negative cumulative schedule variance of $4.4 million, as seen in figure 4 below. The reasons for the downward trend reflected in the graph below were not explained because variance analysis reporting was suspended during the replan period. Based on the downward trend, we are forecasting a negative variance at completion ranging from $34 million to $48 million. This analysis is based on information prior to the project's 2011 replan. The projected variances do not take into account the establishment of a new EVM baseline as a result of the replan. Landsat Data Continuity Mission The Landsat Data Continuity Mission (LDCM) is a joint mission between NASA and the United States Geological Survey (USGS) that seeks to extend the ability to detect and quantitatively characterize changes on the global land surface at a scale where natural and man-made causes of change can be detected and differentiated. It is the successor mission to Landsat 7. The Landsat data series, begun in 1972, has provided the longest continuous record of changes in the Earth's surface as seen from space. Landsat data is a resource for people who work in agriculture, geology, forestry, regional planning, education, mapping, and global change research. The LDCM provides remotely sensed, highly calibrated, moderate resolution, multispectral imagery affording systematic global coverage of the Earth's land surfaces on a seasonal basis and makes the data readily available for large-scale and long-term Earth system science and land use/land cover change research and management. The project plans to launch in early February 2013. LDCM consists of an Operational Land Imager (OLI) and a Thermal Infrared Sensor (TIRS) science instrument, a spacecraft, and a mission operations element. LDCM does not collect EVM data at the project level.
The decision not to perform EVM at the project level was reviewed extensively prior to proceeding into the design and development phase, according to project officials. Also, there is no EVM data for the spacecraft effort because this work is being done under a firm fixed price contract and NASA regulations do not require EVM for firm fixed price contracts. The TIRS instrument, built in-house at NASA’s Goddard Space Flight Center, was added late in the formulation phase with an aggressive delivery schedule and delivered in February 2012. The Ground System is being built and delivered by the USGS. The developmental part of the OLI contract with Ball Aerospace and Technology Corporation was completed with delivery of the instrument in early October 2011. Following on-orbit checkout, the contract will transfer to USGS for management. The OLI instrument is the only part of the project that performed EVM. Operational Land Imager Contractor Has a Certified EVM System Compliant with ANSI/EIA Standard The OLI contractor met two of the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. In 2007, the Defense Contract Management Agency (DCMA) certified Ball Aerospace and Technology Corporation’s EVM system. Though the contractor has a certified system, the implementation of that system is questionable based on our findings below. We assessed the contractor EVM data against three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined typically using a WBS that has been tailored to the project and the WBS is the same for the cost estimate, schedule, and EVM. Our review found that the WBS in the OLI schedule was consistent with the WBS used for the EVM data. 
The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our analysis found 11 percent of the remaining activities were missing dependencies, 13 percent had lags, and 24 percent had constraints, among other things. When schedules are not sequenced properly, float values and the calculated critical path will not be valid. Project officials said the schedule sequencing is driven by external forces such as facilities availability, spacecraft and ground system interfaces, and DCMA inspections. The effects of these forces on schedule sequencing and the critical path are reviewed extensively, and the validity of the critical path is not typically an issue for project management. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guideline, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. Our analysis found that the OLI schedule was resource loaded. Project Conducted an Integrated Baseline Review The project conducted an IBR in 2007. The IBR identified 80 areas of concern, and as a result the LDCM project did not accept the Performance Measurement Baseline at the IBR. However, since 2007, Ball Aerospace and Technology Corporation has addressed the areas of concern. EVM Surveillance Is Being Performed Joint surveillance reviews are being conducted on Ball Aerospace and Technology Corporation's EVM system by DCMA and the Defense Contract Audit Agency.
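The schedule-sequencing analysis described above counts remaining (incomplete) activities that are missing dependencies, carry lags, or carry date constraints. A minimal sketch of such a health check; the activity fields are assumptions for illustration and are not tied to any particular scheduling tool's export format:

```python
# Sketch of the schedule health checks applied to remaining (incomplete)
# activities: missing dependency links, lags, and date constraints.
# The activity record fields are hypothetical, not a specific tool's format.

def schedule_health(activities):
    remaining = [a for a in activities if not a["complete"]]
    n = len(remaining) or 1  # avoid dividing by zero on an empty schedule
    missing = sum(1 for a in remaining
                  if not a["predecessors"] or not a["successors"])
    lags = sum(1 for a in remaining if a.get("lag_days", 0) != 0)
    constrained = sum(1 for a in remaining if a.get("constraint") is not None)
    return {
        "missing_deps_pct": round(100 * missing / n),
        "lag_pct": round(100 * lags / n),
        "constraint_pct": round(100 * constrained / n),
    }

acts = [
    {"complete": False, "predecessors": [], "successors": [],
     "lag_days": 0, "constraint": None},
    {"complete": False, "predecessors": ["A"], "successors": ["C"],
     "lag_days": 5, "constraint": None},
    {"complete": False, "predecessors": ["B"], "successors": [],
     "lag_days": 0, "constraint": "start-no-earlier-than"},
    {"complete": True, "predecessors": [], "successors": [],
     "lag_days": 0, "constraint": None},
]
print(schedule_health(acts))  # {'missing_deps_pct': 67, 'lag_pct': 33, 'constraint_pct': 33}
```

High percentages on any of these counts, as the report notes, mean float values and the calculated critical path cannot be trusted, because the network will not update correctly when activities slip.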
Data Resulting from the EVM System Are Somewhat Reliable We reviewed contract performance reports from August 2010 to August 2011. While the EVM data reflected several data anomalies, Ball Aerospace provided detailed explanations for each of them. For example, negative values were attributed to year-end rate savings, labor corrections, or material transfers that were greater than the current month actual costs, which resulted in a negative number. The EVM data assessed below reflects all work associated with the LDCM OLI instrument. Figure 5 below illustrates that as of August 2011, the project was reporting a negative cumulative cost variance of approximately $46 million and a negative cumulative schedule variance of $1.3 million. The negative cumulative cost and schedule variances were due to various technical challenges experienced during instrument development, including detector fabrication issues, coatings issues that necessitated the build of a second calibration subassembly, and instrument integration onto the baseplate taking two weeks longer than expected to complete. Due to the negative variances, we are forecasting a negative variance at completion ranging from $49 million to $52 million. According to a project official, the project is not overrunning its agency baseline commitment, because the EVM baseline does not include unallocated future expenses held by the project or NASA headquarters. The developmental part of the OLI instrument contract with Ball Aerospace and Technology Corporation was completed with delivery of the instrument in early October 2011. Lunar Atmosphere and Dust Environment Explorer The Lunar Atmosphere and Dust Environment Explorer (LADEE) will determine the global density, composition, and time variability of the lunar atmosphere. LADEE's measurements will determine the size, charge, and spatial distribution of electrostatically transported dust grains.
Additionally, it will carry an optical laser communications demonstrator that will test high-bandwidth communication from lunar orbit. Finally, it will broaden the scientific understanding of other planetary bodies with exospheres, or very thin atmospheres, like the moon. Project Office Does Not Have a Certified EVM System Compliant with the ANSI/EIA Standard LADEE met only one of three key fundamental ANSI/EIA-748 practices for a reliable EVM system. NASA's EVM guidance says that projects must start reporting EVM data once the project enters the implementation phase, if the project's life-cycle cost is at or greater than $20 million. While LADEE may not have a contract that exceeds $20 million, the overall LADEE project cost is about $262.9 million. Nevertheless, project officials said LADEE is responsible only for gathering "EVM-like" data at the project level. The "EVM-like" data is collected using the "EVM Lite" process, which attempts to meet the ANSI/EIA-748 standard where applicable. Project officials said that when LADEE was initiated in February 2008, NPR 7120.5D was still in effect, which required the application of EVM principles. This is, in effect, "EVM-Lite" or "EVM-Like." Officials further stated that prior to August 2011, the LADEE project evaluated candidate EVM techniques using sample data from January to March 2011. Based on that evaluation period, LADEE decided to use the "EVM Lite" technique to collect the necessary data to manage the project. From April to June 2011, additional evaluations of this technique continued. Therefore, standard reporting of the LADEE "EVM-like" project level data did not begin until August 1, 2011. We reviewed all available EVM data from September 2010 to June 2011, which was before the standard EVM reporting period began. Though LADEE does not have a certified system, we assessed how well the LADEE project was meeting three ANSI/EIA guidelines.
These guidelines state that the authorized work elements for the project should be defined, typically using a WBS tailored to the project, and the WBS should be the same for the schedule, cost estimate, and EVM. Our analysis found that the WBS in the LADEE schedule matched the WBS used for EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review of the LADEE schedule found 3 percent of the remaining activities were missing predecessor or successor links, which prevents the schedule from updating properly in response to changes. We also found that about 6 percent of the remaining activities had date constraints and/or lags, which also hinder the schedule from responding dynamically to changes and can portray an artificial or unrealistic view of the project plan. While these issues may be relatively small, any missing dependencies, constraints, and lags can undermine the reliability of the overall network. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that the LADEE schedule was not resource loaded; therefore, it is GAO's assessment that the project did not show evidence that it had established and maintained a time-phased budget baseline. Project Conducted an Integrated Baseline Review LADEE conducted an IBR in December 2010. From that review, a list of concerns about the EVM data and schedule was identified.
Specifically, the project was not using a consistent approach to collect EVM data, calling into question the credibility of the data. In addition, it was unclear if an objective assessment of cost and schedule performance could be made using data generated by the "EVM Lite" approach. Since that review, officials said all issues and actions have been addressed and were formally closed by the IBR panel in July 2011. EVM Surveillance Is Not Being Performed While formal surveillance is not occurring, EVM data assurance reviews are being performed during monthly management reviews with the Lunar Quest Program office. Data Resulting from the EVM System Are Somewhat Reliable We reviewed contract performance reports from September 2010 to June 2011. Our review of LADEE's EVM data found various data anomalies that call into question the reliability of the data. For example, from October 2010 to December 2010 there were some instances of negative values in the EVM reports that were unexplained. Since a variance analysis report provides a detailed narrative explaining significant cost and schedule variances, when this information is missing, management cannot understand the reasons for variances and the plan for fixing them. Also, the EVM data provided by the project office was not presented in a standard EVM format. This could be attributed to the fact that LADEE is required to provide only "EVM-like" data. Figure 6 below illustrates that as of June 2011, the LADEE project was reporting a positive cumulative cost variance of $3 million and a negative cumulative schedule variance of $10 million. LADEE project officials provided no information regarding the drivers of the positive cost and negative schedule variances. As such, we have no insight into what could be causing deviations from the plan. Based on the positive cost variance and negative schedule variance thus far, we are forecasting a negative variance at completion ranging from $0.1 million to $13 million.
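Variance-at-completion forecasts like this one are commonly derived from index-based estimate-at-completion (EAC) formulas, where VAC equals BAC minus EAC; a range results from applying more than one efficiency assumption. A minimal sketch of two standard index-based EACs, using hypothetical numbers rather than LADEE's actual data:

```python
# Index-based estimate-at-completion (EAC) formulas commonly used to bound a
# variance-at-completion forecast (VAC = BAC - EAC). Inputs are hypothetical,
# not LADEE's actual contract data.

def eac_range(bac, bcwp, acwp, bcws):
    cpi = bcwp / acwp  # cost performance index (cost efficiency to date)
    spi = bcwp / bcws  # schedule performance index (schedule efficiency to date)
    eac_cpi = bac / cpi                                # cost efficiency persists
    eac_composite = acwp + (bac - bcwp) / (cpi * spi)  # cost and schedule persist
    return eac_cpi, eac_composite

# Hypothetical cumulative values, in millions of dollars: slightly favorable
# cost performance but unfavorable schedule performance, as with LADEE.
bac, bcwp, acwp, bcws = 200.0, 90.0, 87.0, 100.0
low, high = sorted(eac_range(bac, bcwp, acwp, bcws))
print(f"VAC forecast range: {bac - high:+.1f} to {bac - low:+.1f} ($ millions)")
```

The composite formula penalizes the forecast for schedule slippage even when cost performance is favorable, which is how a project with a positive cost variance can still carry a negative variance-at-completion forecast.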
According to NASA, the project is not overrunning its commitment because the EVM baseline does not include unallocated future expenses held at the project and NASA headquarters level. Magnetospheric Multiscale The Magnetospheric Multiscale (MMS) mission is made up of four identically instrumented spacecraft. The mission will use the Earth's magnetosphere as a laboratory to study the microphysics of magnetic reconnection, energetic particle acceleration, and turbulence. Magnetic reconnection is the primary process by which energy is transferred from the solar wind to Earth's magnetosphere and is the physical process determining the size of a space weather storm. The four spacecraft will fly in a pyramid formation, adjustable over a range of 10 to 400 kilometers. The data from MMS will be used as a basis for predictive models of space weather in support of exploration. The MMS spacecraft is being designed, developed, and tested in-house at the Goddard Space Flight Center (GSFC), while instrument development activities are under contract with the Southwest Research Institute (SwRI). The Mission Operations Center and the Flight Dynamics Operations Area will be developed and operated at GSFC. The Science Operations Center for the instruments will be developed and operated at the Laboratory for Atmospheric and Space Physics at the University of Colorado and is under contract to SwRI. The MMS project office is collecting EVM data both at the project level as well as from SwRI, which is responsible for the entire instrument suite. Therefore, the SwRI instrument suite effort is a subset of the overall MMS project level EVM report. Project Office Does Not Have a Certified EVM System Compliant with the ANSI/EIA Standard At the project level, MMS met one of three fundamental ANSI/EIA-748 practices for a reliable EVM system. MMS does not have a certified EVM system that complies with the ANSI/EIA-748 standard.
NASA project officials said in-house projects are required only to be ANSI/EIA compliant and are not required to have a certified system. Although the MMS project does not have a certified system, we assessed how well the MMS project was meeting three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and the WBS is the same for the cost estimate, schedule, and EVM. Our review found that the WBS in the MMS schedule did not match the WBS used for the EVM data, which is not in line with best practices. According to project officials, the MMS project was started before the requirements for earned value management were developed. As a result, the schedule and WBS were created without significant consideration of a one-to-one correlation between the two. A project official stated that MMS has retrofitted its EVM system to provide as close a correlation as possible without having to rebuild the WBS. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review of the MMS schedule found some sequencing issues within the schedule. For example, 9 percent of tasks were missing predecessor or successor links, which are necessary for properly sequencing work so that the schedule updates properly once changes are made. This figure excludes all external tasks and level of effort (LOE) type activities. We also found that 21 percent of the remaining activities were constrained. This figure also excludes LOE type activities. In fact, the majority of these constraints were hard constraints.
Hard constraints can sometimes be impossible to meet, given the network characteristics, and can thereby result in schedules that are logically impossible to carry out. The presence of constraints also impacts the schedule's ability to respond dynamically to changes and may portray an unrealistic view of the project plan. As a result, these sequencing issues and date constraints within the schedule affect the reliability of the overall network and the schedule's ability to correctly calculate float values and the critical path. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guideline, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that the MMS schedule was resource loaded. Project Conducted an Integrated Baseline Review The MMS project office conducted an IBR in June 2010. This resulted in positive comments, general observations, and constructive recommendations by the review team. EVM Surveillance Is Not Being Performed While formal surveillance is not occurring at the project level, EVM data assurance reviews are being performed by the Explorers and Heliophysics Program Office, GSFC Flight Projects Directorate managers, the GSFC Chief Financial Officer's Office, the Standing Review Board, and NASA headquarters. Data Resulting from the EVM System Are Somewhat Reliable We reviewed contract performance reports from August 2010 to July 2011. Our review of the MMS project level EVM data found some minor data reliability issues. The 12 months of project level EVM data reflect all of the in-house effort performed at GSFC for the project and the instrument suite contractor's summary level effort.
Also, the data provided was not reported in the standard contract performance report format. Project officials said a defined contract performance report was not dictated to the MMS project, but that all reporting has the same information as a "standard" contract performance report even though the formatting may be different. However, beginning in October 2011, MMS began reporting with the standard format 1 contract performance report, which provides cost and schedule data for each element in the project's product-oriented WBS. We tried to map the EVM data in the lower level report for the instrument suite to the MMS project level report, and we were not able to see how the costs tracked from one report to another. For example, the project level EVM data showed that the instrument suite contractor's July 2011 budget at completion was $296 million, whereas the lower level EVM data in the instrument suite contractor's report showed the budget at completion to be $217 million. The MMS project was able to demonstrate how the SwRI budget at completion in the lower level report mapped to the SwRI budget at completion in the MMS project report. However, officials said that because of the way the contractor submits its data, the two reports will never match. Though we acknowledge that the project was able to explain how the data tracked, attempting to manually resolve incompatible pieces of data can become time-consuming and expensive and can lead to data reliability issues. Project officials said that MMS is working to capture data at lower WBS levels, which will allow for a closer tie between the cost and schedule data. MMS officials also said that, in addition to receiving the SwRI EVM reports, the project now internally calculates earned value metrics on the contractor-provided instrument suite EVM data, which gives MMS completely internally derived earned value performance reports based on the project team's assessment, without bias from contractor data.
Figure 7 below illustrates that as of July 2011, the project was reporting a negative cumulative cost variance of $18 million and a negative cumulative schedule variance of $25 million. In October 2010, the project was in the midst of a replan, and not all data was available at the time of report submission to generate detailed variance explanations. The replan was conducted so that the earned value baseline was the same as the cost plan required by the agency for monthly plan-versus-actual reporting. The goal of the replan was to prevent the project from having to report variances against two different plans. Since the replan, however, the project has experienced a downward trend in both cost and schedule performance. As a result, we are forecasting a negative variance at completion ranging from $47 million to $80 million. NASA stated that the project is not overrunning its approved baseline commitment, because the EVM baseline does not include unallocated future expenses held at the project and headquarters level. Instrument Suite Contractor Does Not Have a Certified EVM System Compliant with ANSI/EIA Standard SwRI, the contractor responsible for the entire instrument suite for MMS, met one of the three fundamental ANSI/EIA-748 practices for a reliable EVM system. SwRI does not have a certified EVM system that complies with the ANSI/EIA-748 standard. According to a project official, the SwRI contract does not require SwRI to have a certified system but only to be compliant with ANSI/EIA-748. NASA convened an independent team to review the contractor's readiness for EVM system certification; the team concluded that while the contractor has qualified people to support implementation of EVM, a single point of failure exists without a documented process. Not documenting the process is a problem because if the people who know the process leave, new staff will not know what to do.
In addition, the team found that even though the right software tools are in place to support EVM, more integration is needed to reduce manual inputs. Finally, the team reported that compliance with the ANSI/EIA-748 standard would not be achievable without management support and resources. Despite these findings, the project office believed that the contractor's EVM data is useful in examining trends and overall performance of the instrument suite effort. Though SwRI does not have a certified system, we assessed how well the contractor was meeting three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the WBS should be the same for the cost estimate, schedule, and EVM data. Our review found that the WBS used in the schedule was not consistent with the WBS used for the EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review of the schedule found some sequencing issues. For example, the Primavera schedule provided showed that 31 percent of the remaining activities were missing dependencies and 36 percent were constrained. Officials said the majority of the constrained tasks were due to external dependencies. Due to the major sequencing issues in the MMS SwRI schedule, we question the reliability of the overall network and the schedule's ability to correctly calculate float values and the critical path. MMS project officials said they believed many of the constrained activities are not valid because they reside in another schedule. Also, some constraints, in the Harness area for example, have no effect on the overall schedule if removed.
Officials also said some of the sequencing issues may be caused by manual integration: because some instrument provider schedules are in Microsoft Project and others are in Primavera, it is not possible to ensure all tasks have been linked properly. However, the MMS project scheduler tests the schedule for missing dependencies, logic errors, and reasonable durations, and the results are shared with the project office and the contractor so that appropriate action can be taken. Lastly, officials said several of the activities identified in our analysis are not really schedule items but level-of-effort (LOE) activities. When we removed the 15 LOE-type activities from the missing dependencies count, the schedule still showed 28 percent of the remaining activities missing dependencies. When we removed the 14 LOE-type activities and the 4 Harness activities from the constraint count, the schedule still showed 33 percent of the remaining activities constrained. Because the schedule is the foundation for the EVM baseline, we question the reliability of the instrument suite EVM data. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that the MMS schedule was resource-loaded. Instrument Suite Contractor Conducted an Integrated Baseline Review The MMS project conducted an IBR of the MMS instrument suite effort in January 2010, and all action items have been closed out.
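The kind of schedule health check described above, including the recount after excluding level-of-effort activities, can be sketched as a simple tally over incomplete tasks. The task records, field names, and counts below are hypothetical, not the actual MMS SwRI schedule.

```python
def health_check(tasks, exclude_loe=False):
    """Percent of remaining (incomplete) activities missing logic links
    or carrying date constraints, optionally excluding LOE activities."""
    remaining = [t for t in tasks if not t["complete"]]
    if exclude_loe:
        remaining = [t for t in remaining if not t["loe"]]
    n = len(remaining)
    missing = sum(1 for t in remaining
                  if not t["predecessors"] or not t["successors"])
    constrained = sum(1 for t in remaining if t["constraint"] is not None)
    return {"missing_links_pct": 100 * missing / n,
            "constrained_pct": 100 * constrained / n}

# Hypothetical task records for illustration.
tasks = [
    {"complete": False, "loe": False, "predecessors": ["A"],
     "successors": [], "constraint": "start-no-earlier-than"},
    {"complete": False, "loe": True, "predecessors": [],
     "successors": [], "constraint": None},
    {"complete": False, "loe": False, "predecessors": ["B"],
     "successors": ["C"], "constraint": None},
    {"complete": True, "loe": False, "predecessors": ["C"],
     "successors": ["D"], "constraint": None},
]

print(health_check(tasks))                    # all remaining activities
print(health_check(tasks, exclude_loe=True))  # LOE activities removed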
EVM Surveillance Is Not Being Performed While the instrument suite contractor does not have a formal surveillance program, the MMS project office has an EVM analyst, schedule team, resources team, instrument management team, and project management team who all review the instrument suite EVM data on a monthly basis. In addition, the Solar Terrestrial Probes project office, Science Mission Directorate management, and The Aerospace Corporation review the instrument suite EVM data monthly. Data Resulting from the EVM System Are Somewhat Reliable We reviewed contract performance reports from August 2010 to August 2011. Our review of the instrument suite EVM data found various data anomalies that call into question the reliability of the data. For example, there were negative numbers reported for EVM data in four of the months that we reviewed. For some of the negative numbers, there was no explanation of the cause. For others, the negative values were due to correcting several months of translation errors brought on by a known issue with importing data from the schedule into the EVM system software. There were also errors, such as inflated EVM data that, once corrected, resulted in negative values. Figure 8 below illustrates that as of August 2011, the project was reporting a negative cumulative cost variance of $4 million and a negative cumulative schedule variance of $6 million. The cost variance in August 2011 improved dramatically from the downward trend of the previous months because the project applied almost $13 million from its management reserve to the instrument suite contract. However, due to the negative cumulative cost and schedule variances, we are forecasting a negative variance at completion ranging from $10 million to $24 million.
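The cumulative variances and variance-at-completion ranges cited throughout this appendix follow from standard EVM arithmetic; two common estimate-at-completion formulas bound the forecast range. The sketch below illustrates the formulas only: the dollar inputs are hypothetical, and GAO's actual forecasting method may differ in detail.

```python
# Hypothetical cumulative values, in millions of dollars.
bcws = 250.0   # budgeted cost of work scheduled (planned value)
bcwp = 225.0   # budgeted cost of work performed (earned value)
acwp = 243.0   # actual cost of work performed
bac  = 500.0   # budget at completion

cv = bcwp - acwp          # cost variance: negative means over cost
sv = bcwp - bcws          # schedule variance: negative means behind plan
cpi = bcwp / acwp         # cost performance index
spi = bcwp / bcws         # schedule performance index

# Two independent estimates at completion (EAC) bound the forecast range:
eac_cpi = bac / cpi                                # current cost efficiency continues
eac_composite = acwp + (bac - bcwp) / (cpi * spi)  # also penalizes schedule slippage

vac_best = bac - eac_cpi         # variance at completion, optimistic bound
vac_worst = bac - eac_composite  # pessimistic bound
print(f"CV = {cv:.0f}, SV = {sv:.0f}")
print(f"VAC range: {vac_worst:.0f} to {vac_best:.0f} ($M)")
```

A negative variance at completion, as forecast for MMS above, means the project is expected to finish over its EVM baseline budget absent corrective action.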
Mars Atmosphere and Volatile EvolutioN The Mars Atmosphere and Volatile EvolutioN (MAVEN) mission is part of NASA's Mars Scout program, a robotic orbiter mission that will provide a comprehensive picture of the Mars upper atmosphere, ionosphere, solar energetic drivers, and atmospheric losses. Set to launch in 2013, MAVEN will deliver comprehensive answers to long-standing questions regarding the loss of Mars' atmosphere, climate history, liquid water, and habitability. MAVEN will provide the first direct measurements ever taken to address key scientific questions about Mars' evolution. Lockheed Martin is building the MAVEN spacecraft and will carry out mission operations for MAVEN. NASA's Jet Propulsion Laboratory will navigate the spacecraft. The Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado at Boulder will coordinate the science team and science operations and lead the education and public outreach activities. NASA's Goddard Space Flight Center will provide management and technical oversight for the mission and will also provide mission systems engineering, mission design, and safety and mission assurance. The MAVEN project office is using EVM at the project level as well as collecting EVM data from Lockheed Martin, the spacecraft contractor, and LASP, which is responsible for the Science Operations Center, Remote Sensing, and Langmuir Probe and Waves instrument efforts. Both the Lockheed Martin and LASP contracts are subsets of the overall MAVEN project EVM report. Project Does Not Have a Certified EVM System Compliant with ANSI/EIA Standard MAVEN fully met one of the three key practices for implementing EVM at the project level. The project did not have a certified EVM system, though it is not required to have one. Nevertheless, as part of our analysis, we assessed MAVEN's EVM data against three ANSI/EIA guidelines.
These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the WBS should be the same for the cost estimate, schedule, and EVM data. We found that the project's WBS was consistent between the schedule and EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review found some sequencing issues in the schedule. For example, 5 percent of the activities were missing dependencies, 6 percent had open-ended logic links, and 20 percent had constraints, among other things. When schedules are not sequenced properly, float values and the calculated critical path will not be valid. In addition, the project conducted an integrated baseline review in July 2011. This review found that the project schedule and technical design were in good shape, but noted concerns that more resources were needed to implement and maintain EVM, that cost and schedule integration issues caused the budgets for some work packages to be out of sync with the schedule, and that reliable critical path analysis was at risk because of missing schedule links and constraints. The MAVEN project office provided a May 2012 Schedule Health Check Report that showed the number of missing dependencies and constraints had been reduced considerably. Though we cannot validate the improvement in the schedule without performing our own assessment, we believe that MAVEN is working toward producing a more reliable schedule. Finally, a project should establish and maintain a time-phased budget baseline at the control account level, against which performance can be measured.
Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that the schedule was resource-loaded. Project Conducted an Integrated Baseline Review The project conducted an IBR in July 2011, and all nine areas of concern were addressed and closed. EVM Surveillance Is Not Being Performed While formal surveillance is not occurring at the project level, EVM data assurance reviews are being performed by Mars Program Office representatives, MAVEN Standing Review Board representatives, and Aerospace Corporation representatives at both the project level and for LASP efforts. Data Resulting from the EVM System Is Reliable We reviewed contract performance reports from June 2011 to August 2011. EVM data prior to spring 2011 was not available because MAVEN had not been confirmed into the implementation phase. Our review of the MAVEN project EVM data found a mistake that caused the costs to be overstated at twice their actual amount. Other than this mistake, which affected most of the data in the report, no other errors were found. The project EVM data reflects all work on this project, including effort related to the project office, two in-house instruments, and the Space Sciences Laboratory, as well as major efforts from Lockheed Martin and LASP. Consolidated reporting of all components at a summary level began with the May 2011 data, with the first full summary report delivered on July 15, 2011. Therefore, we had only 3 months of data to review. Figure 9 below illustrates that as of August 2011, the project was reporting a positive cumulative cost variance of $5 million while also experiencing a negative cumulative schedule variance of $5 million.
The positive cumulative cost variance was due to a decrease in labor charges, delayed material costs, a reduction in re-work, and the leveraging of common engineering products from other projects. The negative schedule variance was being driven by the Neutral Gas and Ion Mass Spectrometer instrument, which experienced technical issues such as vendor machines not being manufactured to specifications. Because we had only 3 months of data, we did not forecast an estimate at completion. In addition, we tried to map the EVM data in the lower-level reports for the spacecraft, Science Operations Center, Remote Sensing, and Langmuir Probe and Waves efforts to the overall MAVEN project EVM report, and in some cases we were not able to see how the costs tracked from one report to another. For example, while we could easily trace the costs for the Science Operations Center effort from the lower-level EVM report to the overall MAVEN project report, we could not clearly map the costs for the spacecraft, Remote Sensing, or Langmuir Probe and Waves efforts. In particular, for the Remote Sensing and Langmuir Probe and Waves efforts, the lower-level EVM report cost elements did not have their costs burdened at the WBS level, which could account for some of the differences between the lower-level report costs and the overall MAVEN project costs for those elements. MAVEN project officials walked us through their process for ensuring that lower-level reports map to the project-level reports. In addition, the MAVEN project provided supporting documentation that validated this assertion. Though MAVEN project officials helped explain the mapping, officials said they do not mandate that their contractors follow a certain reporting format; instead, any adjustments necessary to ensure that the lower-level reports map to the project-level reports are made manually by the project office.
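The burdening gap described above can be illustrated with a simple adjustment: if a lower-level report carries unburdened (direct) costs, an overhead rate must be applied before comparing them to the burdened costs in the project-level report. The rate and dollar values below are hypothetical, not the actual MAVEN or LASP figures.

```python
burden_rate = 0.30  # hypothetical composite overhead rate

lower_level_direct = {"Remote Sensing": 10.0,          # $M, unburdened
                      "Langmuir Probe and Waves": 6.0}
project_level = {"Remote Sensing": 13.0,               # $M, burdened
                 "Langmuir Probe and Waves": 7.8}

# Apply the burden rate to the direct costs, then compare report to report.
for element, direct in lower_level_direct.items():
    burdened = direct * (1 + burden_rate)
    delta = project_level[element] - burdened
    status = "maps" if abs(delta) < 0.1 else f"off by ${delta:.1f}M"
    print(f"{element}: {status}")
```

When the applicable rate is not documented in either report, an independent reviewer cannot perform this adjustment, which is why the reports could not be reconciled without the project office's help.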
Though the MAVEN project does not prescribe a standard reporting format, attempting to manually resolve incompatible pieces of data can become time-consuming and expensive and can lead to data reliability issues. Although the agency provided explanations for the mapping issues, the ability to reconcile the costs between the reports should be evident without additional explanations. Lockheed Martin Has a Certified EVM System Compliant with the ANSI/EIA Standard Lockheed Martin, the spacecraft contractor, met the three fundamental ANSI/EIA practices necessary for a reliable EVM system. In August 2008, the Defense Contract Management Agency (DCMA) certified that Lockheed Martin's EVM system was compliant with the ANSI/EIA standard. However, the implementation of that EVM system is questionable based on our findings. We assessed the contractor's data against the three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the WBS should be the same for the cost estimate, schedule, and EVM data. We found that the project's WBS in the schedule was consistent with the WBS used in the EVM data. However, we found issues with the schedule. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review found some sequencing issues in the schedule. For example, 2 percent of the remaining activities were missing predecessor and successor links, which are necessary for properly sequencing work so that the schedule will update in response to changes.
We also found that 37 percent of the remaining activities had constraints, which also hinder the schedule's ability to respond dynamically to status updates, resulting in an artificial or unrealistic view of the project plan. These issues with the schedule affected the reliability of the overall network and the schedule's ability to correctly calculate float values and the critical path. Project officials further explained that some of the “start no earlier than” constraints were due to resource availability and that “finish no later than” constraints were used intentionally to plan task activities to occur as late as possible. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that the contractor schedule was resource-loaded. Project Conducted an Integrated Baseline Review The project conducted the spacecraft's IBR in April 2011. During the review, several areas of concern regarding the schedule were identified. The project office said that during the integrated baseline review, the review team identified many of the same observations about the schedule as our findings. As a result, the project office directed the contractor to eliminate the constraints, lags, and missing logic links in its integrated master schedule. Since the November 2011 schedule submittal, the contractor has decreased the number of sequencing issues in the schedule, according to project officials.
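The constraint types officials cited above differ in how much they restrict the network: a "start no earlier than" constraint is soft (the task can still slip later), while a "finish no later than" constraint is hard and can override network logic. A sketch of sorting a schedule's constraints by severity follows; the constraint names and task list are hypothetical.

```python
# Soft constraints still let the network push dates later;
# hard constraints can override the network logic entirely.
SOFT = {"start-no-earlier-than", "as-late-as-possible"}
HARD = {"finish-no-later-than", "must-finish-on", "must-start-on"}

constraints = [                                      # hypothetical tasks
    ("Battery delivery", "start-no-earlier-than"),   # vendor availability
    ("Environmental test", "finish-no-later-than"),  # planned as late as possible
    ("Integration review", "must-finish-on"),
]

kinds = []
for task, ctype in constraints:
    kind = "soft" if ctype in SOFT else "hard" if ctype in HARD else "unknown"
    kinds.append(kind)
    print(f"{task}: {ctype} ({kind})")
```

Separating hard from soft constraints helps a reviewer judge how many of a schedule's constrained activities actually threaten the validity of the calculated critical path.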
The project office also said that the contractor continues to conduct schedule health checks to uncover any sequencing issues and provides the project office with schedule variance reports and critical path analysis, which are discussed during monthly management meetings. EVM Surveillance Is Being Performed Joint surveillance reviews of the EVM data are being performed by DCMA, MAVEN project officials, and the contractor. Data Resulting from the EVM System Is Reliable We reviewed contract management reports from January 2011 to August 2011. Our review of the EVM data found no major issues with data reliability. However, from January to May 2011, there were no variance analysis reports produced to explain significant cost and schedule variances and other contract problems and topics because they did not meet reporting thresholds. Without this information, management cannot understand the reasons for variances or the contractor's plan for fixing them. Figure 10 below illustrates that as of August 2011, the project was reporting a positive cumulative cost variance of $6 million and a negative cumulative schedule variance of $3 million. One reason for the positive cost variance was the ability to leverage a lower subcontractor rate than planned, which resulted in cost savings. The negative schedule variance was being driven by the mechanism subsystem falling behind schedule due to the shop being overloaded with work and the mechanism designers supporting other NASA efforts, among other things. Due to the positive cost variance, we are forecasting a positive variance at completion ranging from $1 million to $14 million.
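The reporting-threshold behavior noted above can be sketched simply: a variance analysis report (VAR) is triggered only when a control account's variance exceeds a contract-specified threshold, commonly expressed as both a percentage and a dollar floor. The threshold values and account data below are hypothetical illustrations, not the actual Lockheed Martin contract terms.

```python
PCT_THRESHOLD = 10.0     # percent of earned value (hypothetical)
DOLLAR_THRESHOLD = 0.25  # $M floor (hypothetical)

accounts = [                     # (control account, BCWP $M, variance $M)
    ("Structures", 12.0, -1.5),
    ("Avionics", 8.0, -0.3),
    ("Thermal", 5.0, -0.2),
]

# A VAR is required only when both the percentage and dollar tests trip.
flags = []
for name, bcwp, variance in accounts:
    pct = abs(variance) / bcwp * 100
    needs_var = pct >= PCT_THRESHOLD and abs(variance) >= DOLLAR_THRESHOLD
    flags.append(needs_var)
    print(f"{name}: {'VAR required' if needs_var else 'below threshold'}")
```

As the MAVEN spacecraft case shows, thresholds set too loosely can leave management without narrative explanations for months of accumulating variance.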
Laboratory for Atmospheric and Space Physics Does Not Have a Certified EVM System Compliant with ANSI/EIA Standard LASP, the contractor responsible for the Science Operations Center, Remote Sensing package, and Langmuir Probe and Waves instrument efforts, met one of the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. To date, LASP does not have a certified EVM system. Though the contractor is not required to have a certified system, we assessed how well the contractor was meeting three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored for effective internal management control of the project, and that the WBS should be the same for the cost estimate, schedule, and EVM data. We found some slight inconsistencies in the WBS numbering between the EVM report and the schedule for the Remote Sensing package, which calls into question the reliability of the overall schedule network. Moreover, the Langmuir Probe and Waves instrument had issues with consistency between the WBS and the schedule. In particular, there were varying levels of information between the two WBSs, making it hard to use the WBS as a common thread between the EVM data and the schedule. Since the WBS is a critical component of EVM, it should be the same for developing the EVM performance measurement baseline and the schedule. Without a common link between these two features, project managers cannot fully understand project cost and schedule variances. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. We found some sequencing issues with the Langmuir Probe and Waves instrument schedule.
For example, 47 percent of the remaining activities had constraints, which defeats the purpose of using a dynamic schedule. The quality of the schedule was also hampered by the presence of schedule lags on 18 percent of the remaining activities. Schedule lags must be justified because they cannot be easily monitored or included in risk assessments. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that all three components showed evidence of a time-phased budget baseline. Project Conducted an Integrated Baseline Review The project conducted an IBR in March 2011, and several areas of concern were noted. MAVEN project officials stated that many of the schedule issues we found were also discovered during the IBR and have now been corrected. EVM Surveillance Is Not Being Performed While formal surveillance is not occurring, EVM data assurance reviews are being performed by the project office, Mars Program Office, and MAVEN Standing Review Board representatives. Also, The Aerospace Corporation is working as an advisor to the Science Mission Directorate's Planetary Systems Division. Data Resulting from the EVM System Is Reliable Our review of MAVEN's Science Operations Center, Remote Sensing package, and Langmuir Probe and Waves instrument EVM data found no major issues with data reliability. However, there was a lack of variance analysis reports for these efforts. For example, the Science Operations Center had no variance analysis reports with explanations for any of the months reported.
In addition, the Remote Sensing package and Langmuir Probe and Waves instrument variance analysis reports did not provide any explanation for major performance swings from one month to another. Figure 11 below illustrates that as of August 2011, the Science Operations Center portion of the MAVEN project was reporting a positive cumulative cost variance of $0.01 million and a slightly negative cumulative schedule variance of $0.01 million. However, since the variance analysis reports provided no information regarding what is driving the positive cost and slightly negative schedule variances, we have no insight into the causes of deviations from the plan. Based on the positive cost variance thus far, we are forecasting a positive variance at completion of less than $0.5 million at contract completion. Figure 12 below illustrates that as of August 2011, the Remote Sensing package portion of the MAVEN project was reporting a positive cumulative cost variance of $0.8 million and a negative cumulative schedule variance of $0.6 million. Factors behind the positive cost variance include an accounting lag in the invoicing and payment process associated with the procurements, which results in the appearance of cost efficiency. This issue has been occurring for several months and is causing the EVM metrics to be skewed toward false positive cost variances. The variance analysis reports do not give any explanation for the negative schedule variance as of August 2011. As a result of the positive cost variance, we are forecasting a positive variance at completion ranging from $1 million to $7 million. Figure 13 below illustrates that as of August 2011, the Langmuir Probe and Waves portion of the MAVEN project was reporting a negative cumulative cost variance of $0.6 million and a negative cumulative schedule variance of $0.3 million.
The negative cost and schedule variances are due to costs for outside services and materials being more than planned, as well as additional work required to troubleshoot problems and mitigate risks. Due to these problems, we are forecasting a negative variance at completion ranging from $2 million to $3 million. According to NASA, the project is not overrunning its commitment because the EVM baseline does not include unallocated future expenses held at the project and headquarters level. Orbiting Carbon Observatory 2 NASA's Orbiting Carbon Observatory 2 (OCO-2) is designed to enable more reliable predictions of climate change and is based on the original OCO mission that failed to reach orbit in 2009. It will make precise, time-dependent global measurements of atmospheric carbon dioxide. These measurements will be combined with data from a ground-based network to provide scientists with the information needed to better understand the processes that regulate atmospheric carbon dioxide and its role in the carbon cycle. NASA expects that enhanced understanding of the carbon cycle will improve predictions of future atmospheric carbon dioxide increases and their potential impact on the climate. The OCO-2 mission consists of a dedicated spacecraft with a single instrument, flying in a near-polar, sun-synchronous orbit. The Jet Propulsion Laboratory (JPL) has overall responsibility for project management. The OCO-2 spacecraft is being built by Orbital Sciences Corporation, while the instrument is being built in-house at JPL. Orbital Sciences Corporation submits spacecraft effort EVM data monthly, and the project incorporates that data into the overall project EVM report. The project is facing a launch delay because the Taurus XL launch vehicle failed on the Glory mission and the contract was terminated. The project will be rebaselined as a result of NASA having to select a new launch vehicle.
The current $477.2 million total project cost is a preliminary amount pending the outcome of the rebaseline process. Project Using a Certified EVM System Compliant with the ANSI/EIA Standard The OCO-2 project met all three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. JPL has a certified EVM system that complies with the ANSI/EIA standard. However, the implementation of its EVM system is questionable based on our findings below. We assessed the JPL EVM data against three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the WBS should be the same for the cost estimate, schedule, and EVM data. We found that the WBS used in the project's schedule was consistent with the WBS used for the EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review found some sequencing issues in the schedule. In particular, 14 percent of tasks were missing predecessor and successor links, which are necessary for properly sequencing work so that the schedule will update in response to changes. We also found that 15 percent of the remaining activities were constrained, which also hinders the schedule from responding dynamically to changes and can portray an artificial or unrealistic view of the project plan. These sequencing issues and constraint dates within the schedule affect the reliability of the overall network and the schedule's ability to correctly calculate float values and the critical path. Project officials said the missing dependencies are mainly handoffs produced by level of effort (LOE) activities.
Officials said these activities are not necessary for valid schedule network logic, and under no circumstances do these activities drive the critical path. They also said the constrained activities are largely composed of mandated delivery dates, which JPL uses as control points to manage subsystem schedule performance prior to assembly, test, and launch operations delivery and to coordinate major meeting logistics. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. Our review found that the schedule was resource-loaded. Project Conducted an Integrated Baseline Review The project conducted an integrated baseline review in March 2011. EVM Surveillance Is Being Performed JPL has a formal surveillance plan in place for monitoring the EVM data. In particular, project officials said that each month detailed earned value data analysis is performed on each cost account and each work package so that a conclusive understanding of the performance status can be reached and communicated within the project team. In addition, during the project's monthly management reviews, both cost and schedule variances and the reasons causing them are presented. Data Resulting from the EVM System Are Reliable We reviewed contractor management reports from December 2010 to July 2011. Though we found no major data reliability issues, when we tried to map the EVM data in the lower-level spacecraft report to the JPL project-level report, we could not understand how the costs tracked from one report to the other.
In a subsequent interview, officials explained, with supporting documentation, how the lower-level Orbital Sciences budget at completion mapped to the budget at completion found in the JPL project-level EVM report. Though we appreciate the explanations provided by project officials regarding the differences between the two reports, the issue remains that without additional documentation and explanations by project officials, GAO or another independent party could not have reconciled the data. Figure 14 below illustrates that as of July 2011, the project was reporting a positive cumulative cost variance of $0.6 million and a negative schedule variance of $8 million. According to project officials, the schedule variance was being caused by spectrometer slit instability and an incompatible memory chip in the Remote Electronics Module. Due to the positive cost variance and negative schedule variance, we are forecasting a negative variance at completion ranging from $1 million to $26 million. Project officials said the project is not overrunning its commitment because the EVM baseline provided to GAO does not include unallocated future expenses held at the project and headquarters level. Spacecraft Contractor Has a Certified EVM System Compliant with ANSI/EIA Standard The spacecraft contractor, Orbital Sciences Corporation, met two of the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. At the time of our review, Orbital Sciences did not have a certified EVM system that complied with the ANSI/EIA-748 standard. In January 2012, DCMA certified the Orbital Sciences EVM system. Though the contractor now has a certified system, the implementation of that system is questionable based on our findings below. We assessed how well the contractor's EVM system was meeting three ANSI/EIA guidelines.
These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the WBS should be the same for the cost estimate, schedule, and EVM data. We found that the spacecraft's WBS in the schedule was not consistent with the WBS used for the EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. We found some sequencing issues in the spacecraft contractor schedule. We found that 27 percent of the remaining activities in the spacecraft's schedule were missing predecessor and successor links, which are necessary for properly sequencing work so that the schedule will update in response to changes. In addition, 24 percent of the remaining activities had date constraints, which also hinder the schedule's ability to respond dynamically to status updates, resulting in an artificial or unrealistic view of the project plan. These sequencing issues and constraint dates within the schedule affect the reliability of the overall network and the schedule's ability to correctly calculate float values and the critical path. Spacecraft Contractor Conducted an Integrated Baseline Review Orbital Sciences conducted an integrated baseline review of the spacecraft effort in March 2011. EVM Surveillance Is Not Being Performed While formal surveillance is not occurring, project officials stated that EVM performance data is reviewed during the monthly status reviews. Data Resulting from the EVM System Are Reliable We reviewed contract performance reports from April 2011 to July 2011. Because we had only 4 months of data to review, we were unable to forecast a variance at completion.
As of July 2011, the project was reporting a positive cumulative cost variance of $0.4 million and a negative cumulative schedule variance of $2 million, as seen in the graph below. The positive cost variance was being driven by lower than expected burden rates and less staff supporting project management, flight assurance, and systems engineering efforts.

Radiation Belt Storm Probes

The Radiation Belt Storm Probes (RBSP) mission will explore the Sun's influence on the Earth and near-Earth space by studying the planet's radiation belts at various scales of space and time. This insight into the physical dynamics of the Earth's radiation belts will provide scientists with data to make predictions of changes in this little understood region of space. Understanding the radiation belt environment has practical applications in the areas of spacecraft system design, mission planning, spacecraft operations, and astronaut safety. The RBSP project built two spacecraft that will be used to measure the particles, magnetic and electric fields, and waves that reside in the Van Allen radiation belts. RBSP launched on August 30, 2012, on a two-year prime mission. The RBSP spacecraft and ground system are being designed, developed, and tested by the Johns Hopkins University's Applied Physics Laboratory.

Project Does Not Have a Certified EVM System Compliant with the ANSI/EIA Standard

At the project level, the Applied Physics Laboratory fully met only one, and partially met another, of the three fundamental practices necessary for a reliable EVM system. RBSP is the first full NASA mission to use EVM at the Applied Physics Laboratory. According to the RBSP project manager, the RBSP project implemented a limited earned value management system in Phase B as a risk mitigation activity for Phase C/D.
This early implementation was a risk mitigation activity, which allowed the project's control account manager, instrument provider, and project office to better understand the reporting process and the use of the EVM system. The use of EVM during this phase was also intended to allow for timely, accurate, and useful EVM reporting during the formal reporting in later phases of the project. Since then, the Applied Physics Laboratory has made good progress and is in the process of meeting the intent of compliance with the 32 ANSI/EIA-748 guidelines. We assessed how well the Applied Physics Laboratory was meeting the three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the same WBS be used for the cost estimate, schedule, and EVM. In our review, we found that the WBS used in the schedule was not consistent with the WBS used for the EVM data. Project officials said that the project utilizes the Applied Physics Laboratory WBS for all earned value management activity, which is then mapped to the NASA WBS for reporting to the sponsor. Also, the Applied Physics Laboratory WBS is uniformly utilized and consistent across all control accounts in the EVM system. This internal WBS ties into both the contract performance report and the integrated master schedule utilized on the project. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review found some sequencing issues in the schedule. For example, 23 percent of the remaining activities were missing predecessor and successor links, which are necessary for properly sequencing work so that the schedule will update in response to changes.
We also found that 29 percent of the remaining activities had date constraints, which also hinder the schedule's ability to respond dynamically to status updates, resulting in an artificial or unrealistic view of the project plan. When schedules are not sequenced properly, float values and the calculated critical path will not be valid. Project officials explained that the sequencing issues were the result of the project consciously including constrained instrument deliveries and deliverables, level of effort activities, and material and subcontractor expenditures in the integrated master schedule. They added that, despite these issues, the RBSP integrated master schedule was able to monitor the critical and near critical paths of all spacecraft systems and subsystems. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system. Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. Our review found that the schedule was resource loaded.

Project Conducted an Integrated Baseline Review

NASA conducted an integrated baseline review in August 2009. Of the seven IBR objectives identified, two were partially met and five were met. In December 2010, the IBR deputy chief notified the Applied Physics Laboratory that all areas of concern had been closed. As a result, the overall consensus of the government review team was that the integrated baseline review was successful.

EVM Surveillance Is Not Being Performed

While formal surveillance is not being performed, the project office reviews the EVM data on a monthly basis.
Although they do not perform formal surveillance, an Applied Physics Laboratory official said that they performed additional monthly independent reviews of the RBSP EVM system throughout Phase C/D.

Data Resulting from the EVM System Are Reliable

We reviewed contract performance reports from August 2010 to April 2011 and July 2011 to August 2011. We were not provided reports for May and June 2011 because of a re-plan, determined necessary by the Living with a Star Program Office and NASA headquarters in May 2011 in response to changes in the launch manifest. Figure 16 below illustrates that as of August 2011, the project was reporting a negative cumulative cost variance of approximately $32 million and a negative cumulative schedule variance of $3 million. The negative cumulative cost variance was caused by sustained effort on the radio frequency communications, as well as by work on the avionics equipment and the ground system software launch and post-launch components being behind schedule. The schedule variance is minimal since the project is nearing completion. Due to the negative cost variance, we are forecasting a negative variance at completion ranging from $40 million to $41 million. Project officials noted the forecasted variance at completion is below the revised contract value of $351.1 million, although above the project's estimated budget at completion. In addition, officials said this is a project-level variance and does not account for the application of unallocated future expenses to fund the movement of the launch date and to keep the project on track. When the project launched in August 2012, its estimate at completion was below the $351.1 million budget at completion.
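Forecast ranges like the variance at completion above are typically produced with index-based estimate-at-completion (EAC) formulas, which project the cost of remaining work at the demonstrated efficiency to date. A sketch of two common variants (CPI-based, and the CPI×SPI variant that also penalizes schedule inefficiency), using illustrative numbers rather than any project's actual data:

```python
def eac_range(bac, bcws, bcwp, acwp):
    """Return two common index-based estimates at completion ($ millions).

    The CPI-based EAC assumes remaining work is performed at the
    cumulative cost efficiency; the CPI*SPI variant additionally
    discounts for schedule inefficiency. Together the two bracket
    a plausible range for the final cost.
    """
    cpi = bcwp / acwp            # cost performance index
    spi = bcwp / bcws            # schedule performance index
    eac_cpi = acwp + (bac - bcwp) / cpi
    eac_cpi_spi = acwp + (bac - bcwp) / (cpi * spi)
    return eac_cpi, eac_cpi_spi


# Illustrative values in $ millions (not the project's actual figures)
low, high = eac_range(bac=350.0, bcws=300.0, bcwp=297.0, acwp=329.0)
# Variance at completion = BAC - EAC; negative values forecast an overrun.
```

Because both indices here are below 1.0, both estimates exceed the $350 million budget at completion, yielding a negative variance-at-completion range, which mirrors how a single month's performance data can bracket a forecast overrun.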
Stratospheric Observatory for Infrared Astronomy

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a joint project between NASA and the Deutsches Zentrum für Luft- und Raumfahrt (DLR), the German space agency, to install a 2.5 meter telescope, as well as other scientific instruments capable of infrared and sub-millimeter observations, in a specially modified Boeing 747SP aircraft. This airborne observatory is designed to provide routine access to the visual, infrared, far-infrared, and sub-millimeter parts of the electromagnetic spectrum. Its mission objectives include studying many different kinds of astronomical objects and phenomena, including star birth and death; the formation of new solar systems; planets, comets, and asteroids in our solar system; and black holes at the centers of galaxies. Currently, five U.S.- and two German-funded interchangeable instruments for the observatory are being developed to allow a range of scientific measurements to be taken by SOFIA. The SOFIA project office is using EVM at the project level as well as collecting EVM data from L-3 Communications Integrated Systems L.P. (L-3), which is responsible for the airborne system platform effort. The EVM data provided to the project for the airborne observatory platform effort are a subset of the overall SOFIA project-level EVM report. The German component of the SOFIA project does not generate earned value data and is not part of the project's budget baseline. The University Space Research Association (USRA) has a support contract to help the Ames Research Center manage SOFIA's science and mission operations in cooperation with the Deutsches SOFIA Institut. The USRA contract was established before NASA began requiring earned value management compliance; as a result, USRA is not required to generate earned value data. However, all of these components are subsets of the overall SOFIA project.
Project Does Not Have a Certified EVM System Compliant with the ANSI/EIA Standard

The SOFIA project did not meet any of the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. Project officials said in-house projects are not required to have certified EVM systems. Though the SOFIA project does not have a certified system, we assessed how well the project was meeting three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the same WBS be used for the cost estimate, schedule, and EVM. We found that the WBS used in the SOFIA schedule was consistent with the WBS used for the EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. Our review found some sequencing issues in the schedule. For example, 16 percent of the remaining activities had open-ended logic links, and 1 percent had date constraints, among other things. SOFIA project officials explained that the majority of the missing links were due to using hammock activities to resource load the schedule, which resulted in missing successors. Of all the activities missing successors, only 11 were not hammock activities, a very small share. In addition, SOFIA project officials said that constraints in the schedule were justified since they represented external deliveries or fiscal year funding availability. They added that the one hard constraint, Must Start On, was used to represent a fixed date for an international visit. While the majority of these explanations seem reasonable, when schedules are not sequenced properly, the slack values and the calculated critical path will not be valid.
Project Did Not Conduct an Integrated Baseline Review

An IBR was not conducted at the project level. Project officials said SOFIA, an in-house project, did not begin collecting EVM data until very late in the development phase, and because of this an IBR was not conducted. Officials added that although SOFIA did not conduct a project-level IBR, the EVM baseline was established concurrently with an Agency-approved re-plan and joint confidence level analysis in the 2009/2010 time frame, and was reviewed by a Standing Review Board. Though the Standing Review Board review satisfied some of the objectives of an IBR, including confirmation of the schedule and budget baselines (e.g., schedule review, risk review, key milestones identified), it did not address the full IBR checklist (e.g., work authorizations, control account plans, EVM system description).

EVM Surveillance Is Not Being Performed

While formal surveillance is not occurring at the project level, EVM data assurance reviews are being performed monthly and quarterly by SOFIA project office representatives.

Data Resulting from the EVM System Are Somewhat Reliable

We reviewed contract performance reports from August 2010 to August 2011. We found various issues that bring into question the reliability of the SOFIA project EVM data. For example, we found negative values due to an "over-reporting" of progress in previous months that was caused by a problem with translating percent complete progress from the University Space Research Association schedule to the SOFIA integrated master schedule, as well as by corrections/modifications in costs posted for support service contracts. In addition, we found other anomalies in the data that SOFIA project officials explained were most likely due to mischarges by employees, delayed cost postings, or employees continuing to use charge codes inappropriately.
While the cost impact of these problems was not large for any one WBS element, each of these issues causes us to question the reliability of the data. According to project officials, the variances that caused these anomalies did not meet the reporting threshold for a variance analysis report. A variance analysis report provides a detailed, narrative explanation of significant cost and schedule variances and other contract problems and topics. Without this information, management cannot understand the reasons for variances and the contractor's plan for fixing them. When variance analysis reports are not produced, the EVM data will not be meaningful or useful as a management tool. Figure 17 below illustrates that as of August 2011, the SOFIA project was reporting a positive cumulative cost variance of $3.5 million and a negative cumulative schedule variance of $6.6 million. Due to the cost and schedule variances, we are forecasting a negative variance at completion ranging from $1.4 million to $76 million.

Airborne System Platform Contractor Has a Certified EVM System Compliant with the ANSI/EIA Standard

The contractor met all three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. In 2002, DCMA certified that the contractor had an EVM system compliant with the ANSI/EIA standard.

Project Conducted an Integrated Baseline Review of Airborne System Platform Effort

The project conducted an IBR in November 2007.

It Is Unclear Whether EVM Surveillance Is Being Performed

It is unclear if EVM surveillance is being performed on the L-3 EVM data.

Data Resulting from the EVM System Are Somewhat Reliable

We reviewed contract performance reports from August 2010 to July 2011. We found various data anomalies that call into question the reliability of the contractor's EVM data for the contract performance reports we reviewed. For example, we found negative values and actual cost of work performed being reported without work being scheduled or performed.
Project officials stated that these anomalies were caused by performance being taken in later months and also because these issues did not trip the reporting threshold. Figure 18 below illustrates that as of July 2011, the airborne system platform contractor was reporting a positive cumulative cost variance of $3.2 million and a negative schedule variance of $0.6 million. The favorable cost variance is due to efficiencies in project oversight and engineering. The unfavorable cumulative schedule variance is being driven by delays in government-furnished equipment expected from NASA, which are causing a backlog of tasks. Due to the cost and schedule variances, we are forecasting a positive variance at completion ranging from $3 million to $4 million.

Tracking and Data Relay Satellite Replenishment

The Tracking and Data Relay Satellite (TDRS) System consists of in-orbit communication satellites stationed at geosynchronous altitude coupled with two ground stations located in New Mexico and Guam. The satellite network and ground stations provide mission services for near-Earth user satellites and orbiting vehicles. TDRS-K and TDRS-L are the 11th and 12th satellites, respectively, to be built for the TDRS system. They will contribute to the existing network by providing continuous high-bandwidth digital voice, video, and mission payload data, as well as health and safety data relay services, to Earth-orbiting spacecraft such as the International Space Station and the Hubble Space Telescope. NASA is planning to launch TDRS-K in December 2012, followed by the TDRS-L launch in February 2014. NASA is collecting EVM data from both the spacecraft and sustainment efforts. In December 2007, NASA awarded a fixed price incentive contract to Boeing Satellite Systems, Inc. (Boeing) to design, develop, fabricate, integrate, test, ship, provide launch support, conduct on-orbit checkout operations, and provide sustaining engineering support for two spacecraft, TDRS-K and TDRS-L.
Spacecraft Contractor Has a Certified EVM System Compliant with the ANSI/EIA Standard

The spacecraft contractor, Boeing, met the three fundamental ANSI/EIA-748 practices necessary for a reliable EVM system. Boeing has a certified EVM system that complies with the ANSI/EIA EVM standard. Though the contractor has a certified system, the implementation of that system is questionable based on our findings below. We also assessed the spacecraft contractor's EVM data against three ANSI/EIA guidelines. These guidelines state that the authorized work elements for the project should be defined, typically using a WBS that has been tailored to the project, and that the same WBS be used for the cost estimate, schedule, and EVM. We found that the WBS used in the spacecraft integrated master schedule was consistent with the WBS used for the EVM data. The ANSI/EIA guidelines also state that projects should have a schedule that describes the sequence of work by listing activities in the order in which they are to be carried out and identifying significant task interdependencies required to meet project requirements. We found some sequencing issues in the contractor's schedule. For example, 13 percent of the remaining activities were constrained. When schedules are not sequenced properly, float values and the calculated critical path will not be valid. Project officials acknowledged the constraints and said they are a result of having to adjust support activities due to spacecraft integration and test delays, as well as alignments to the current/actual manifest dates, which differ from the Boeing contractual and launch readiness dates. In addition, critical path metrics are generated and analyzed monthly to track Boeing's performance against the critical path activities. Finally, the ANSI/EIA guidelines state that a project should establish and maintain a time-phased budget baseline to track cost and schedule variances in an EVM system.
Though resource loading the schedule is not required to meet the ANSI/EIA guidelines, it is a best practice, and therefore resources should be accounted for in the schedule in order to develop this baseline, according to the GAO cost guide. We found that the schedule was not resource loaded. Project officials said the integrated master schedule is resource loaded, but not inside the Microsoft Project schedule, because Microsoft Project does not directly interface with the Boeing financial/EVM system. They stated that the integrated master schedule is produced using Microsoft Project and imported into a planning software tool. After the resource loading effort is performed, the planning data are transferred into the EVM system. The tool integrates the Microsoft Project schedule with the Boeing financial system and the resource allocations for each task. When adjustments are required, Boeing again utilizes the planning tool.

Project Conducted an Integrated Baseline Review

An integrated baseline review was conducted in 2008, during which 211 issues were raised. All of these issues have since been resolved and closed.

EVM Surveillance Is Being Performed

DCMA prepares a monthly EVM analysis report and performs continuous surveillance of Boeing's EVM implementation by sampling various control account managers for interviews about the process.

Data Resulting from the EVM System Are Somewhat Reliable

We examined contract performance reports from August 2010 through August 2011. Figure 19 illustrates that as of August 2011, the project was reporting a negative cumulative cost variance of approximately $131 million and a negative cumulative schedule variance of $7 million. The negative cumulative cost variance was being driven by higher staffing levels to support integration and the Preliminary Design Review and Critical Design Review, as well as by an incorrect assessment of project requirements and the inability to use heritage specifications.
Labor costs were also higher than expected due to part failures and the late completion of component qualifications. Extended test activities also contributed to the cumulative negative cost variance. Finally, more resources than expected were needed to complete board and slice designs, generate drawings, and assemble and test components because of the complexity of the design. Due to the negative cost and schedule variances, we are forecasting a variance at completion ranging from $152 million to $185 million. Project officials said GAO's independent variance at completion gives the impression that NASA may request additional funding to complete TDRS-K and TDRS-L. Because this is a fixed price, incentive fee contract, NASA officials said the agency is obligated only to pay up to the price ceiling of the contract.

TDRS Sustainment Effort EVM Summary

Boeing is the contractor for both the spacecraft and sustainment efforts. As noted above, Boeing met all three fundamental practices for a reliable EVM system. In addition, the 2008 integrated baseline review covered both the spacecraft and sustainment efforts, and the formal surveillance performed by Boeing and the Defense Contract Management Agency applies to the sustainment effort. We reviewed contract performance reports from April 2011 to August 2011. Because we had only 5 months of data, we were not able to forecast an independent estimate at completion. Figure 20 below illustrates that as of August 2011, the project was reporting a positive cumulative cost variance of $0.08 million. Because there were no variance analysis reports accompanying the sustaining effort, we were unable to determine what was causing the positive cost variance. Project officials stated that the performance measurement baseline for this effort was almost entirely level of effort, so minimal variances would be occurring.
They added that during the August 2011 time period, the contract reflected an April 2012 launch date even though the launch was being delayed. Consequently, Boeing's reports were reflecting work scheduled to occur in support of the earlier launch date when in fact very little effort was being performed. Because minimal costs were incurred, the result was a positive cumulative cost variance.

Appendix IV: Comments from the National Aeronautics and Space Administration

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Shelby S. Oakley and Karen Richey (Assistant Directors); Greg Campbell; Jennifer K. Echard; Tisha D. Derricotte; Laura Greifner; Kristine R. Hassinger; Ben Jaskiewicz; William Laing; Richard Lee; Eric Lofgren; Kenneth E. Patton; Jose A. Ramos; Carrie W. Rogers; Stacey L. Steele; Roxanna T. Sun; and Umesh Thakkar made key contributions to this report.
NASA historically has experienced cost growth and schedule slippage in its portfolio of major projects and has taken actions to improve in this area, including adopting the use of EVM. EVM is a tool developed to help project managers monitor risks. GAO was asked to examine (1) the extent to which NASA is using EVM to manage its major space flight acquisitions, (2) the challenges that NASA has faced in implementing an effective EVM system, and (3) NASA's efforts to improve its use of EVM. To address these questions, GAO obtained contractor and project EVM data and used established formulas and tools to analyze the data and assess NASA's implementation of EVM on 10 major spaceflight projects; interviewed relevant NASA headquarters, center, and mission directorate officials on their views on EVM; and reviewed prior reports on EVM and organizational transformations. GAO compared NASA policies and guidance on EVM to best practices contained in GAO's cost estimating best practices guide. The National Aeronautics and Space Administration's (NASA) 10 major spaceflight projects discussed in this report have not yet fully implemented earned value management (EVM). As a result, NASA is not taking full advantage of opportunities to use an important tool that could help reduce acquisition risk. GAO assessed the 10 projects against three fundamental EVM practices that, according to GAO's best practices cost guide, are necessary for maintaining a reliable EVM system. GAO found shortfalls in two of the three fundamental practices. Specifically, GAO found that more than half of the projects did not use an EVM system that was fully certified as compliant with the industry EVM standard, and only 4 of the 10 projects established formal surveillance reviews, which ensure that key data produced by the system are reliable.
The remaining 6 projects provided evidence of monthly EVM data reviews; however, the rigor of both the formal and informal surveillance reviews is questionable given the numerous data anomalies GAO found. GAO also found that 3 projects had reliable EVM data, while 7 had only partially reliable data. For EVM data to be considered reliable per best practices, the data must be complete and accurate, with all anomalies explained. NASA EVM focal points, headquarters officials, project representatives, and program executives cited cultural and other challenges as impediments to the effective use of EVM at the agency. Traditionally, NASA's culture has focused on managing science and engineering challenges, not on monitoring the kind of cost and schedule data an effective EVM system produces. As a result, several representatives said this information traditionally has not been valued across the agency. This sentiment was also echoed in a NASA study of EVM implementation. Also cited as a challenge to the effective use of EVM was NASA's insufficient number of staff with the skills to analyze EVM data. Without a sufficient number of staff with such skills, NASA's ability to conduct a sound analysis of the EVM data is limited. However, NASA has not conducted an EVM skills gap analysis to determine the extent of its workforce needs. NASA has undertaken several initiatives aimed at improving the agency's use of EVM. For example, NASA strengthened its spaceflight management policy to reflect the industry EVM standard and has developed the processes and tools for projects to meet these standards through its new EVM system. While these are positive steps, the revised policy contains only the minimum requirements for earned value management. For example, it lacks a requirement for rigorous surveillance of how projects are implementing EVM and also does not require use of the agency's newly developed EVM system to help meet the new requirements.
NASA has attempted to address EVM shortcomings through policy changes over the years, but these efforts have failed to adequately address the cultural resistance to implementing EVM.
Background

Responding to corporate failures and fraud that resulted in substantial financial losses to institutional and individual investors, Congress passed the Sarbanes-Oxley Act in 2002. As shown in table 1, the act contains provisions affecting the corporate governance, auditing, and financial reporting of public companies, including provisions intended to deter and punish corporate accounting fraud and corruption. The Sarbanes-Oxley Act generally applies to those companies required to file reports with SEC under the Securities Exchange Act of 1934 and does not differentiate between small and large businesses. The definition of small varies among agencies, but SEC generally calls companies that had less than $75 million in public float non-accelerated filers. Accelerated filers are required by SEC regulations to file their annual and quarterly reports to SEC on an accelerated basis compared to non-accelerated filers. As of 2005, SEC estimated that about 60 percent of all registered public companies, or 5,971 companies, were non-accelerated filers. SEC recently further differentiated smaller companies from what it calls "well-known seasoned issuers," those largest companies ($700 million or more in public float) with the most active market following, institutional ownership, and analyst coverage. Title I of the act establishes PCAOB as a private-sector nonprofit organization to oversee the audits of public companies that are subject to the securities laws. PCAOB is subject to SEC oversight. The act gives PCAOB four primary areas of responsibility: registration of accounting firms that audit public companies in the U.S. securities markets; inspections of registered accounting firms; establishment of auditing, quality control, and ethics standards for registered accounting firms; and investigation and discipline of registered accounting firms for violations of law or professional standards. Title II of the act addresses auditor independence.
It prohibits the registered external auditor of a public company from providing certain nonaudit services to that public company audit client. Title II also specifies communication that is required between auditors and the public company’s audit committee (or board of directors) and requires periodic rotation of the audit partners managing a public company’s audits. Titles III and IV of the act focus on corporate responsibility and enhanced financial disclosures. Title III addresses listed company audit committees, including responsibilities and independence, and corporate responsibilities for financial reports, including certifications by corporate officers in annual and quarterly reports, among other provisions. Title IV addresses disclosures in financial reporting and transactions involving management and principal stockholders and other provisions such as internal control over financial reporting. More specifically, section 404 of the act establishes requirements for companies to publicly report on management’s responsibility for establishing and maintaining an adequate internal control structure, including controls over financial reporting and the results of management’s assessment of the effectiveness of internal control over financial reporting. Section 404 also requires the firms that serve as external auditors for public companies to attest to the assessment made by the companies’ management, and report on the results of their attestation and whether they agree with management’s assessment of the company’s internal control over financial reporting. SEC and PCAOB have issued regulations, standards, and guidance to implement the Sarbanes-Oxley Act. 
For instance, both SEC regulations and PCAOB’s Auditing Standard Number 2, “An Audit of Internal Control Over Financial Reporting Performed in Conjunction with an Audit of Financial Statements” state that management is required to base its assessment of the effectiveness of the company’s internal control over financial reporting on a suitable, recognized control framework established by a body of experts that followed due process procedures, including the broad distribution of the framework for public comment. Both the SEC guidance and PCAOB’s auditing standard cite the COSO principles as providing a suitable framework for purposes of section 404 compliance. In 1992, COSO issued its “Internal Control—Integrated Framework” (the COSO Framework) to help businesses and other entities assess and enhance their internal control. Since that time, the COSO framework has been recognized by regulatory standards setters and others as a comprehensive framework for evaluating internal control, including internal control over financial reporting. The COSO framework includes a common definition of internal control and criteria against which companies could evaluate the effectiveness of their internal control systems. The framework consists of five interrelated components: control environment, risk assessment, control activities, information and communication, and monitoring. While SEC and PCAOB do not mandate the use of any particular framework, PCAOB states that the framework used by a company should have elements that encompass the five COSO components on internal control. Internal control generally serves as a first line of defense in safeguarding assets and preventing and detecting errors and fraud. 
Internal control is defined as a process, effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of the following objectives: (1) effectiveness and efficiency of operations, (2) reliability of financial reporting, and (3) compliance with laws and regulations. Internal control over financial reporting is further defined in the SEC regulations implementing section 404. These regulations define internal control over financial reporting as providing reasonable assurance regarding the reliability of financial reporting and the preparation of financial statements, including those policies and procedures that (1) pertain to the maintenance of records that, in reasonable detail, accurately and fairly reflect the transactions and dispositions of the assets of the company; (2) provide reasonable assurance that transactions are recorded as necessary to permit preparation of financial statements in conformity with generally accepted accounting principles, and that receipts and expenditures of the company are being made only in accordance with authorizations of management and directors of the company; and (3) provide reasonable assurance regarding prevention or timely detection of unauthorized acquisition, use, or disposition of the company's assets that could have a material effect on the financial statements. PCAOB's Auditing Standard No. 2 reiterates this definition of internal control over financial reporting. Internal control is not a new requirement for public companies. In December 1977, as a result of corporate falsification of records and improper accounting, Congress enacted the Foreign Corrupt Practices Act (FCPA). The FCPA's internal accounting control requirements were intended to prevent fraudulent financial reporting, among other things.
The FCPA required companies to (1) make and keep books, records, and accounts that in reasonable detail accurately and fairly reflect the transactions and dispositions of assets and (2) develop and maintain a system of internal accounting controls sufficient to provide reasonable assurance over the recording and execution of transactions, the preparation of financial statements in accordance with applicable standards, and the maintenance of accountability for assets.

Smaller Public Companies Have Incurred Disproportionately Higher Audit Costs in Implementing the Act, but Impact on Access to Capital Remains Unclear

Based on our analysis, costs associated with implementing the Sarbanes-Oxley Act—particularly those costs associated with the internal control provisions in section 404—were disproportionately higher (as a percentage of revenues) for smaller public companies. In complying with the act, smaller companies noted that they incurred higher audit fees and other costs, such as hiring more staff or paying for outside consultants. Further, resource and expertise limitations that characterize many smaller companies, as well as their general lack of familiarity or experience with formal internal control frameworks, contributed to the challenges and increased costs they faced during section 404 implementation. Along with other market factors, the act may have encouraged a relatively small number of smaller public companies to go private, foregoing sources of funding that were potentially more diversified and may have been less expensive for many of these companies. However, the ultimate impact of the Sarbanes-Oxley Act on smaller public companies' access to capital remains unclear because of the limited time that the act has been in effect and the large number of smaller public companies that have not yet fully implemented the act's internal control provisions.
Smaller Public Companies Incurred Disproportionately Higher Audit Costs

Our analysis indicates that audit fees have increased considerably since the passage of the act, particularly for those smaller public companies that have fully implemented the act. Both smaller and larger public companies have identified the internal control provisions in section 404 as the most costly to implement. However, audit fees may also have increased because of the current environment surrounding public company audits including, among other things, the new regulatory oversight of audit firms, new requirements related to audit documentation, and legal risk. Figure 1 contains data reported by public companies on audit fees paid to external auditors before and after the section 404 provisions became effective for accelerated filers in 2004. Based on these data, we found that (1) audit fees were already disproportionately greater as a percentage of revenues for smaller public companies in 2003 and (2) the disparity between smaller and larger public companies' audit fees as a percentage of revenues increased for those companies that implemented section 404 in 2004. For example, of the companies that reported implementing section 404, public companies with market capitalization of $75 million or less paid a median $1.14 in audit fees for every $100 of revenues, compared to $0.13 in audit fees for public companies with market capitalization greater than $1 billion. Among public companies with market capitalization of $75 million or less (2,263 in total), the 66 companies that implemented section 404 paid a median $0.35 more per $100 in revenues compared to those that had not implemented section 404. However, using publicly reported audit fees as an indicator of the act's compliance costs has some limitations. First, the audit fees reported by companies that complied with section 404 include fees for both the internal control audit and the financial statement audit.
As a result, we could not isolate the audit fees associated with section 404. Second, the fees paid to the external auditor do not include other costs companies incurred to comply with section 404 requirements, such as testing and documenting internal controls and fees paid to external consultants. Unlike audit fees, these costs are not separately reported and, therefore, are difficult to analyze and measure. We also noted that, while the spread between what the smallest and largest public companies that implemented section 404 paid as a percentage of revenue increased between 2003 and 2004, the relative disproportionality between the audit fees paid by smaller public companies and the largest public companies, as a percentage of revenue, remained roughly the same between 2003 and 2004.

Smaller Public Companies Incurred Other Costs in Complying with the Act

According to executives of smaller public companies that we contacted, smaller companies incurred substantial costs in addition to the fees they paid to their external auditors to comply with section 404 and other provisions of the act. For example, 128 of the 158 smaller public companies that responded to our survey (81 percent of respondents) had hired a separate accounting firm or consultant to assist them in meeting section 404 requirements. Services provided included assistance with developing methodologies to comply with section 404, documenting and testing internal controls, and helping management assess the effectiveness of internal controls and remediate identified internal control weaknesses. These smaller companies reported paying fees to external consultants for the period leading up to their first section 404 report that ranged from $3,000 to more than $1.4 million. Many also reported costs related to training and hiring of new or temporary staff to implement the act's requirements.
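The audit-fees-per-$100-of-revenues metric used in our comparisons can be sketched as a simple calculation. The figures below are hypothetical illustrations of the metric, not survey data from this report:

```python
from statistics import median

def fees_per_100_revenue(audit_fees, revenues):
    """Audit fees paid per $100 of revenue for each company."""
    return [fee / revenue * 100 for fee, revenue in zip(audit_fees, revenues)]

# Hypothetical audit fees and revenues (in dollars) for four small companies.
fees = [220_000, 150_000, 90_000, 310_000]
revenues = [20_000_000, 12_000_000, 10_000_000, 25_000_000]

ratios = fees_per_100_revenue(fees, revenues)
print([round(r, 2) for r in ratios])  # -> [1.1, 1.25, 0.9, 1.24]
print(round(median(ratios), 2))       # median fee per $100 of revenue -> 1.17
```

Reporting the median rather than the mean, as in figure 1, limits the influence of a few companies with unusually high fees.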
Additionally, some of the smaller companies that responded to our survey reported that their CFOs and accounting staff spent as much as 90 percent of their time, for the period leading up to their first section 404 report, on Sarbanes-Oxley Act compliance-related issues. Finally, many of the smaller public companies incurred significant opportunity costs in complying with the act. For example, nearly half (47 percent) of the companies that responded to our survey reported deferring or canceling operational improvements, and more than one-third (39 percent) indicated that they deferred or canceled information technology investments. While most companies, including the majority of the smaller public companies that responded to our survey and that we interviewed, cited section 404 as the most difficult provision to implement, smaller public companies reported challenges in complying with other Sarbanes-Oxley Act provisions as well. Nearly 69 percent of the smaller public companies that responded to our survey said that the act's auditor independence requirements had decreased the amount of advice that they received from their external auditor on accounting- and tax-related matters. About half the companies that responded to our survey indicated that they incurred additional expenses by hiring outside counsel for assistance in complying with various requirements of the act. Examples mentioned included legal assistance with drafting charters for board committees, drafting a code of ethics, establishing whistleblower protections, and reviewing CEO and CFO certification requirements. About 13 percent of the smaller public companies reported incurring costs to appoint a financial expert to serve on the audit committee, and about 6 percent reported incurring costs to appoint other independent members to serve on the audit committee.
While these types of costs were consistent with those reported for larger companies, the impact on smaller public companies was likely greater given their more limited revenues and resources.

Smaller Companies Have Different Characteristics Than Larger Companies, Some of Which Contributed to Higher Implementation Costs

While public companies—both large and small—have been required to establish and maintain internal accounting controls since the Foreign Corrupt Practices Act of 1977, most public companies and their external auditors generally had limited practical experience in implementing and using a structured framework for internal control over financial reporting as envisioned by the implementing regulations for section 404. Our survey of smaller public companies and our discussions with external auditors indicated that the internal control framework—that is, the COSO framework—referred to in SEC's regulations and PCAOB's standards implementing section 404 was not widely used by public companies, especially smaller companies, prior to the Sarbanes-Oxley Act. Many companies documented their internal controls for the first time as part of their first-year implementation efforts to comply with section 404. As a result, many companies likely underestimated the time and resources necessary to comply with section 404, partly because of their lack of experience or familiarity with the framework. These challenges were compounded in companies that needed to make significant improvements in their internal control systems to make up for deferred maintenance of those systems. While this was largely true for both larger and smaller companies, regulators (SEC and PCAOB), public accounting firms, and others have indicated that smaller public companies often face particular challenges in implementing effective internal control over financial reporting.
Resource limitations make it more difficult for smaller public companies to achieve economies of scale, segregate duties and responsibilities, and hire qualified accounting personnel to prepare and report financial information. Smaller companies are inherently less able to take advantage of economies of scale because they face higher fixed per-unit costs than larger companies with more resources and employees. Implementing the functions required to segregate transaction duties absorbs a larger percentage of a smaller company's revenues or assets than of a larger company's. About 60 percent of the smaller public companies that responded to our survey said that it was difficult to implement effective segregation of duties. Several executives told us that it was difficult to segregate duties due to limited resources. According to COSO's draft guidance for smaller public companies, smaller companies can develop and implement compensating controls when resource constraints compromise the ability to segregate duties. The American Institute of Certified Public Accountants noted that smaller public companies often do not have the internal audit functions referred to in COSO's internal control framework guidance. Other executives commented that it was difficult to achieve effective internal control over financial reporting because they lacked expertise within their internal accounting staff. For example, according to an executive from a company that reported a material weakness in its section 404 report, the financial accounting standards for stock options were too complex for the company's staff, and it was easier to have the external auditor fix the mistakes and cite the company for a material weakness in internal control over financial reporting.
Two other executives told us that their auditors cited their companies with material weaknesses in internal control over financial reporting for not having appropriate internal accounting staff; to remediate this weakness, the companies had to hire additional staff. According to COSO, however, some of the unique characteristics of smaller companies create opportunities to more efficiently achieve effective internal control over financial reporting and more efficiently evaluate internal control, which can facilitate compliance with section 404. These opportunities can result from the more centralized management oversight of the business, and the greater exposure and transparency with the senior levels of the company, that often exist in a smaller company. For instance, management's hands-on approach in smaller companies can create opportunities for less formal and less expensive communications and control procedures without decreasing their quality. To the extent that smaller companies have less complex product lines and processes, and/or centralized geographic concentrations in operations, the process of achieving and evaluating effective internal control over financial reporting could be simplified. According to SEC, another characteristic of smaller public companies is that they tend to be much more closely held than larger public companies; insiders such as founders, directors, and executive officers hold a high percentage of shares in the companies. Further, CFOs of smaller public companies frequently play a more integrated operational role than their larger company counterparts. According to a recommendation by participants at the September 2005 Government-Business Forum on Small Business Capital Formation hosted by SEC, these types of shareholders are classic insiders who do not need significant SEC protection.
According to SEC’s Office of Economic Analysis, among public companies with a market capitalization of $125 million or less, insiders own on average approximately 30 percent of the company’s shares. Although the “insider” shareholders owners may not have the same need for significant investor SEC protection as investors in broadly held companies, minority shareholders who are not insiders may have a need for such protection. Complexity, Scope, and Timing of PCAOB Guidance also Appeared to Influence Cost of Section 404 Implementation Accounting firms and public companies also have noted that the scope, complexity, and timing of PCAOB’s Auditing Standard No. 2 contributed to the challenges and higher costs in the first year of implementation of section 404. PCAOB’s Auditing Standard No. 2 establishes new audit requirements and governs both the auditor’s assessment of controls and its attestation to management’s report. PCAOB first issued an exposure draft of the standard for comment by interested parties on October 7, 2003. The Board received 194 comment letters from a variety of interested parties, including auditors, investors, internal auditors, public companies, regulators, and others. Due to the time needed to draft the standard, evaluate the comment letters, and finalize the standard, PCAOB did not issue the final standard until March 2004—more than 8 months after SEC issued its final regulations on section 404 and part way into the initial year of implementation for accelerated filers. SEC, which under the act is responsible for approving standards issued by PCAOB, did not approve Auditing Standard No. 2 until June 17, 2004. As a result of both timing and unfamiliarity with PCAOB’s Auditing Standard No. 2, auditors were not prepared to integrate the internal control over financial reporting attestation and financial audits in the first year of implementation as envisioned by Auditing Standard No. 2. 
Furthermore, according to PCAOB, auditors were not always consistent in their interpretation and application of Auditing Standard No. 2. In PCAOB's report on the initial implementation of Auditing Standard No. 2, the Board found that both auditors and public companies faced enormous challenges in the first year of implementation arising from the limited time frames for implementing the new requirements; a shortage of staff with prior training and experience in designing, evaluating, and testing controls; and related strains on available resources. The Board found that some audits performed under these circumstances were not as effective or efficient as they should have been. Auditing firms and a number of public companies have stated that they expect subsequent years' compliance costs for section 404 to decrease.

Costs Associated with the Sarbanes-Oxley Act May Have Impacted the Decision of Some Smaller Public Companies to Go Private, but Other Factors Also Influenced the Decision to Go Private

Since the passage of the act in July 2002, the number of companies going private (that is, ceasing to report to SEC by voluntarily deregistering their common stock) has increased significantly. As shown in figure 2, the number of public companies that went private rose from 143 in 2001 to 245 in 2004, with the greatest increase occurring during 2003. However, the 245 companies represented only 2 percent of public companies as of January 31, 2004. Based on the trends observed in 2003 and 2004 and the 80 companies that went private in the first quarter of 2005, we project that the number of companies going private will have risen nearly 87 percent, from 143 in 2001 to a projected 267 through the end of 2005. Our analysis also indicated that companies going private during this entire period were disproportionately small by any measure (market capitalization, revenue, or assets).
The costs associated with public company status were most often cited as a reason for going private (see table 2). While there are many reasons for a company deregistering—including the inability to benefit from its public company status—the percentage of deregistered companies citing the direct cost associated with maintaining public company status grew from 12 percent in 1998 to 62 percent during the first quarter of 2005. These costs include the accounting, legal, and administrative costs associated with compliance with SEC’s reporting requirements as well as other expenses such as those related to managing shareholder accounts. The number of companies citing indirect costs, such as the time and resources needed to comply with securities regulations, also has increased since the passage of the Sarbanes-Oxley Act. In 2002, 64 companies that went private cited cost as one of the reasons for the decision; however, that number increased to 143 and 130 companies in 2003 and 2004, respectively. Many of the companies mentioned both the direct and indirect costs associated with maintaining their public company status. Over half of the companies that cited costs mentioned the Sarbanes-Oxley Act specifically (roughly 58 percent in 2004 and 2005 and 41 percent in 2003). For smaller public companies, the costs of complying with securities laws likely required a greater portion of their revenues, and cost considerations (indirect and direct) were the leading reasons for companies exiting the public market, even prior to the enactment of the Sarbanes-Oxley Act. Further, the benefits of public company status historically appeared to have been disproportionately smaller for smaller companies, companies with limited need for external funding, and companies whose public shares were traded infrequently or in low volume at low prices. 
As a result, issues unrelated to the Sarbanes-Oxley Act, such as market and liquidity issues and the benefits of being private, are also major reasons for companies going private. From 1999 to 2004, more companies cited market and liquidity issues than the indirect costs associated with maintaining their public company status. Companies in this category cited a wide variety of issues related to the company’s publicly traded stock such as a lack of analyst coverage and investor interest, poor stock market performance, limited liquidity (trading volume), and inability to use the secondary market to raise additional capital. Smaller companies also have cited advantages of private status such as greater flexibility, freedom from the short-term pressures of Wall Street, belief that the markets had consistently undervalued the company, and the ability to avoid disclosures of information that might benefit their competitors (see app. II). Companies that elect to go private reduce the number of financing options available to them and must rely on other sources of funding. In aggregate, equity is cheaper when it is supplied by public sources, net of any costs of regulatory compliance. However, in some circumstances, private equity or bank lending may be preferable alternatives to the public market. Statistics suggest bank loans are the primary source of funding for U.S. companies that rely on external financing. Some companies with insufficient market liquidity had little opportunity for follow-on stock offerings and going private would not have fundamentally altered the way they raised capital. We found that almost 25 percent of the companies that deregistered from 2003 through the end of the first quarter of 2005 were not trading on any market at all (see fig. 3). 
Approximately 37 percent of the companies that went private during this period were traded on the Over-the-Counter Bulletin Board (OTCBB); the general liquidity of this market is significantly less than that of major markets such as the NASDAQ Stock Market, Inc. (NASDAQ) or the New York Stock Exchange (NYSE). Additionally, 14 percent were traded in the Pink Sheets and, therefore, were most likely closely held and traded sporadically, if at all. Pink Sheets LLC is not registered with SEC, has no minimum listing standards, does not require quoted companies to provide detailed information to investors, and is regarded as high-risk by many investors. As a result, trading on the Pink Sheets may produce negative reputational effects that can further reduce liquidity and the market value of the company's stock, thereby increasing the cost of equity capital.

It Is Too Soon to Determine How Sarbanes-Oxley Affected Access to Capital for Smaller Public Companies

As previously discussed, a large number of smaller public companies have not fully implemented all the requirements of the Sarbanes-Oxley Act, notably non-accelerated filers (public companies with less than $75 million in public float). As a result, it is unlikely that the act has affected access to the capital markets for these companies. Moreover, the limited time that the act's provisions have been in force would limit any impact on access to capital, even for the companies that have implemented section 404. For instance, more than 80 percent of the smaller public companies that responded to our survey indicated that the act has had no effect or that they had no basis to judge the effect of the act on their ability to raise equity or debt financing or on their cost of capital. There are indications that the Sarbanes-Oxley Act at a minimum has contributed to some smaller companies rethinking the costs and benefits of public company status.
For example, more than 20 percent of the smaller companies that responded to our survey also stated that the act encouraged them to consider going private or deregistering. In contrast, a number of the smaller public companies that responded to our survey cited positive effects associated with the implementation of the act, notably positive impacts on audit committee involvement (60 percent), company awareness of internal controls (64 percent), and documentation of business processes (67 percent).

SEC and PCAOB Have Been Addressing Smaller Company Concerns Associated with the Implementation of Section 404

SEC and PCAOB have taken actions to address smaller public company concerns about implementation of Sarbanes-Oxley Act provisions, particularly section 404, by giving smaller companies more time to comply, issuing or refining guidance, increasing communication and education opportunities, and establishing an advisory committee on smaller public companies. In particular, SEC has extended deadlines for complying with section 404 requirements several times since issuing its final rule in 2003 (see table 3). In its final rulemaking on section 404 requirements, SEC stated that it was sensitive to concerns that many smaller public companies would experience difficulty in evaluating their internal control over financial reporting because these companies might not have as formal or well-structured a system of internal control over financial reporting as larger companies. In November 2004, SEC granted "smaller" accelerated filers an additional 45 days to file their reports on internal control over financial reporting out of concern that these companies were not in a position to meet the original deadline. SEC granted non-accelerated filers two additional extensions, in March 2005 and September 2005, with the latter extension giving non-accelerated filers until their first fiscal year after July 2007 before having to report under section 404.
SEC also considered the particular challenges facing smaller companies when granting these extensions. Further, SEC noted that there were other small business initiatives underway that could improve the effectiveness of non-accelerated filers' implementation of the section 404 reporting requirements. While SEC's final rule serves as basic guidance for public company implementation of section 404 requirements, PCAOB's Auditing Standard No. 2 provides the auditing standards and requirements for an audit of the financial statements and internal control over financial reporting, as part of an integrated audit. It is a comprehensive document that addresses the work required by the external auditor to audit internal control over financial reporting, the relationship of that work to the audit of the financial statements, and the auditor's attestation on management's assessment of the effectiveness of internal control over financial reporting. The standard requires technical knowledge and professional expertise to implement effectively. While both SEC regulations and the PCAOB standard refer to COSO's internal control framework, many companies were unfamiliar with or did not use this framework, despite the fact that public companies have been required by law to have a system of internal accounting controls since 1977. According to SEC, smaller public companies and their auditors had expressed concern that the COSO internal control framework was designed primarily for larger public companies and that smaller companies lacked sufficient guidance on how to use the framework, resulting in disproportionate section 404 implementation costs. As a result, SEC staff asked COSO to develop additional guidance to assist smaller public companies in implementing COSO's internal control framework in a small business environment.
In October 2005, COSO issued a draft of the guidance for public comment and anticipated issuing final guidance for smaller public companies in early 2006. The draft guidance outlines 26 principles for achieving effective internal control over financial reporting and provides examples of how companies can implement them. The draft guidance states that the fundamental concepts of good internal control over financial reporting are the same whether the company is large or small. At the same time, the draft guidance points out differences in the approaches used by smaller companies versus their larger counterparts to achieve effective internal control over financial reporting and discusses the unique challenges faced by smaller companies. While intended to provide additional clarity to smaller companies for implementing an internal control framework, the guidance has received mixed reviews, with some questioning whether it will significantly change the disproportionate cost and other burdens for smaller public companies associated with section 404 compliance. In December 2004, SEC announced its intention to establish an Advisory Committee on Smaller Public Companies to assess the current regulatory system for smaller companies under the securities laws, including the impact of the Sarbanes-Oxley Act. In addition to granting companies more time to meet the act's requirements, SEC has been considering how its section 404 guidance and overall approach to implementation might be revised. SEC chartered the advisory committee on March 23, 2005, and the committee plans to issue its final report to SEC by April 2006. On March 3, 2006, the committee published an exposure draft of its final report for public comment that contained 32 recommendations related to securities regulation for smaller public companies.
Due to the number of recommendations, the advisory committee refers to its 14 highest-priority recommendations as "primary recommendations." One of these is an overarching recommendation calling for a "scaled" approach to securities regulation, whereby smaller public companies are stratified into two groups, "microcap" and "smallcap" companies. Under this recommendation, microcap companies would consist of companies whose common stock in the aggregate makes up the lowest 1 percent of U.S. equity market capitalization. The advisory committee estimates, based on data from SEC's Office of Economic Analysis, that the microcap category would include public companies whose individual market capitalization is less than $128 million, or approximately 53 percent of all U.S. public companies. For the smallcap category, the advisory committee estimates that the category would include public companies whose individual market capitalization is less than $787 million and greater than $128 million, and would encompass an additional 26 percent of U.S. public companies and an additional 5 percent of U.S. market capitalization. Taken together, the categories of microcap and smallcap companies, as defined by the advisory committee's draft recommendations, would include approximately 79 percent of all U.S. public companies and 6 percent of U.S. market capitalization, according to the advisory committee's analysis of SEC data. The recommendation calling for a scaled approach to securities regulation based on company size was also incorporated into the committee's preliminary recommendations related to internal control over financial reporting.
While acknowledging that some have questioned whether smaller public companies' problems with section 404 have been overstated, the advisory committee concluded that section 404, as currently structured, "represents a clear problem for smaller public companies and their investors, one for which relief is urgently needed." In part, the advisory committee based its conclusion on a belief that smaller public company compliance with section 404 has resulted in disproportionate costs and less certain benefits. The advisory committee's primary recommendations related to internal control over financial reporting address regulatory relief from section 404 for a subset of the microcap and smallcap categories described above through the inclusion of revenue criteria. Specifically, the committee's preliminary recommendations are as follows:

Unless and until a framework for assessing internal control over financial reporting for such companies is developed that recognizes their characteristics and needs, provide exemptive relief from all of the requirements of section 404 of the Sarbanes-Oxley Act to microcap companies with less than $125 million in annual revenue and to smallcap companies with less than $10 million in annual product revenue.

Unless and until a framework for assessing internal control over financial reporting for smallcap companies is developed that recognizes the characteristics and needs of those companies, provide exemptive relief from section 404(b) of the act—the external auditor involvement in the section 404 process—to smallcap companies with less than $250 million but greater than $10 million in annual product revenues and to microcap companies with between $125 million and $250 million in annual revenues.

By including the revenue criteria, the committee's recommendations regarding section 404 cover a subset of the public companies included within its microcap and smallcap definitions.
The committee estimated that, after applying the revenue criteria, 4,641 “microcap” public companies (approximately 49 percent of the 9,428 public companies identified in data developed for the advisory committee by SEC’s Office of Economic Analysis) may qualify for full exemption from section 404, and another 1,957 “smallcap” public companies (approximately 21 percent of the SEC-identified public companies) may qualify for exemption from the external audit requirement of section 404(b); together, these companies represent approximately 70 percent of SEC-identified public companies. A number of public companies that would qualify for exemptive relief under the committee’s recommendations have likely already complied with both sections 404(a) and 404(b), based on their status as accelerated filers. If adopted, these recommendations would effectively establish a “tiered approach” for compliance with section 404, “unless and until” a framework for assessing internal control over financial reporting is developed for microcap and smallcap companies. Under the tiered approach, larger public companies that do not meet the committee’s size criteria for exemption would continue to be required to comply with both section 404(a)—management’s assessment of and reporting on internal control over financial reporting—and section 404(b)—the external auditors’ attestation on management’s assessment and the effectiveness of the company’s internal control. “Smallcap” public companies that meet the revenue criteria would be exempt from complying with section 404(b), but would still be required to comply with section 404(a). “Microcap” and some “smallcap” companies that meet the revenue criteria would be exempt from both sections 404(a) and 404(b).
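The committee's percentages follow directly from the counts quoted above. A minimal sanity check of that arithmetic (a sketch only; the counts 4,641, 1,957, and 9,428 are the figures reported in the committee's analysis):

```python
# Counts reported by the advisory committee, based on data from
# SEC's Office of Economic Analysis (as quoted in the text above).
total_public_companies = 9428
microcap_qualifying = 4641   # may qualify for full section 404 exemption
smallcap_qualifying = 1957   # may qualify for section 404(b) exemption


def pct_of_total(count: int) -> int:
    """Return count as a whole-number percentage of all identified companies."""
    return round(100 * count / total_public_companies)


print(pct_of_total(microcap_qualifying))                        # 49 percent
print(pct_of_total(smallcap_qualifying))                        # 21 percent
print(pct_of_total(microcap_qualifying + smallcap_qualifying))  # 70 percent
```

The three printed figures match the 49, 21, and 70 percent shares reported by the committee.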
The committee’s two primary recommendations related to regulatory relief from section 404 for smaller public companies also include additional requirements that affected public companies apply additional corporate governance provisions and report publicly on known material internal control weaknesses. In its next primary recommendation on internal control over financial reporting, which is premised on the adoption of the recommendation for microcap companies described above, the committee acknowledged that SEC might conclude, as a matter of public policy, that an audit requirement is necessary for smallcap companies. In that case, the committee recommended that SEC provide for the external auditor to perform an audit of only the design and implementation of internal control over financial reporting, which by its nature would be more limited than the audit of the effectiveness of internal control over financial reporting required by section 404(b) and PCAOB’s Auditing Standard No. 2, and that PCAOB develop a new auditing standard for such an engagement. This recommendation is based on the view that having the external auditor review the design and implementation of internal control over financial reporting would be more cost-effective than the work otherwise required under Auditing Standard No. 2. However, the committee’s report does not address the extent to which costs for such a review would be lower than those required under Auditing Standard No. 2, or whether the lower costs would be worth the reduced assurances provided by the reduced scope of the external auditors’ work on internal control over financial reporting. SEC also conducted a public “roundtable” in April 2005 that, while not specifically focused on small business issues, gave public companies, accounting firms, and others an opportunity to provide feedback to SEC and PCAOB on what went well and what did not during the first year of section 404 implementation. GAO also participated in this roundtable.
Following the roundtable, the SEC and PCAOB Chairmen noted the importance of section 404 requirements but acknowledged that initial implementation costs had been higher than expected and noted the need to improve the cost-benefit equation for small and mid-sized companies. Both agencies issued additional guidance in May 2005 based on findings from the roundtable. PCAOB’s guidance clarified that, among other matters, auditors should (1) integrate their audits of internal control over financial reporting with their audits of the client’s financial statements, (2) exercise judgment and tailor their audit plans to best meet the risks faced by their clients rather than relying on standardized “checklists,” (3) use a top-down approach beginning with company-level controls and use the risk assessment required by the standard, (4) take advantage of the work of others, and (5) engage in direct and timely communication with their audit clients. Guidance by SEC and its staff emphasized the need for reasonable assurance, risk-based assessments, and better communication between the auditor and client, and clarified what should be included in material weakness disclosures. Representatives of the smaller public companies that we interviewed indicated that the additional guidance that SEC and PCAOB issued was helpful. SEC and PCAOB plan to hold a second roundtable in May 2006 to discuss companies’ second-year experiences with implementing section 404. The chairmen of both SEC and PCAOB have said that they would consider additional guidance if necessary. On November 30, 2005, PCAOB also issued a report on the initial implementation of its auditing standard on internal control over financial reporting.
The report included observations by PCAOB—based in significant part, but not exclusively, on its inspections of public accounting firms, which in the 2005 cycle included a review of a limited selection of audits of internal control over financial reporting—on why the internal control audits were not as efficient or effective as the standard intended. PCAOB also amplified its previously issued guidance of May 2005, discussing how auditors could achieve more effective and efficient implementation of the standard. Further, PCAOB has held a series of forums nationwide to educate the small business community on the PCAOB inspections process and the new auditing standards. The goal of the forums was to provide small accounting firms and smaller public companies an opportunity to discuss PCAOB-related issues with Board members and staff. PCAOB also established a Standing Advisory Group to advise PCAOB on standard-setting priorities and the policy implications of existing and proposed standards. The Standing Advisory Group has considered ways to improve the application of PCAOB’s internal control over financial reporting requirements—Auditing Standard No. 2—with respect to audits of smaller public companies. Finally, both SEC and PCAOB have acknowledged the challenges that smaller public companies faced and continue to face in implementing section 404 and have begun to address those challenges. SEC also has emphasized that smaller companies need to focus on the quality of their internal control over financial reporting. Data provided by SEC’s Office of Economic Analysis and other studies have pointed to the increased level of restatements as an indicator that the Sarbanes-Oxley Act—section 404 in particular—has led companies to identify and correct weaknesses that caused financial reporting misstatements in prior fiscal years.
For example, according to recent research conducted by Glass, Lewis and Co., the restatement rate for smaller public companies was more than twice the rate for the largest public companies (9 percent for companies with revenues of less than $500 million versus 4 percent for companies with revenues of more than $10 billion). SEC staff also noted that smaller public companies had a disproportionately higher rate of material weaknesses in internal control over financial reporting during the first year of implementing section 404. Our discussions with accounting firms confirmed that smaller public companies have had a higher rate of reported material weaknesses in internal control over financial reporting than larger public companies. A major challenge in considering any regulatory relief from section 404 is that the overriding purpose of the Sarbanes-Oxley Act is investor protection. Investor confidence in the integrity and reliability of financial reporting is a critical element for the efficient functioning of our capital markets. The purpose of internal control over financial reporting is to provide reasonable assurance over the integrity and reliability of the financial statements and related disclosures. Market reactions to financial misstatements illustrate the importance of accurate financial reporting, regardless of a company’s size. Given the anticipated regulatory changes, particularly those relating to section 404’s internal control reporting requirements, smaller public companies may be limiting or deferring actions to improve internal control over financial reporting based on a perception that they could become exempt from section 404. Further, PCAOB officials noted that such a perception may have limited smaller business involvement in PCAOB forums.
Sarbanes-Oxley Act Requirements Minimally Affected Smaller Private Companies, Except for Those Seeking to Enter the Public Market

While the act does not impose new requirements on privately held companies, companies choosing to go public realistically must spend additional time and funds in order to demonstrate their ability to comply with the act, section 404 in particular, to attract investors. This may have been a contributing factor in the reduction of the number of initial public offerings (IPO) issued by small companies since 2002. However, other factors—stock market performance and changes in listing standards—likely also have affected the number of IPOs. While a number of states proposed legislation with provisions similar to the Sarbanes-Oxley Act, three states actually enacted legislation requiring private companies or nonprofit organizations to adopt requirements similar to certain Sarbanes-Oxley Act provisions. Finally, some privately held companies have been adopting the act’s enhanced governance practices because these companies believe these practices make good business sense.

Sarbanes-Oxley May Have Affected IPO Activity; however, Other Important Factors also Influence Entry into the Public Market and Access to Capital

Small businesses that are not public companies typically rely on a variety of sources to finance their operations, including personal savings, credit cards, and collateralized bank loans. In addition, small businesses can use private equity capital sources such as venture capital funds—private partnerships that provide private equity financing to early- and later-stage high-growth small businesses—to fund their growth. Small businesses may also issue equity shares to other types of investors to finance further growth. These shares may be sold through private placements, where shares are sold directly to investors (direct placement), or through a public offering, where the shares are sold through an underwriter (going public).
In addition, some small companies issue equities that trade on smaller markets such as the Pink Sheets. For those private companies desiring to enter the public market, the IPO process has always been recognized as a time-consuming and expensive endeavor. However, venture capitalists and private company officials told us that, as a result of the act and other market factors, many private companies have been spending additional time, effort, and money to convince investors that they can meet the requirements of the act. For example, investors have become more cautious and demanding of the private companies in which they invest. Consequently, private companies have hired auditors and additional staff to make substantial changes to their financial systems and data-reporting capabilities, document internal controls and processes, and review or change accounting procedures. According to venture capitalists and private company officials with whom we spoke, a private company’s ability to meet the Sarbanes-Oxley Act’s requirements can significantly decrease some of the investment risk associated with becoming a public company. For example, both groups told us that companies with well-documented internal control and governance policies were more attractive and able to secure investor funding at a much lower cost. Moreover, they noted that underwriters expected private companies to consider and comply with the act well in advance of going public. If a private company were unable to meet the act’s requirements, venture capitalists would want the company to show evidence of a plan for becoming compliant as soon as the company became public. Absent such a plan, venture capitalists noted, they would be less likely to invest in the company and would look elsewhere for investment opportunities. These new expectations may have served to increase the expenses associated with the IPO process through changes in the professional fees charged by auditors and potentially other costs as well.
Specifically, we found that there has been a disproportionate increase for the smallest companies when IPO expenses are viewed as a percentage of revenue. As shown in table 4, the direct expenses (excluding underwriting fees) associated with the IPO represented a significant portion of a small company’s revenues, relative to larger companies, from 1998 through the second quarter of 2005. These expenses have increased disproportionately since 2002 for small companies going public—especially for the smallest of these companies ($25 million or less in revenues). While Sarbanes-Oxley Act requirements could explain some of this increase, legal, exchange listing, printing, and other fees unrelated to the act could also account for it. Moreover, other market factors also could explain the increase in IPO expenses paid to auditors. In addition to the requirements of the Sarbanes-Oxley Act and the general increase in direct expenses, other important factors likely have influenced IPO activity. To illustrate, the downward trend in IPOs occurred before the passage of the Sarbanes-Oxley Act in mid-2002. It is widely acknowledged that IPO filings and pricings tend to be closely associated with stock market performance. As shown in figure 4, companies generally issued (priced) significantly more IPOs when stock market valuations were higher. Companies with smaller reported revenues now make up a smaller share of the IPO market: the share of IPOs issued by companies with revenues of $25 million or less decreased substantially, from 70 percent of all IPOs in 1999 to about 48 percent in 2004 and 31 percent during the first two quarters of 2005. Venture capitalists told us that, on average, a private company had to demonstrate at least 6 quarters of profitability before it could go public and hire an auditor to carry it through the IPO process.
According to the venture capitalists, an increasing number of small and mid-sized private companies have been pursuing mergers and acquisitions as a means of growing without going through the IPO process, which now typically costs more than a merger or acquisition.

Potential Spillover Effects of the Sarbanes-Oxley Act on Private Companies Have Been Minimal

While the Sarbanes-Oxley Act has increased corporate governance and accountability awareness throughout the business and investor communities, our research and discussions with representatives of financial institutions suggest that financiers are not requiring privately held companies to meet Sarbanes-Oxley Act requirements as a condition of obtaining access to capital or other financial services. For example, the representatives said that they emphasize the use of credit scoring in making decisions and may make lending decisions using “personal guarantees” in lieu of audited financial statements and reported cash flow on financial statements for the smallest private companies. For larger private companies, the representatives stated that they require audited financial statements and cash flow information, but that these lending requirements existed well before the Sarbanes-Oxley Act and have not changed as a result of its passage. Overall, they noted that they do not believe that the act has affected the way financial institutions and lenders conduct business with private companies. They also noted that financial institutions and lenders have always enjoyed the freedom to obtain virtually any information about a potential borrower and to inquire about the company’s financial reporting process and corporate governance practices. For example, if it were considered necessary to help determine a company’s ability to repay a debt, a lender could ask the company to provide copies of any corporate governance guidelines, business ethics policies, and key committee charters that the company had adopted.
Immediately following the act’s passage, several states proposed legislation to enact corporate governance and financial reporting reforms for private companies and nonprofit organizations. Specifically, several state legislatures proposed instituting requirements similar to those in the Sarbanes-Oxley Act for privately held state-registered companies. Subsequently, three states—Illinois, Texas, and California—passed legislation that mandates corporate governance and accountability requirements resembling certain provisions of the Sarbanes-Oxley Act. For example, Illinois passed legislation in 2004 that requires enhanced disclosures for certain nonpublic companies and additional licensing requirements for certified public accountants, and, in 2003, Texas passed legislation that imposes strict ethics and disclosure requirements on outside financial advisors and service providers, public or private, that provide financial services to the state government. On September 29, 2004, California adopted the Nonprofit Integrity Act of 2004, becoming the first state in the nation to require nonprofit organizations to meet requirements that resemble some provisions of the Sarbanes-Oxley Act. For instance, nonprofits with gross revenues of $2 million or more operating within the state of California currently are required to have independent auditors and, in the case of charitable corporations, audit committees. Further, two other states—Nevada and Washington—have passed legislation that requires accounting firms to retain work papers for 7 years for audits of both public and private companies. Furthermore, based on our research and discussions with representatives from the National Association of State Boards of Accountancy, we found that some state boards made changes to regulations that focus on key governance and accountability issues similar to those mandated by the Sarbanes-Oxley Act.
For example, New Jersey adopted enhanced peer review requirements and Tennessee instituted additional work paper retention requirements for certified public accountants. Based on our discussions with private equity providers and private company officials, it appears that some privately held companies increasingly have incorporated certain elements of the Sarbanes-Oxley Act into their governance and internal control policies. Specifically, they have adopted practices such as CEO/CFO financial statement certification, appointment of independent directors, corporate codes of ethics, whistleblower procedures, and approval of nonaudit services by the board. According to these officials, some private companies have reported receiving pressure from board members, auditors, attorneys, and investors to implement certain “best practice” policies and guidelines modeled after the requirements of the act. They noted that the act has raised the bar for what constitutes best practices in corporate governance and for expectations regarding internal control. Additionally, the officials told us that some private companies may have chosen to voluntarily adopt certain practices that resemble Sarbanes-Oxley Act provisions to satisfy external auditors and legal counsel looking for comparable assurances to reduce risk, increase confidence, and improve credibility with many stakeholders. Based on our research, we found that many of the aspects of corporate governance reform currently being adopted by private companies were those that are relatively inexpensive to implement, but information on the specific costs associated with adopting these provisions was not available.
Smaller Companies Appear to Have Been Able to Obtain Needed Auditor Services, Although the Overall Audit Market Remained Highly Concentrated

Since the enactment of the Sarbanes-Oxley Act, smaller public companies have been able to obtain needed auditor services; however, auditor changes suggest smaller companies have moved from using the services of large accounting firms to using the services of mid-sized and small firms. Some of this activity has resulted from the resignation of large accounting firms from providing audit services to small public companies. Reasons for these changes range from audit cost and service concerns cited by companies to capacity constraints and client profitability and risk concerns cited by accounting firms. In recent years, public accounting firms have been grouped into three categories—the largest firms, “second tier” (mid-sized) firms, and regional and local (small) firms. From 2002 to 2004, 1,006 companies reported auditor changes involving a departure from a large accounting firm. Over two-thirds of these companies reported switching to a mid-sized or small accounting firm. Most of the companies that switched to a mid-sized or small accounting firm were smaller public companies with market capitalization or revenues of $250 million or less. Overall, mid-sized and small accounting firms conducted 30 percent of the total number of public company audits in 2004—up from 22 percent in 2002. Despite client gains for mid-sized and small firms, the overall market for audit services remained highly concentrated, with mid-sized and smaller firms auditing just 2 percent of total U.S. publicly traded company revenue. In the long run, the gains in public company clients, together with operating under PCAOB’s registration and inspection process, could increase opportunities for mid-sized and small accounting firms to enhance their recognition and acceptance among capital market participants.
Smaller Companies Found It Harder to Keep or Obtain the Services of a Large Accounting Firm, but Overall Access to Audit Services Appeared Unaffected

Our limited review did not find evidence to suggest that the Sarbanes-Oxley Act has made it more difficult for smaller public companies to obtain needed audit services, but did suggest that smaller public companies may have found it harder to retain a large accounting firm as a result of increased demand for auditing services, largely due to the implementation of section 404 and other requirements of the act, and the capacity limitations of the large accounting firms. Of the 2,819 auditor changes from 2003 through 2004 that we identified using Audit Analytics data, 79 percent were made by companies that represented the smallest of publicly listed companies (companies with $75 million or less in market capitalization or revenue). Although fewer mid-sized and small accounting firms conducted public company audits in 2004 because some firms did not register with PCAOB or merged with other firms, the market appears to have absorbed these changes effectively, with other firms taking on these clients.

Recent Auditor Changes Resulted in Small Accounting Firms Gaining Clients

Our analysis showed that 1,006 of the 2,819 changes, or 36 percent, involved departures from a large accounting firm. Of the 1,006 auditor changes, less than one-third (311, or 31 percent) resulted in the public company moving to another large accounting firm, and slightly under two-thirds (651, or 65 percent) retained a mid-sized or small accounting firm (see table 5). Over the same period, mid-sized and small accounting firms lost fewer public company clients to the large accounting firms; as a result, mid-sized and small firms experienced a net increase of 510 public company clients—a net gain of 161 and 349 companies for mid-sized and smaller firms, respectively.
Because we had no data on companies’ selection processes, we could not determine whether mid-sized and small firms competed for these clients with the large accounting firms or received these clients by default, with no competition from the large accounting firms. According to Who Audits America, small and mid-sized accounting firms increased their share of public company audits from 22 percent in 2002 to 27 percent in 2003, and by 2004 they audited 30 percent of all U.S. publicly traded companies. Small and mid-sized firms audited over 38 percent of all public clients in 2004 according to Audit Analytics data, which include, in addition to publicly traded companies, other SEC reporting companies such as foreign registered entities, registered funds and trusts, and registered public companies that are not publicly traded. The majority of the clients the mid-sized and small firms gained were smaller companies with market capitalization or revenues averaging $200 million or less. As shown in table 5 and figure 5, the companies leaving a large accounting firm and retaining another large firm tended to be very large—with average market capitalization (or revenue) of more than $1 billion. However, the average market capitalization (or revenue) of companies leaving a large accounting firm and retaining a mid-sized accounting firm was less than $175 million, and the capitalization (or revenue) of companies retaining a small firm was significantly smaller—less than $53 million. Similarly, companies leaving smaller and mid-sized firms that retained a large accounting firm tended to be much larger than those that retained another mid-sized or small firm.

Reasons for Auditor Changes May Have Included Costs Related to the Act and Risk Assessments

While the reasons for the movement of smaller public companies to mid-sized and small accounting firms may be somewhat speculative at this point, the Sarbanes-Oxley Act may have contributed to this shift.
Some smaller companies may have preferred a large firm because of the perception that large accounting firms—by virtue of their reputation or perceived skills—can help attract investors and improve access to capital. Workload demands placed on the large firms by larger public companies, which represent the overwhelming majority of their clients, have increased with section 404 and other Sarbanes-Oxley Act implementing regulations. The resulting increases in workload and audit fees appear to have constrained smaller companies’ access to large accounting firms—either because smaller companies were unable to afford a large accounting firm or because large accounting firms resigned from smaller clients. According to Audit Analytics, the largest accounting firms resigned from three times as many clients in 2004 as in 2001, and three-quarters of those were companies with revenues of less than $100 million. Beyond resignations by large accounting firms in response to increased demand for audit services, the act may have caused large accounting firms to reevaluate the risk in their aggregate client portfolios by increasing the responsibilities and liability of auditors, leading them to shed smaller public companies. According to the large accounting firms with whom we spoke, they did not have enough resources to retain all of their clients after the Sarbanes-Oxley Act and cited risk as a significant factor in choosing which clients to keep. Moreover, the largest audit firms could be applying stricter profitability guidelines in selecting their clients, eliminating those engagements where profit margins are smaller. While former clients of large accounting firms may represent opportunities for mid-sized and small accounting firms, they also represent some risks. For example, we found that a disproportionate percentage of the companies that left a large accounting firm for a small firm had accounting or risk issues. 
Overall, about 69 percent of the companies that left a large accounting firm switched to a mid-sized or small accounting firm. However, 92 percent of the companies that received a going concern qualification went to a mid-sized or small accounting firm. In addition, about 81 percent of the companies with at least one accounting issue (such as a restatement, reportable condition, scope limitation, management found to be unreliable, audit opinion concerns, illegal acts, or an SEC investigation) went from a large to a mid-sized or small accounting firm. In contrast, 63 percent of the companies with no going concern qualification or any additional “risk” issues went to mid-sized and small firms. We also found that, if a large accounting firm resigned as the auditor of record, the company was more likely to switch to a mid-sized or small accounting firm. Roughly 85 percent of the smallest companies that were dropped by one of the largest accounting firms retained a smaller audit firm.

Mid-sized and Small Accounting Firms Continued to Operate in a Highly Concentrated Market

Although mid-sized and small accounting firms gained clients in 2003 and 2004, they continued to operate in a market dominated by large accounting firms. The market for audit services in 2004 changed little from the market we described in our 2003 report. For example, mid-sized and small accounting firms increased their share of all public company revenues by 1 percentage point from 2002 to 2004. The market for audit services remained highly concentrated—a tight oligopoly in which the four largest firms audited 98 percent of the market in 2004 and the remaining firms audited 2 percent—and the potential market power was significant. The market for smaller public company audits was much more competitive than the overall and large public company markets.
As shown in figure 6, while the market for audit services for large company clients remained dominated by large accounting firms, the market for the smallest public company clients appeared to show healthy competition. Mid-sized and small firms audited 59 percent of all public company clients with revenues of $25 million or less, 45 percent of all clients with revenues greater than $25 million up to $50 million, and 32 percent of all clients with revenues greater than $50 million up to $100 million. When these revenue categories were combined, large and mid-sized accounting firms together audited 75 percent of companies with revenues of $100 million or less, while small firms audited the remaining 25 percent. As noted in our 2003 report, as companies expanded operations around the world, the large audit firms expanded globally through mergers in order to serve their international clients. More recently, mid-sized and small accounting firms gained more large clients. In 2004, these accounting firms audited approximately 3 percent of the companies with revenues greater than $500 million, up from 2 percent in 2002. However, as shown in table 5, the average revenue of the clients lost to the largest accounting firms was $1.1 billion, while the average revenue of the clients gained from the largest accounting firms was $138.8 million. Overall, mid-sized and small accounting firms conducted 30 percent of the total number of public company audits in 2004—up from 22 percent in 2002. While these clients make up just 2 percent of total public company revenue, they are a large segment of the market of publicly traded clients.
Sarbanes-Oxley Act May Impact the Continuing Competitive Challenges Faced by Mid-Sized and Small Accounting Firms

According to some experts, the competitive challenges related to the ability of mid-sized and small firms to compete for public company clients, such as capacity, expertise, recognition, and litigation risks, may have intensified since the passage of the Sarbanes-Oxley Act. For example, in a recent American Assembly report, a number of industry professionals indicated that large accounting firms’ facility with new requirements was seen as increasingly important as audits have become more complex and time-consuming and the financial consequences of noncompliance more severe. Additionally, even though some experts believe that large accounting firms’ regulatory competence has been overstated, a perception may exist among many large and some small U.S. companies, as well as other market influencers and stakeholders, that only the large accounting firms can provide the auditing services necessary to meet the requirements of the act. For example, the venture capital industry representatives that we spoke with stated that this perception has been especially prevalent for companies issuing IPOs. As shown in figure 7, companies large and small tended to use large accounting firms for IPOs. Over the long run, the Sarbanes-Oxley Act could ease some of these challenges. For example, mid-sized and small accounting firms have continued to confront the perceptions of capital market participants that only large firms have the skills and resources necessary to perform public company audits. These perceptions have constrained firms from obtaining or retaining many clients that the firms believed were within their capacity to audit.
However, the increase in public company audits performed by mid-sized and small accounting firms has given these firms additional opportunities to enhance their recognition and acceptance among more public companies and capital market participants. Also, as smaller public companies begin complying with section 404 in 2007, small accounting firms will gain additional experience with the implementation of the act. Taking on additional clients will provide an important growth opportunity. Effectively matching company size and needs with accounting firm size and capabilities could allow smaller public companies to find the best combination of quality, service value, and reach. In addition, the PCAOB registration and inspection process and the establishment of attestation, quality control, and ethics standards to be used by registered public accounting firms in the preparation and issuance of audit reports could provide increased assurance of the quality of small accounting firm audits. Similarly, as more information becomes available through PCAOB’s ongoing inspection program, small accounting firms could establish a “track record,” allowing for additional opportunities for recognition and acceptance among analysts, investment bankers, investors, and public companies. Conclusions The Sarbanes-Oxley Act was a watershed event—strengthening disclosure and internal control requirements for financial reporting, establishing new auditor independence standards, and introducing new corporate governance requirements. Regulators, public companies, audit firms, and investors generally have acknowledged that many of the act’s provisions have had a positive and significant impact on investor protection and confidence. 
Yet, for smaller public companies and companies of all sizes that have complied with the various provisions of the Sarbanes-Oxley Act, compliance costs have been higher than anticipated—with the highest costs associated with the internal control over financial reporting requirements of section 404. There is widespread agreement that several factors contributed to the costs of implementing section 404 for both larger and smaller public companies. Few public companies or their audit firms had prior direct experience with evaluating and reporting on the effectiveness of internal control over financial reporting or with implementing the COSO internal control framework, particularly in a small business environment. This was despite previous requirements, dating back to 1977, that public companies implement a system of internal accounting controls. The first-year costs were exacerbated because many companies were documenting their internal control over financial reporting for the first time and remediating poor or nonexistent internal controls as part of their first-year implementation efforts to comply with section 404, both of which could be viewed as a positive impact of the act. In addition, the nature, timing, and extent of available guidance on establishing and assessing internal control over financial reporting made it more difficult for most public companies and audit firms to efficiently and effectively implement the requirements of section 404. As a result, management’s implementation and assessment efforts were largely driven by PCAOB’s Auditing Standard No. 2, as guidance at a similar level of detail was not available for management’s implementation and assessment process. These factors, in conjunction with the changed environment and expectations resulting from the act, contributed to a considerable amount of “learning curve” activities and inefficiencies during the initial year of implementation. 
Auditing firms and a number of public companies have stated that they expect subsequent years’ compliance costs for section 404 to decrease. This is not unexpected given the significance and nature of the changes and a preexisting environment that did not place enough emphasis on effective internal control over financial reporting. Consistent with the findings of the Small Business Administration on the impact of regulations generally on smaller public companies, it is reasonable to conclude that smaller public companies face disproportionately greater costs, as a percentage of revenues, than larger companies in meeting the requirements of the act. While facing the same basic requirements, smaller public companies generally have more limited resources, fewer shareholders, and less complex structures and operations. Again, this is to be expected given the economies of scale and differing levels of corporate infrastructure and resources. However, some of the unique characteristics of smaller companies can create opportunities to efficiently achieve effective internal control over financial reporting. Those characteristics include more centralized management oversight of the business, more involvement of top management in the business operations, simpler operations, and limited geographic locations. The ultimate impact of the Sarbanes-Oxley Act on the majority of smaller public companies remains unclear because the time frame to comply with section 404 of the act was extended until fiscal years ending after July 2007 for the approximately 5,971 public companies with less than $75 million in public float. Recognizing the challenges that smaller public companies have faced in meeting the requirements of the act, particularly section 404, SEC formed an advisory committee on smaller public companies to analyze the impact of the act and other securities laws on smaller public companies. 
The advisory committee has issued an exposure draft of its final report stating that certain smaller public companies need relief from section 404, “unless and until” a framework for assessing internal control over financial reporting is developed that recognizes the characteristics and needs of smaller public companies. The exposure draft contains specific recommendations that would essentially result in a “tiered approach” for compliance with section 404 requirements, where larger public companies would continue to be required to fully comply with all requirements of section 404, while smaller public companies consisting of “microcap” and “smallcap” companies would be granted differing levels of exemptions until an adequate framework was in place. We have two specific concerns regarding the advisory committee’s recommendations. First, the recommendations propose relief “unless and until a framework for assessing internal control over financial reporting” for smaller companies is developed that “recognizes the characteristics and needs of those companies.” While the recommendations hinge on the need for a framework that recognizes the characteristics and needs of smaller public companies, they do not address what needs to be done to establish such a framework or how such a framework should take into consideration the characteristics and needs of smaller public companies. Many, if not most, of the significant problems and challenges encountered by large and small companies in implementing section 404 related to implementation, rather than to the internal control framework itself. In addition to having a useful internal control framework, appropriate implementation of a framework by public companies must be based on risk, facts and circumstances, and professional judgment. 
We believe that sufficient guidance, covering both the internal control framework and the means by which it can be effectively implemented, is essential to enable large and small public companies to assess and report effectively and efficiently on the effectiveness of internal control over financial reporting. Our second concern relates to the ambiguity surrounding the conditional nature of the “unless and until” provisions of the recommendations and their potential impact on a large number of companies that would likely qualify for the proposed exemptions. If resolution of small public company concerns about a framework and its implementation results in an extended period of exemption, then large numbers of public companies would potentially be exempted for additional periods from complying with this important investor protection component of the act. The categories of microcap and smallcap companies, as defined by the advisory committee recommendations, cover 79 percent of U.S. public companies and 6 percent of the U.S. equity market capitalization when combined. Although the categories of microcap and smallcap have been further refined by the advisory committee through the addition of a revenue size filter for purposes of its primary recommendations on section 404, it appears that a large number of companies, up to 70 percent of all U.S. public companies, would be potentially exempted. 
Specifically, the committee estimates that, after applying the revenue criteria, 4,641 “microcap” public companies (approximately 49 percent of 9,428 public companies identified in data developed for the advisory committee by SEC’s Office of Economic Analysis) may potentially qualify for the proposed full exemption from section 404 and another 1,957 “smallcap” public companies (approximately 21 percent of the identified public companies) may potentially qualify for the proposed exemption from the external audit requirement of section 404(b). These estimates do not include those public companies trading on the Pink Sheets that would be covered by the advisory committee’s preliminary recommendations. In addition, a number of public companies qualifying for exemptive relief under the committee’s recommendations are likely to have already complied with both sections 404(a) and 404(b) of the act under the current category of accelerated filers. Also, regarding the committee’s third primary internal control recommendation calling for a review of the design and implementation of internal control if SEC concludes, as a matter of public policy, that the external auditor’s involvement is required, it is not clear from the committee’s report to what extent, particularly in the present environment, such a review would result in lower costs than those associated with the implementation of PCAOB’s Auditing Standard No. 2. Any lower costs that might result must be considered in light of the reduced independent assurances on the effectiveness of internal control over financial reporting that would result and the potential for confusion on the part of users of the public company’s financial statements and audit reports. Until sufficient guidance is available for smaller public companies, some interim regulatory relief on a limited scale may be appropriate. 
However, given the number of public companies that would potentially qualify for relief under the recommendations being considered, we believe that a significant reduction in scope of the proposed relief needs to occur to preserve the overriding investor protection purpose of the Sarbanes-Oxley Act. The purpose of internal control over financial reporting is to provide reasonable assurance over the integrity and reliability of the financial statements and related disclosures. Public and investor confidence in the fairness of financial reporting is critical to the effective functioning of our capital markets. Market reactions to financial statement misstatements illustrate the importance of accurate financial reporting, regardless of a company’s size. SEC staff and others have pointed to the increased level of restatements as an indicator that the Sarbanes-Oxley Act—section 404 in particular—has prompted companies to identify and correct weaknesses that led to financial reporting misstatements in prior fiscal years. Indicators also show that in some respects, smaller companies have a higher risk profile for investors. For instance, smaller public companies have higher rates of restatements generally and showed a disproportionately higher rate of reported material weaknesses in internal control over financial reporting during the initial year of section 404 implementation. Over time, having the effective internal control over financial reporting envisioned by the act can reduce some aspects of the higher risk profile of smaller public companies. When SEC receives and considers the final recommendations of SEC’s small business advisory committee, it is essential that SEC consider key principles, under the umbrella principle of investor protection, when deciding whether or to what extent to provide smaller public companies with alternatives to full implementation of the section 404 requirements. 
These principles include (1) assuring that smaller public companies have sufficient useful guidance to implement, assess, and report on internal control over financial reporting to meet the requirements of section 404, (2) if additional relief is considered appropriate, conducting further analysis of small public company characteristics to significantly reduce the scope of companies that would qualify for any type of additional relief while working to ensure that the Sarbanes-Oxley Act’s goal of investor protection is being met, and (3) acting expeditiously such that smaller public companies are encouraged to continue improving their internal control over financial reporting. First, it is critical that SEC carefully assess the available guidance, including that being developed by COSO, to determine whether it is sufficient or whether additional action needs to be taken, such as issuing supplemental or clarifying guidance to smaller public companies to help them meet the requirements of section 404. Our analysis of available research and discussions with smaller public companies and audit firms indicate that public companies and external auditors have had limited practical experience with implementing internal control frameworks in a smaller company environment and that additional guidance is needed. Moreover, it is critical that SEC coordinate its actions with PCAOB, which is responsible for establishing standards for the external auditor’s internal control attestations, to ensure that external auditors are using standards and guidance on section 404 compliance that are consistent with guidance for public companies and that they are doing so in an effective and efficient manner. As SEC considers the need for additional implementation guidance, it will be important that the guidance and related PCAOB audit standards be consistent and compatible. 
Also, it will be important for the PCAOB to continue to identify ways in which auditors can achieve more economical, effective, and efficient implementation of audit-related standards and guidance. Second, as SEC considers whether and to what extent it might be appropriate to provide additional interim relief to some categories of smaller public companies, it will be important to balance the needs of the investing public with the concerns expressed by small businesses. In doing so, it is important to determine whether there are unique characteristics, in addition to size, that could influence the extent to which some regulatory accommodation might be appropriate in order to arrive at a targeted and limited category of companies being provided with potential exemptions. For example, if these companies were closely held or had a higher proportion of insider investors, regulatory relief may raise less of an investor protection concern. These investors may be more knowledgeable about company operations and receive fewer benefits from section 404’s enhanced disclosures. For companies that are widely traded, regulatory relief would raise more concerns about investor protection and relief would appear less appropriate. Furthermore, although the “insider” shareholder owners may not have the same need for investor protection as investors in broadly held companies, minority shareholders who are not insiders may need such protection. For other purposes, certain provisions of SEC’s securities regulations and the Employee Retirement Income Security Act of 1974 regulations condition different types of relief, in part, on the nature and/or the financial sophistication of the investor, and SEC may wish to consider whether such approaches would help serve to balance the concerns of small businesses against the needs of investors. 
The criteria and characteristics used should be linked to the investor protection goals of the Sarbanes-Oxley Act and be geared toward limiting the numbers of companies that would be eligible based on those investor protection goals. In addition, the advisory committee’s preliminary recommendations to exempt “smaller public companies” from the external audit requirements of section 404 would include a number of companies that have already complied with section 404, and SEC needs to carefully consider whether it is appropriate to provide regulatory relief on this basis. Finally, we believe that SEC has an obligation to resolve section 404 implementation requirements for smaller public companies in a way that creates incentives for smaller public companies to take actions to improve their internal control over financial reporting. Rather than delaying implementation, which would likely result in smaller public companies anticipating future extensions or relief, SEC’s resolution of these issues would provide needed clarity and certainty over the scope and timing of smaller companies’ compliance with section 404 and provide incentives to smaller public companies to begin the process of implementing section 404. 
Recommendations In light of concerns raised by the SEC Advisory Committee on Smaller Public Companies and others regarding the ability of smaller public companies to effectively implement section 404, we recommend that the Chairman of SEC assess the guidance available, with an emphasis on implementation guidance for management’s assessment of internal control over financial reporting, to determine whether the current guidance is sufficient and whether additional action is needed, such as issuing supplemental or clarifying guidance to help smaller public companies meet the requirements of section 404, and coordinate with PCAOB to (1) help ensure that section 404-related audit standards and guidance are consistent with any additional guidance applicable to management’s assessment of internal control and (2) identify additional ways in which auditors can achieve more economical, effective, and efficient implementation of the standards and guidance related to internal control over financial reporting. If, in evaluating the recommendations of its advisory committee, SEC determines that additional relief is appropriate beyond the current July 2007 compliance date for non-accelerated filers, we recommend that the Chairman of SEC analyze and consider, in addition to size, the unique characteristics of smaller public companies and the knowledge base, educational background, and sophistication of their investors in determining categories of companies for which additional relief may be appropriate to ensure that the objectives of investor protection are adequately met and any relief is targeted and limited. Agency Comments and Our Evaluation We provided a draft of this report to the Chairman, SEC, and the Acting Chairman, PCAOB, for their review and comment. We received written comments from SEC and PCAOB that are summarized below and reprinted in appendixes III and IV. 
SEC agreed that the Sarbanes-Oxley Act has had a positive impact on investor protection and confidence, and that smaller public companies face particular challenges in implementing certain provisions of the act, notably section 404. SEC stated that our recommendations should provide a useful framework for consideration of its advisory committee’s final recommendations. PCAOB stated that it is committed to working with SEC on our recommendations and that it is essential to maintain the overriding purpose of the Sarbanes-Oxley Act of investor protection while seeking to make its implementation as efficient and effective as possible. Both SEC and PCAOB provided technical comments that were incorporated into the report as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and subcommittees; the Chairman, SEC; the Acting Chairman, PCAOB; and the Administrator, SBA. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov, or Jeanette M. Franzel at (202) 512-9471 or franzelj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix V for a list of other staff who contributed to the report. 
Appendix I: Objectives, Scope, and Methodology Our reporting objectives were to (1) analyze the impact of the Sarbanes-Oxley Act on smaller public companies in terms of costs of compliance and access to capital; (2) describe the Securities and Exchange Commission’s (SEC) and Public Company Accounting Oversight Board’s (PCAOB) efforts related to the implementation of the act and their responses to concerns raised by smaller public companies and the accounting firms that audit them; (3) analyze the impact of the act on smaller privately held companies, including costs, ability to access public markets, and the extent to which states and capital markets have imposed similar requirements on smaller privately held companies; and (4) analyze smaller companies’ access to auditing services and the extent to which the share of public companies audited by small accounting firms has changed since the enactment of the Sarbanes-Oxley Act. In arriving at our report objectives, we incorporated nine specific questions contained in your request letter. See table 6 for a cross-reference of the nine specific questions contained in your letter to the four report objectives and our findings. To address our four objectives, we reviewed and analyzed information from a variety of sources, including the legislative history of the act, relevant regulatory pronouncements and related public comment letters, and available research studies and papers. We also interviewed officials at SEC, PCAOB, and the Small Business Administration (SBA). In addition, we held discussions with the chief financial officers (CFO) of smaller public and private companies, representatives of relevant trade associations, accounting firms, market participants, and experts. 
Impact of Sarbanes-Oxley Act on Smaller Public Companies We could not analyze the impact of the act on many smaller public companies because SEC has extended the date by which public registrants with less than $75 million public float (known as “non-accelerated” filers) must comply with Section 404 of the act to their first fiscal year ending on or after July 15, 2007. According to SEC, non-accelerated filers represent about 60 percent of all registered public companies and about 1 percent of total available market capitalization. As a result, we analyzed public data and other information related to the experiences of public companies that have fully implemented the act’s provisions. We also compared the information from companies that had implemented the act with information from smaller companies that took the SEC extension to gain some insight into the potential impact of these provisions on the non-accelerated filers. Audit Fees and Auditor Changes Audit Analytics, an on-line market intelligence service maintained by Ives Group, Incorporated, provides, among other things, a database of audit fees by company back to 2000 along with demographic and financial information. Using this database, we analyzed changes in the audit fees companies have paid by various size categories. Audit Analytics also provides a comprehensive listing of all reported auditor changes, which includes data on the date of change, departing auditor, engaged auditor, whether the change was a dismissal or resignation, whether there was a going concern flag or other accounting issues, and whether a fee dispute or fee reduction occurred. Using this database, we identified 2,819 auditor changes from 2003 through 2004. We performed several checks to verify the reliability of the Audit Analytics data. For example, we cross-checked random samples from each of the Audit Analytics databases with SEC proxy and annual filings and other publicly available information. 
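As a minimal sketch of the size-band grouping used in the audit fee analysis above, the following Python fragment groups client records into illustrative revenue categories and averages audit fees within each band, dropping records with incomplete financial data as the analysis did. The field names (`revenue`, `audit_fee`) and the band cutoffs are assumptions for illustration, not the actual Audit Analytics schema.

```python
from statistics import mean

# Illustrative revenue cutoffs (dollars); these mirror the size bands
# discussed in the report but are not the actual analysis categories.
SIZE_BANDS = [
    (25_000_000, "$25M or less"),
    (50_000_000, ">$25M to $50M"),
    (100_000_000, ">$50M to $100M"),
]

def size_category(revenue):
    """Map a client's revenue to a size band; larger companies fall through."""
    for cutoff, label in SIZE_BANDS:
        if revenue <= cutoff:
            return label
    return ">$100M"

def average_fee_by_category(records):
    """records: iterable of dicts with 'revenue' and 'audit_fee' keys.
    Records with missing financial data are dropped, mirroring the
    report's handling of incomplete company data."""
    groups = {}
    for r in records:
        if r.get("revenue") is None or r.get("audit_fee") is None:
            continue  # drop incomplete records from the analysis
        groups.setdefault(size_category(r["revenue"]), []).append(r["audit_fee"])
    return {label: mean(fees) for label, fees in groups.items()}
```

Because incomplete records are dropped rather than imputed, the resulting averages describe a large subset of the database rather than the full population, which is the caveat the report itself notes.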
While we determined that these data were sufficiently reliable for the purpose of presenting trends in audit fees and auditor changes, the descriptive statistics on audit fees contained in the report should be viewed in light of a number of data challenges. First, the Audit Analytics audit fee database does not include fees for companies that did not disclose audit fees paid to their independent auditor in an SEC filing. Second, some companies included in the database—especially small companies—did not report complete financial data. We handled missing data by dropping companies with incomplete financial data from any analysis involving the use of such data. Therefore, it should be noted that we are not dealing with the entire population included in the Audit Analytics database but rather a large subset. Because of these issues, the results should be viewed as estimates of audit fees based on a large sample rather than precise estimates of all fees charged over the entire population. It should also be noted that SEC found issues with the data on market capitalization (used largely in our discussion of auditor changes and companies going private), which are being addressed by Audit Analytics. Deregistrations To determine the number of companies that have deregistered before and after the implementation of the Sarbanes-Oxley Act, we obtained and analyzed data filed with SEC. From 1998 through April 24, 2005, over 15,000 companies filed SEC Form 15 (Certification and Notice of Termination of Registration). First, we analyzed all the companies to determine whether the company was deregistering its common stock to continue to operate as a privately held company. During this step, we eliminated companies that filed the Form 15 as a result of acquisitions, mergers that were not “going private” transactions, liquidations, reorganizations, or bankruptcy filings or re-emergences. We also eliminated duplicate filings and filings by foreign registrants. 
For the remaining companies, we reviewed their SEC filings and press releases and other press articles to determine their reasons for deregistration. We grouped the reasons into seven categories for our final analysis. We took a number of steps to ensure the reliability of the database, including testing of random samples of the coded data, 100 percent verification of certain areas of the database, and various other quality control measures. For the initial coding, we found the error rates to be 0.6 percent or lower for all years except 2001 and 1998. Because the initial error rate exceeded 1.5 percent for these 2 years, we performed 100 percent verification and corrected any errors. However, because the error rate for the remaining years was positive, it is unlikely that we captured every company going private in 1998–2005. We also excluded all companies with one or zero holders of record unless that company also filed a Schedule 13E-3 (Going private transaction by certain issuers) with SEC. In doing so, we may have missed some companies going private. However, an outside study found only 12 companies that filed a Form 15 but did not file a Schedule 13E-3 from 1998 through 2003. Additionally, our analysis of the companies that listed more than one holder of record on the Form 15 should have picked up some of these types of firms. As a result, this limitation is minor in the context of this report and does not alter the trends also found by a number of research reports. Survey of Public Company Views on Implementing the Sarbanes-Oxley Act To obtain information about public companies’ views on implementing Sarbanes-Oxley Act requirements, we conducted a Web-based survey of companies with market capitalization of $700 million or less and annual revenues of $100 million or less that reported to SEC that they had complied with the act’s requirements related to internal control over financial reporting. 
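The quality-control escalation used in coding the Form 15 filings above (sampled error rates at or below the threshold were accepted; years whose sampled error rate exceeded 1.5 percent received 100 percent verification) can be sketched as a simple decision rule. The 1.5 percent threshold comes from the text; the data shapes are illustrative.

```python
ERROR_THRESHOLD = 0.015  # 1.5 percent, per the verification rule described above

def needs_full_verification(errors_found, sample_size):
    """Return True when the sampled error rate exceeds the threshold,
    triggering 100 percent verification of that year's coding."""
    if sample_size == 0:
        raise ValueError("empty sample")
    return errors_found / sample_size > ERROR_THRESHOLD

def verification_plan(yearly_samples):
    """yearly_samples: mapping of year -> (errors_found, sample_size).
    Returns the set of years requiring full verification."""
    return {year for year, (errs, n) in yearly_samples.items()
            if needs_full_verification(errs, n)}
```

Under this rule, years like 1998 and 2001 in the report, where the sampled error rate exceeded 1.5 percent, would be flagged for full re-verification, while years at 0.6 percent or lower would not.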
To develop and test our questionnaire, we interviewed officials at 14 smaller public companies. We then pretested drafts of our questionnaire with 10 companies and discussed their answers and experiences with our social science survey specialists. The pretests were conducted in person and by telephone with company executives in Virginia, Maryland, New York, Connecticut, California, Georgia, and Illinois. To identify the smaller public companies eligible to participate in the survey, we analyzed company SEC filings from the Audit Analytics database. Our survey universe consisted of 591 companies that met the following five criteria: (1) $700 million or less in market capitalization as of the end of the company’s 2004 fiscal year; (2) $100 million or less in revenues as of the company’s 2004 fiscal year end; (3) completed section 404 requirements by filing related reports of management and the company’s external auditor as of August 11, 2005; (4) were not foreign companies; and (5) were not investment vehicles such as mutual funds and shell companies. Of the 591, we could not reach 168 within the survey period because we were not able to obtain e-mail addresses for the CFO or other executive. We began our Web-based survey on September 21, 2005, and included all useable responses as of November 1, 2005. We sent follow-up e-mails on three occasions to remind respondents to complete the survey. One hundred fifty-eight companies completed the survey for an overall response rate of 27 percent. Only one respondent indicated that his company was a non-accelerated filer. The low response rate raised concerns that the views of 158 respondents might not be representative of all smaller public company experiences with the Sarbanes-Oxley Act. 
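The five screening criteria above can be expressed as a single filter over company records. This is a sketch with hypothetical field names; the actual screening was performed against SEC filing data in the Audit Analytics database, not against dictionaries like these.

```python
def in_survey_universe(company):
    """Apply the five screening criteria from the survey methodology to one
    company record (a dict; field names here are illustrative)."""
    return (
        company["market_cap"] <= 700_000_000       # (1) <= $700M market cap, FY2004 end
        and company["revenue"] <= 100_000_000      # (2) <= $100M revenues, FY2004 end
        and company["filed_section_404_reports"]   # (3) management and auditor 404 reports filed
        and not company["is_foreign"]              # (4) not a foreign company
        and not company["is_investment_vehicle"]   # (5) not a mutual fund or shell company
    )
```

Applying such a conjunction of criteria yields the survey universe of 591 companies described above; any single failed criterion excludes a company.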
While we could not test this possibility for our primary questions (whether the act places a disproportionate burden on smaller companies or compromises their ability to raise capital), we did conduct an analysis to determine whether our sample differed from the population of 591 in company assets, revenue, and market capitalization and type (based on the North American Industrial Classification System code). We found no evidence of substantial non-response bias based on these characteristics. However, because of the low response rate we still could not assume that the views of the 158 respondents were representative of the views of other smaller public companies on implementing Sarbanes-Oxley Act requirements. Therefore, we do not consider these data to be a probability sample of all smaller public companies. In addition to potential non-response bias, the practical difficulties of conducting any survey may introduce other non-sampling errors. For example, differences in how a particular question is interpreted or the sources of information available to respondents may introduce errors. We took steps to minimize such non-sampling errors in both the data collection and data analysis stages. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. A second independent analyst checked all the computer analyses. Further, we used GAO’s Questionnaire Programming Language (QPL) system to create and process the Web-based survey. This system facilitates the creation of the instrument, controls access, and ensures data quality. It also automatically generates code for reading the data into SAS (statistical analysis software). This tool is commonly used for GAO studies. We used QPL to automate some processes, but also used analysts to code the open-ended questions and then had a second, independent analyst review them. (The survey contained both open- and closed-ended questions.) 
We entered a set of possible phrases, called tags, which we identified for each question, into QPL. When the analysts reviewed the text responses, they assigned the tags that best reflected the meaning of what the respondent had written. The system then compared the tags assigned by the independent reviewers. Multiple tags may be assigned to a single response; thus, it is possible for reviewers to agree on some tags and not on others. Although it is possible to have reviewers resolve their differences until agreement is reached, for this survey we only considered tags that were selected by all reviewers on the first pass. Tags assigned by only one reviewer were dropped. This process allowed a quantitative analysis of open comments made by respondents. Finally, we verified all data processing on the survey in-house and found it to be accurate. SEC and PCAOB Efforts to Address Smaller Company Concerns To address our second objective of describing SEC’s and PCAOB’s efforts related to the implementation of the act and their responses to concerns raised by smaller public companies and the accounting firms that audit them, we interviewed SEC and PCAOB staff on the rulemaking and standard-setting processes. We also interviewed public company executives, representatives of relevant trade associations, and market participants for their reactions to the agencies’ rules, guidance, and other public announcements. During the course of our review, both SEC and PCAOB held forums and other open meetings to allow public discourse on the act’s impact on public companies, accounting firms, investors, and other market participants. We attended most of these forums and open meetings and reviewed submitted comments. 
Specifically, from November 2004 to February 2006, we attended the following, either in person or via Webcast: SEC’s Advisory Committee on Smaller Public Companies open meetings; SEC’s Roundtable on Implementation of Internal Control Reporting Provisions; SEC’s Government-Business Forum on Small Business Capital Formation; PCAOB’s Standing Advisory Group Meetings; and PCAOB’s forums on auditing in the small business environment. We reviewed the guidance that SEC and PCAOB separately issued on May 16, 2005, as a result of comments received at SEC’s section 404 roundtable. Impact of Act on Smaller Privately Held Companies To determine the act’s impact on smaller privately held companies, we analyzed available research and studies. We also interviewed officials of the National Association of State Boards of Accountancy in states that required or were considering requiring privately held companies to comply with corporate accountability, governance, and financial reporting measures comparable to key provisions of the Sarbanes-Oxley Act. Further, we analyzed data and interviewed officials on whether lenders, financial institutions, private equity providers, or others were imposing the act’s requirements on privately held companies as a condition of obtaining capital or financial services. Finally, we interviewed officials and analyzed available data on whether, as a result of the act, privately held companies were voluntarily adopting key provisions of the act as best practices or whether they had faced challenges in trying to reach the public markets. To assess the impact of the act on privately held companies trying to reach the public markets, we obtained a sample from SEC’s Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, a database that includes companies’ initial public offering (IPO) and secondary public offering (SPO) filings. 
Our sample contained registration statements, pricings, and applications for withdrawal filed with SEC from 1998 through July 2005. We performed various analyses of IPO and SPO activity prior to and after enactment of Sarbanes-Oxley, including analyses of the sizes of companies coming and returning to the market, the types and amounts of IPO expenses, and the reasons cited by companies for withdrawing their IPO filings. We analyzed IPO expenses as a percentage of revenue and offering amount for companies in various size categories to determine whether the differences between the groups changed over time and whether the differences were statistically significant when controlling for other determining factors. SEC’s EDGAR database is considered the definitive source for information on IPOs because all companies issuing securities that list on the major exchanges and the OTCBB, as well as those that meet certain criteria listing on the Pink Sheets, must register the securities with SEC. Nevertheless, we crosschecked the descriptive statistics retrieved from EDGAR with NASDAQ’s IPO data. However, financial data were not available for several companies, and others did not provide information for all of the fields. In cases where revenue was left blank, we reviewed individual filings and used actual revenue, 9-month revenue, or pro forma data to determine the size of the company. In cases where these data were not available, we dropped the companies from any analysis involving the use of such data. Additionally, there can be a significant lag between the date a company initially files for an IPO with SEC and when the stock of the company is finally priced (begins trading). Because we had data on IPO filings during the last 2 months of 1997, we were able to include those companies that priced IPOs over the 1998-2005 period that initially filed for an IPO during that time. 
However, any IPOs that were priced during this time but had an initial filing that occurred prior to November 1, 1997, are not included. For this reason, the number of priced IPOs for 1998 (and to an even lesser extent 1999) may somewhat understate the actual number of companies coming to the public market during that year. This limitation is insignificant in the context of this report. Company Access to Auditing Services and Changes in Share of Public Companies That Small Firms Audit To assess changes in the domestic public company audit market, we used public data—for 2002 and 2004—on public companies and their external accounting firms to determine how the number and mix of domestic public company audit clients had changed for firms other than the large accounting firms. To be consistent with our 2003 study of the structure of the audit market, we used the Who Audits America database, a directory of public companies with detailed information for each company, including the auditor of record. Only domestic public companies traded on the major exchanges or over-the-counter with available financial data were included in our analysis of audit market concentration, and the results do not include a number of clients of the smallest audit firms. Users of our 2003 study will also note that we used the term “sales” when referring to auditor concentration but use the term “revenue” in this report. Although Who Audits America refers to sales, our conversations with the provider of the data confirmed that although the terms can be used interchangeably, “revenue” is a better term than “sales” in accurately describing the contents of the database. To verify the reliability of these data sources, we performed several checks to test the completeness and accuracy of the data. Previously, GAO crosschecked random samples of the Who Audits America database with SEC proxy filings and other publicly available information. 
Descriptive statistics calculated using the database were also compared with similar statistics from published research. Moreover, academics who worked with GAO in the past also compared random samples from Compustat, Dow-Jones Disclosure, and Who Audits America and found no discrepancies. We also crosschecked the results with estimates obtained using Audit Analytics’ audit opinion database. The results were not significantly different and confirmed the findings outlined in the body of the report. However, because of the lag in updating some of the financial information and the omission of a number of small public clients, the results should be viewed as estimates useful for describing the market for audit services. We conducted our work in California, Connecticut, Georgia, Maryland, New Jersey, New York, Virginia, and Washington, D.C., from November 2004 through March 2006 in accordance with generally accepted government auditing standards. Appendix II: Additional Details about GAO’s Analysis of Companies Going Private A number of research studies and anecdotal evidence suggest that a significant number of small companies have gone private as a result of costs associated with the increased disclosure and internal control requirements introduced by the Sarbanes-Oxley Act of 2002. To provide a better understanding of companies going private, we analyzed Form 15s filed by companies, related Securities and Exchange Commission (SEC) filings, and press releases to determine the total number of companies exiting the public market and the reasons for the change in corporate structure. See appendix I for our scope and methodology. This appendix provides additional information on the construction of our database and descriptive statistics. 
Our Database Included Firms That “Went Dark” as Well as Firms That Completely Exited the Public Market Although there is no consensus on the term “going private,” we started with the description used in the “Fast Answers” section of SEC’s Web site: a company “goes private” when it reduces the number of its shareholders to fewer than 300 (or 500 in some instances) and is no longer required to file reports with SEC. To reduce the number of holders of record, a company can undertake a number of transactions, including tender offers, reverse stock splits, and cash-out mergers. In many cases, the company already meets the requirement for deregistration, and therefore the registrant need only file a Form 15 (which notifies SEC of a company’s intent to deregister) with SEC to meet this description of “going private.” As a result, we use the terms “going private” and “deregistering” interchangeably. However, not all companies that deregister completely exit the public markets; some elect to continue trading on the less regulated Pink Sheets. Companies that deregister their shares with SEC but continue public trading on the Pink Sheets are often considered in the academic literature to have “gone dark” rather than gone private. However, our final “going private” numbers include both companies that no longer trade on any exchange and those that continue to trade on the less regulated Pink Sheets (“went dark”). It should be noted that SEC does not have rules that define “going dark,” and the term is used here as it is used in academic research. The companies contained in our database include only those companies that deregistered common stock, were no longer subject to SEC filing requirements, and were headquartered in the United States. Moreover, the database excludes most cases where the company was acquired by, or merged into, another company; filed for, or was emerging from, bankruptcy; or was undergoing or planning liquidation. 
We also excluded a significant number of companies that filed for an initial public offering and subsequently filed a Form 15 within a year; filed no annual or quarterly financials between the first filing with SEC and the Form 15; or filed as a result of a reorganization in which the company remained a public registrant. Based on the information contained on the Form 15, we were able to exclude four types of filers: (1) companies that deregistered securities other than their common stock; (2) companies that continued to be subject to public reporting requirements; (3) companies that were headquartered in a foreign country; and (4) companies for which a Form 15 could not be retrieved electronically. In addition to SEC filings, we used press releases located through Lexis-Nexis to investigate whether the companies experienced any of the disqualifying conditions (bankruptcy, merger, acquisition, liquidation, etc.). Companies that were merged into, or were acquired by, another company were only included if the transaction was initiated by an affiliate of the company (either the company filed a Schedule 13E-3 with SEC or our analysis found evidence of a “going private” transaction in the case of Over-the-Counter Bulletin Board (OTCBB) and Pink Sheet-quoted companies). Moreover, if the transaction resulted in the company becoming a subsidiary of another publicly traded company or a foreign entity, or if the transaction met any of the other disqualifying conditions, that company was excluded from our final numbers. Each Form 15 also contained the number of holders of record. We excluded all companies with one or zero holders of record unless that company also filed a Schedule 13E-3 with SEC. A test of a random sample of 200 of these companies found that merging, bankrupt, and liquidating firms typically reported one or zero as the number of holders of record. 
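Taken together, the screening rules above amount to a filter over Form 15 records. The following is a rough sketch with hypothetical field names, not the actual procedure, which also drew on press releases and related SEC filings:

```python
def counts_as_going_private(f):
    """Apply the main exclusion rules described above to one Form 15 record
    (a simplified, illustrative screen with hypothetical field names)."""
    if not f["deregistered_common_stock"]:
        return False  # filings for other security classes were excluded
    if f["still_subject_to_sec_reporting"]:
        return False
    if f["foreign_headquartered"]:
        return False
    # One or zero holders of record typically signaled a merging, bankrupt,
    # or liquidating firm; keep the company only if a Schedule 13E-3
    # (going-private transaction) was also filed.
    if f["holders_of_record"] <= 1 and not f["filed_schedule_13e3"]:
        return False
    return True

record = {
    "deregistered_common_stock": True,
    "still_subject_to_sec_reporting": False,
    "foreign_headquartered": False,
    "holders_of_record": 120,
    "filed_schedule_13e3": False,
}
print(counts_as_going_private(record))  # True
```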
Because there may have been some companies that went private by way of merger that did not file a Schedule 13E-3, our database may have excluded some companies going private as a result of using this qualifier. However, this limitation is minor in the context of this report (see app. I for additional information on data reliability). In total, these exclusions left us with 1,093 U.S. companies going private from 1998 through the first quarter of 2005 out of the 15,462 Form 15 filings initially provided to us by SEC. Consistent with Outside Studies, We Found That the Number of Companies That Went Private Increased Significantly from 2001 through 2004 The number of public companies going private increased significantly from 143 in 2001 to 245 in 2004 (see fig. 8). Based on the number of companies going private during the first quarter of 2005, we project that the number of companies going private will increase to 267 companies by the end of 2005. While these numbers constitute a small percentage of the total number of public companies, the trends we identified suggest that more small companies are reconsidering the costs and benefits of remaining public and raising capital on domestic public equity markets. As figure 8 shows, the number of companies going private increased significantly, whether or not we excluded the types of companies explicitly considered speculative investments by SEC—blank check and shell companies. Overall, these companies, identified as such by Standard Industrial Classification code, represent 17 percent of the companies going private in 2004 but just 2.5 percent of the companies going private during the first quarter of 2005 and 8.4 percent of the overall sample. A number of research reports have also found that the number of companies exiting the public market has increased since 2002. 
Although there are differences in the search methodologies and types of companies included, each study found similar trends and reached similar conclusions (see fig. 9). For example, in Leuz et al. (2004), the number of companies going dark or private increased from 144 to 313 between 2002 and 2003. Moreover, the authors found that the bulk of the increase was made up of companies that continued trading on the Pink Sheets after deregistration. Engel et al. (2004), which was based on a smaller subset of deregistering companies, found a statistically significant increase in the rate at which companies went private. Marosi and Massoud (2004) excluded all merger-related transactions and found that the number of companies going dark increased from 71 in 2002 to 127 in 2003. We Grouped Reasons for Company Decisions to Go Private into Seven Categories In analyzing company decisions, we used various sources to determine why the companies included in our database deregistered their common stock. Because companies did not always disclose the reasons for their decision in an SEC filing, we also searched press releases and newswire announcements using the Lexis-Nexis search engine. We then used the reasons given in the various filings and other media to construct seven broad categories, summarized in table 7. Because companies often gave multiple reasons for the decision to deregister (go private) and it was difficult to tell which were the most important, we allowed up to six reasons for each company included in our database. 
For example, Westerbeke Corporation went private in 2004 and cited the following reasons for the decision: “a small public float,” inability to use its stock as currency for acquisitions, benefits the company would receive as a private entity such as “greater flexibility,” the ability to make “decisions that negatively affect quarterly earnings in the short run,” and the costs and time devoted by employees and management “resulting from the adoption of the Sarbanes-Oxley Act of 2002.” This company is included in our database with the following coded reasons for going private: (1) market/liquidity issues; (2) private company benefits; (3) direct costs; and (4) indirect costs. More Companies Have Cited Costs as Reasons for Going Private Since 2002 Although companies go private for a variety of reasons, in recent years more companies have cited the direct costs of maintaining public company status as at least one of the reasons for going private. As shown in figure 10, the number of companies citing costs as at least one reason for going private increased from 64 in 2002 to 143 in 2003 and 130 in 2004. Moreover, the percentage of companies citing cost as the only reason for exiting the market has increased significantly in recent years. While only 21 companies cited costs and no other reason in 2003 (15 percent of the total citing cost), 43 did so in 2004 (33 percent of the total citing cost). During the first quarter of 2005, nearly 50 percent of the companies mentioning cost cited costs as the only reason for going private. Companies Going Private Typically Were among the Smallest of Publicly Traded Companies By any measure (market capitalization, revenue, or assets), the companies that went private over the 2004–2005 period represent some of the smallest companies in the public arena (see figs. 11 and 12). 
Because these companies were on average very small, they enjoyed limited analyst coverage and limited market liquidity—one of the primary benefits cited for going or remaining public. The median market capitalization and revenue for these companies were each less than $15 million. Figure 12 also illustrates that companies going private were disproportionately small, reflecting that the net benefits of being public were likely smallest for small firms and that the costs of complying with securities laws likely consumed a higher proportion of a smaller company’s revenue. For example, 84 percent of the companies that went private in 2004 and 2005 had revenues of $100 million or less, and nearly 69 percent had revenues of $25 million or less. We also found that a significant portion of these companies—12.5 percent of those that went private in 2004–2005—had not filed quarterly or annual financial statements with SEC in more than 2 years; therefore, we did not have access to recent financial information. Appendix III: Comments from the Securities and Exchange Commission Appendix IV: Comments from the Public Company Accounting Oversight Board Appendix V: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, Harry Medina and John Reilly, Assistant Directors; E. Brandon Booth; Michelle E. Bowsky; Carolyn M. Boyce; Tania L. Calhoun; Martha Chow; Bonnie Derby; Barbara El Osta; Lawrance L. Evans Jr.; Gabrielle M. Fagan; Cynthia L. Grant; Maxine L. Hattery; Wilfred B. Holloway; Kevin L. Jackson; May M. Lee; Kimberly A. McGatlin; Marc W. Molino; Karen V. O’Conor; Eric E. Petersen; David M. Pittman; Robert F. Pollard; Carl M. Ramirez; Philip D. Reiff; Barbara M. Roesmann; Jeremy S. Schwartz; and Carrie Watkins also made key contributions to this report.
Congress passed the Sarbanes-Oxley Act to help protect investors and restore investor confidence. While the act has generally been recognized as important and necessary, some concerns have been expressed about the cost for small businesses. In this report, GAO (1) analyzes the impact of the Sarbanes-Oxley Act on smaller public companies, particularly in terms of compliance costs; (2) describes responses of the Securities and Exchange Commission (SEC) and Public Company Accounting Oversight Board (PCAOB) to concerns raised by smaller public companies; and (3) analyzes smaller public companies' access to auditing services and the extent to which the share of public companies audited by mid-sized and small accounting firms has changed since the act was passed. Regulators, public companies, audit firms, and investors generally agree that the Sarbanes-Oxley Act of 2002 has had a positive and significant impact on investor protection and confidence. However, for smaller public companies (defined in this report as $700 million or less in market capitalization), the cost of compliance has been disproportionately higher (as a percentage of revenues) than for large public companies, particularly with respect to the internal control reporting provisions in section 404 and related audit fees. Smaller public companies noted that resource limitations and questions regarding the application of existing internal control over financial reporting guidance to smaller public companies contributed to challenges they face in implementing section 404. The costs associated with complying with the act, along with other market factors, may be encouraging some companies to become private. The companies going private were small by any measure and represented 2 percent of public companies in 2004. The full impact of the act on smaller public companies remains unclear because the majority of smaller public companies have not fully implemented section 404. 
To address concerns from smaller public companies, SEC extended the section 404 deadline for smaller companies with less than $75 million in market capitalization, with the latest extension to 2007. Additionally, SEC and PCAOB issued guidance intended to make the section 404 compliance process more economical, efficient, and effective. SEC also encouraged the Committee of Sponsoring Organizations of the Treadway Commission (COSO) to develop guidance for smaller public companies on implementing internal control over financial reporting in a cost-effective manner. COSO's guidance had not been finalized as of March 2006. SEC also formed an advisory committee to examine, among other things, the impact of the act on smaller public companies. The committee plans to issue a report in April 2006 that will recommend, in effect, a tiered approach with certain smaller public companies partially or fully exempt from section 404, "unless and until" a framework for assessing internal control over financial reporting is developed that recognizes the characteristics and needs of smaller public companies. As SEC considers these recommendations, it is essential that the overriding purpose of the Sarbanes-Oxley Act--investor protection--be preserved and that SEC assess available guidance to determine whether additional supplemental or clarifying guidance for smaller public companies is needed. Smaller public companies have been able to obtain access to needed audit services, and many moved from the largest accounting firms to mid-sized and small firms. The reasons for these changes range from audit cost and service concerns cited by companies to client profitability and risk concerns cited by accounting firms, including capacity constraints and assessments of client risk. Overall, mid-sized and small accounting firms conducted 30 percent of total public company audits in 2004--up from 22 percent in 2002. 
However, large accounting firms continue to dominate the overall market, auditing 98 percent of U.S. publicly traded company sales or revenues.
Background DOD’s Use of Dual-Hatting “Dual-hatting” is a term used to describe a position in which an incumbent officer has responsibilities in two organizations simultaneously—usually to that officer’s particular military service and to a joint, combined, or international organization or activity. DOD officials told us that dual-hatting senior leaders is a relatively common practice within DOD to help align authorities, improve mission effectiveness, and use a senior leader’s experience and expertise while balancing the scope of responsibility. Some prominent examples of dual-hatting include the Commander of U.S. Northern Command also serving as the Commander of the North American Aerospace Defense Command, and the Commander of U.S. European Command also serving as the Supreme Allied Commander Europe (that is, the commander of military operations conducted by the North Atlantic Treaty Organization). Additionally, the Air Force and Navy commanders who support U.S. European Command are dual-hatted as service component commanders for U.S. Africa Command. The Dual-Hat Leadership of NSA/CSS and CYBERCOM When the Secretary of Defense directed CYBERCOM’s establishment in 2009, he also recommended to the President that the position of Director of NSA be assigned the responsibility for leading this new command. DOD officials told us the dual-hat leadership arrangement originated to allow CYBERCOM to use NSA/CSS infrastructure and tools to carry out its mission more quickly and to establish unity of command and effort for DOD in the cyberspace domain. As the sole leader of these organizations, the dual-hatted leader is responsible for a broad set of roles and responsibilities, as outlined below in table 1. Since its establishment, CYBERCOM has operated as a sub-unified command organized under U.S. Strategic Command, and this arrangement continues as of April 2017. 
The National Defense Authorization Act for Fiscal Year 2017 included a provision directing the President to establish CYBERCOM as a unified combatant command. When the President and DOD implement this provision, CYBERCOM will no longer be organized under U.S. Strategic Command. Figure 1 below depicts the NSA/CSS and CYBERCOM leadership arrangement, as of April 2017, and where those offices fit within DOD’s organization. The figure also provides an overview of CYBERCOM’s 133 Cyber Mission Force Teams, which carry out particular parts of CYBERCOM’s mission. As CYBERCOM has matured, leaders—including Congress, the President, the Director of National Intelligence, and the current leader of NSA/CSS and CYBERCOM—have discussed the concept of ending the dual-hat leadership of the two organizations, such that one individual would lead NSA/CSS and another individual would lead CYBERCOM. Section 1642 of the National Defense Authorization Act for Fiscal Year 2017 enumerated a number of conditions that the Secretary of Defense and the Chairman of the Joint Chiefs of Staff must jointly certify before the dual-hat leadership arrangement for NSA and CYBERCOM can be terminated. While DOD officials have considered ending the dual-hat leadership arrangement, as of April 2017, the department has not decided whether to do so. DOD Components with Cybersecurity Responsibilities To establish a cybersecurity program to protect and defend DOD information and information technology, DOD has assigned some of its components and senior officials with cybersecurity responsibilities, summarized in table 2 below. Key DOD Strategic Cybersecurity Guidance DOD has issued guidance to support the implementation of its cybersecurity capabilities. Table 3 below lists key DOD strategic-level cybersecurity documents and includes a description of each document, as well as the component organization(s) primarily responsible for their implementation. 
Advantages and Disadvantages of the Dual-Hat Arrangement, and Actions That Could Mitigate Potential Risks Associated with Ending the Arrangement Officials from various DOD components identified advantages and disadvantages of the dual-hat leadership of NSA/CSS and CYBERCOM. Additionally, DOD and Congress have identified actions that could mitigate the risks associated with ending the dual-hat leadership arrangement. As of March 2017, DOD officials informed us that DOD had not determined whether it would end the dual-hat leadership arrangement and was reviewing the steps and funding necessary to meet the requirements established in the law. Advantages and Disadvantages of the Dual-Hat Leadership Arrangement According to officials, DOD does not have an official position on the advantages and disadvantages of the dual-hat leadership arrangement of NSA/CSS and CYBERCOM. Through responses to interviews and questionnaires, officials from DOD components provided their perspectives on the advantages and disadvantages associated with the dual-hat leadership arrangement as summarized in table 4, below. DOD Components and Congress Have Identified Actions That Could Mitigate Potential Risks Associated with Ending the Dual-Hat Leadership Arrangement Actions Identified by DOD Component Officials to Mitigate Potential Risks Associated with Ending the Dual-Hat Leadership Arrangement In response to the National Defense Authorization Act for Fiscal Year 2017, President Obama supported elevating CYBERCOM to a unified combatant command and stated that NSA/CSS and CYBERCOM should have separate leaders who are able to devote themselves to each organization’s respective missions and responsibilities, but who should continue to leverage the shared capabilities and synergies developed under the dual-hat arrangement. 
As of April 2017, DOD officials told us that the department supports elevating CYBERCOM to a unified combatant command but recognizes that there are potential risks in ending the dual-hat leadership arrangement. Prior to the passage of the National Defense Authorization Act for Fiscal Year 2017, DOD components, such as CYBERCOM and the Joint Staff, had already developed internal lists of conditions and prerequisites that could mitigate risks prior to ending the dual-hat leadership arrangement. These considerations were presented to senior leadership within the respective components to help inform their positions on ending the dual-hat leadership arrangement. According to DOD officials, discontinuing the dual-hat arrangement would require DOD to put the necessary policies and processes in place to continue the mutually beneficial partnership between NSA/CSS and CYBERCOM. Specifically, the arrangement, in conjunction with support agreements, has enabled CYBERCOM to leverage the capability development, personnel, facilities, infrastructure, testing capabilities, and business processes of NSA/CSS to support CYBERCOM operations. DOD officials also cited the potential for less communication between CYBERCOM and NSA/CSS and slower decision-making if the leadership arrangement were ended. Table 5 below lists the various actions reported to us by officials from DOD components that could mitigate risks associated with ending the dual-hat command structure of NSA/CSS and CYBERCOM, as well as the status of these actions, as of March 2017. According to DOD officials, many of these factors relate as much to the growth and maturation of CYBERCOM as they do to the dual-hat status. 
Separate from actions identified by select DOD components, Congress has requested information from the department and required it to meet specific conditions and to certify that the termination of the dual-hat arrangement, if pursued by DOD, will not pose risks to the military effectiveness of CYBERCOM that are unacceptable to the national security interests of the United States. Specifically, House Report 114-537, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, and House Report 114-573, accompanying a bill for the Intelligence Authorization Act for Fiscal Year 2017, directed the Secretary of Defense to provide the House defense and intelligence committees with a briefing and an assessment of the dual-hat command by November 1, 2016. According to the committee direction, DOD was to address the following: (1) the roles and responsibilities, including intelligence authorities, of each organization; (2) an assessment of the current impact of the dual-hat relationship, including both advantages and disadvantages; (3) recommendations on courses of action for separating the dual-hat command relationship between the Commander of CYBERCOM and the Director of NSA/Chief of CSS, if appropriate; (4) suggested timelines for carrying out such courses of action; and (5) recommendations for legislative actions, as necessary. DOD did not perform this briefing and assessment. Performing this assessment would have provided DOD with an opportunity to articulate its perspectives on the advantages and disadvantages of the dual-hat leadership arrangement. Further, it would have allowed DOD to present its preferred course of action in relation to separating the leadership of NSA/CSS and CYBERCOM. DOD officials told us that they believed the assessment they were directed to provide to the House defense and intelligence committees in November 2016 was no longer necessary, based on Section 1642 of the National Defense Authorization Act for Fiscal Year 2017. 
However, while the act did not include the same briefing and assessment requirement identified in the two House reports, it also did not cancel the committee direction. In addition, the briefing requirement still exists, as evidenced by the explanatory statement accompanying the Intelligence Authorization Act for Fiscal Year 2017, for DOD to brief and provide an assessment of the dual-hat leadership arrangement. According to Section 1642 of the National Defense Authorization Act for Fiscal Year 2017, DOD cannot terminate the dual-hat leadership arrangement until the Secretary of Defense and the Chairman of the Joint Chiefs of Staff jointly certify that the termination will not pose risks to the military effectiveness of CYBERCOM that are unacceptable to the national security interests of the United States. Section 1642 also requires DOD to conduct an assessment that, among other things, evaluates CYBERCOM's operational dependence on NSA and evaluates each organization's ability to carry out its roles and responsibilities independently. In addition, Section 1642 requires DOD to determine whether the following conditions have been met before deciding to end the dual-hat leadership arrangement:

- Robust operational infrastructure has been deployed that is sufficient to meet the unique cyber mission needs of CYBERCOM and NSA, respectively.
- Robust command and control systems and processes have been established for planning, deconflicting, and executing military cyber operations.
- The tools and weapons used in cyber operations are sufficient for achieving required effects.
- Capabilities have been established to enable intelligence collection and operational preparation of the environment for cyber operations.
- Capabilities have been established to train cyber operations personnel, test cyber capabilities, and rehearse cyber missions.
- The cyber mission force has achieved full operational capability.
Office of the Under Secretary of Defense for Intelligence and Joint Staff officials told us that they regularly discuss matters related to ending the dual-hat leadership arrangement of NSA/CSS and CYBERCOM. However, as of April 2017, DOD's senior leaders had not decided whether the dual-hat leadership arrangement should be ended, and the department was reviewing the steps and funding necessary to meet the statutory requirement of Section 1642.

DOD's Implementation of Key Strategic Cybersecurity Guidance Reflects Varied Progress

DOD's implementation of key strategic cybersecurity guidance—the DOD Cloud Computing Strategy, The DOD Cyber Strategy, and the DOD Cybersecurity Campaign—to help manage and focus its cybersecurity efforts has varied. The department has implemented the cybersecurity objectives identified in the DOD Cloud Computing Strategy, and it has made progress in implementing The DOD Cyber Strategy and the DOD Cybersecurity Campaign. However, the department's process for monitoring implementation of The DOD Cyber Strategy has allowed tasks to be closed as implemented before they were fully implemented. In addition, the DOD Cybersecurity Campaign lacked timeframes for completion and a process to monitor progress, which together would provide accountability for ensuring implementation.

DOD Has Implemented the Four Cybersecurity Objectives of the 2012 DOD Cloud Computing Strategy

DOD has implemented the four cybersecurity objectives of the 2012 DOD Cloud Computing Strategy. In July 2012, the DOD CIO issued the DOD Cloud Computing Strategy, which laid the groundwork for accelerating cloud adoption in the department, consistent with the Federal Cloud Computing Strategy. The DOD Cloud Computing Strategy includes four objectives aimed at enhancing the department's cybersecurity, as listed in table 6 below, along with DOD's status in implementing the objectives.
In March 2016, DOD issued the DOD Cloud Computing Security Requirements Guide, which outlines the security controls and requirements necessary for using cloud-based solutions. According to DOD officials, the Cloud Computing Security Requirements Guide is the basis for authorizing commercial cloud service providers in the DOD environment and is closely aligned with the Federal Risk and Authorization Management Program—the fundamental cloud approval process for the federal government. This guide establishes a standardized infrastructure for cloud-based services, continuous monitoring, and identity and access management. According to DOD CIO officials, DOD has approved more than 50 commercial cloud networks for various levels of use based on this guidance. Additionally, in October 2016, DOD finalized the Defense Federal Acquisition Regulation Supplement interim rule on network penetration reporting and contracting for cloud services, which further standardized infrastructure requirements for cloud service providers. In April 2017, DOD also submitted a Data Center Optimization Strategic Plan, as required by the Office of Management and Budget, which lays out the number of data centers DOD expects to close through fiscal year 2018 as well as the estimated cost savings associated with those closures. The plan DOD submitted shows that the department closed more than 150 data centers in fiscal year 2016 and planned to close more over the following 2 years, which would help reduce network seams through network and data center consolidation.

DOD Has Taken Some Actions on All Cybersecurity Tasks Supporting The DOD Cyber Strategy, but the Current Process for Monitoring Implementation Limits Oversight of Tasks to Completion

DOD has taken some actions on all 22 cybersecurity-related tasks identified in The DOD Cyber Strategy, although it has closed some tasks before they were fully implemented.
The purpose of The DOD Cyber Strategy, issued in April 2015, is to guide the development of DOD's cyber forces and strengthen its cyber defense and cyber deterrence postures. This strategy, according to DOD, presents an aggressive, specific plan for leaders from across the department to take action and hold their organizations accountable for achieving the strategy's objectives. The DOD Cyber Strategy sets prioritized strategic goals and objectives for DOD's cyber activities and missions to achieve over the ensuing five years (that is, through 2020). The Office of the Under Secretary of Defense for Policy; the Office of the Principal Cyber Advisor to the Secretary of Defense; the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Joint Staff work with the DOD components to prioritize and oversee the implementation of this strategy and its objectives and to assign responsibility for managing each objective. In a June 2015 memorandum, then Secretary of Defense Ashton Carter identified the implementation of The DOD Cyber Strategy as one of his top priorities and stated that the department should ensure that the outcomes articulated in the strategy were achieved. DOD has taken actions on all 22 of the tasks associated with the cybersecurity goal of the strategy that focuses on network defense, mission assurance, and security of the defense industrial base. For example, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Office of the Under Secretary of Defense for Intelligence formed a working group—the joint acquisition protection and exploitation cell—that links intelligence, counterintelligence, and law enforcement agents with acquisition program managers to prevent and mitigate data loss and theft.
This working group ensures that federal acquisition rules and guidance mature over time in a manner consistent with standards, and it establishes an analysis capability to improve protection of controlled technical information and other critical information department-wide. In another example, DOD has adopted activities that include both regulatory and voluntary programs to improve the cybersecurity of defense industrial base companies. Companies maintaining covered defense information or providing critical support are now contractually required to report cyber incidents and to use security standards identified in a National Institute of Standards and Technology special publication on protecting controlled unclassified information in nonfederal information systems. In addition, in October 2015, DOD modified eligibility criteria for participants in the defense industrial base cybersecurity information sharing program. Since this revision, DOD officials reported that program participation has expanded from 124 to 207 industry partners. While DOD has taken some actions on all 22 cybersecurity tasks, we found that it has closed some tasks before they were fully implemented. This increases the risk that DOD will not fully implement those tasks and that leadership will not be aware of delays or complications in fully implementing The DOD Cyber Strategy. Specifically, the Principal Cyber Advisor has closed tasks once that office confirms that the DOD component primarily responsible for implementation has begun taking action on the tasks and that follow-on work to complete the tasks has been integrated into existing DOD processes, operations, or policies. For example, DOD closed the task that required the department to assess the cybersecurity of current and future weapon systems.
According to The DOD Cyber Strategy, DOD is to assess and initiate cybersecurity improvements for existing weapon systems; mandate cybersecurity requirements for future weapon systems; and update acquisition and procurement policies to promote effective cybersecurity. The Deputy Principal Cyber Advisor approved closing this task in The DOD Cyber Strategy monitoring process when the Under Secretary of Defense for Acquisition, Technology, and Logistics submitted a plan to Congress that was required by a provision in the National Defense Authorization Act for Fiscal Year 2016 and established a process to develop cybersecurity requirements for future weapon systems. In response to both The DOD Cyber Strategy task and the provision in the National Defense Authorization Act for Fiscal Year 2016, DOD—under the leadership of the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics—initiated assessments of its existing weapon systems. In addition to initiating these assessments and establishing a process to develop cybersecurity requirements for future weapon systems, the office updated acquisition and procurement policies to promote effective cybersecurity by adding an enclosure, entitled Cybersecurity in the Defense Acquisition System, to its acquisition guidance. All three of these efforts demonstrate that DOD has taken actions toward implementing this one task. However, we found that the task had not been fully implemented. Officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics acknowledged that the task had not been fully implemented but stated that the office was on schedule to complete the initial 136 assessments by December 31, 2019, as required by statute. Similarly, DOD has made progress toward establishing cybersecurity requirements for future weapon systems, but the effort is not complete.
In January 2017, the Joint Staff issued a memorandum that established a process requiring weapon systems to incorporate cyber resilience as they are designed and built. The process is undergoing a one-year trial period and is scheduled to be reassessed in 2018. In addition, once tasks have been closed by the Principal Cyber Advisor, DOD does not continuously monitor task implementation; rather, monitoring occurs on a case-by-case basis. For example, officials from the Office of the Principal Cyber Advisor told us that the office approved the closure of a task to enhance the protection of critical acquisition programs and technology on the condition that the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics provide regular reports on progress, in order to allow the Principal Cyber Advisor to track progress and provide assistance when necessary. Further, officials from the Office of the Principal Cyber Advisor told us that the office reserves the right to re-open or initiate reviews of closed tasks; as of April 2017, the office had not re-opened any closed tasks. We have previously determined that DOD components do not consistently implement cybersecurity-related actions that are identified in DOD directives, instructions, or memorandums from senior DOD officials. For example, in 2014 we reported on DOD's continuity planning and cyber resiliency efforts. DOD closed a related cyber resiliency task identified in The DOD Cyber Strategy after the department issued interim guidance for incorporating cyber resilience into DOD component continuity of operations plans and directed all of its components to establish or update their continuity of operations plans to include cyber resiliency measures by December 2017. While issuing a memorandum may have initiated the process, the task has not been fully implemented.
The Office of Management and Budget’s Management Responsibility for Enterprise Risk Management and Internal Control provides guidance for management to identify risks and establish internal controls, as appropriate, to provide reasonable assurance that objectives are achieved and discusses the responsibility to continuously monitor the effectiveness of those internal controls. Standards for Internal Control in the Federal Government explains that in defining objectives management should clearly define what is to be achieved, how it will be achieved, and the timeframes for achievement. Further, the standards state that ongoing monitoring should be built into an entity’s operations and be performed continually. Based on these internal control standards, DOD’s process is not sufficient to ensure the completion or implementation of tasks. Unless DOD modifies its process for deciding whether a task identified in The DOD Cyber Strategy is implemented, the department may not be able to ensure that the outcomes articulated in the strategy are achieved. DOD Has Made Progress in Implementing the DOD Cybersecurity Campaign but Does Not Have Timeframes and Monitoring DOD has made some progress in implementing the seven objectives required by the DOD Cybersecurity Campaign; however, the department does not have established timeframes for achieving full implementation. In June 2015, the DOD CIO; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Commander of CYBERCOM initiated a DOD Cybersecurity Campaign to identify specific actions that drive commanders and DOD senior leaders to enforce full cybersecurity compliance and accountability across the department. According to the DOD Cybersecurity Campaign, its goals are to educate commanders, civilian leaders, and all personnel responsible for the cybersecurity of the DOD information network on the risk to the mission. 
The three senior DOD leaders identified seven objectives to enable commanders and DOD senior leaders to enforce full cybersecurity compliance and accountability across the department. Table 7 below lists each of these objectives and shows our determination of the status of their implementation. As noted in the table above, DOD has implemented two of the objectives identified in the DOD Cybersecurity Campaign. Specifically, in February 2017, the DOD Information Security Risk Management Committee finalized a charter establishing a Platform Information Technology Cybersecurity Working Group to focus on the cybersecurity of DOD platform information technology systems, including but not limited to weapon systems and industrial control systems. According to the charter, this working group—chaired by the DOD CIO; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Commander of CYBERCOM—will provide expertise on the cybersecurity of platform information technology systems across DOD. A working group official explained that platform information technology currently applies to special purpose systems controlled and operated solely by technology, including industrial control systems. The department has also implemented the DOD Cybersecurity Campaign objective to develop a framework for a defensive cyberspace operations concept of operations that integrates defensive cyberspace operations and DOD information operations across the DOD information network forces. Specifically, in March 2017, the Commander of CYBERCOM approved operational guidance titled Defensive Cyberspace Operations. The operational guidance is applicable to CYBERCOM, all of its supporting elements, and all DOD components performing defensive cyberspace operations. Among other things, the guidance requires defensive cyberspace operations to be integrated with other cyberspace operations and information-related activities.
An official from CYBERCOM told us that the operational guidance grew out of the work to develop a concept of operations and was intended to address DOD's need for guidance on defensive cyberspace operations. DOD has also developed processes and procedures to monitor the implementation of four of the five DOD Cybersecurity Campaign objectives that are in the process of being implemented. Specifically, the DOD Cybersecurity Culture and Compliance Initiative monitors the department's efforts in implementing two DOD Cybersecurity Campaign objectives: (1) execute priority initiatives for individual cybersecurity awareness, and (2) develop and implement a program to reinforce the traits and attributes of a healthy cybersecurity culture. In addition, DOD uses the Cybersecurity Scorecard to monitor two other DOD Cybersecurity Campaign objectives that are in progress—the development of a DOD Cybersecurity Scorecard and the execution of the DOD Cybersecurity Discipline Implementation Plan. According to DOD CIO officials, the Cybersecurity Scorecard has allowed for better oversight of DOD components' implementation of key cybersecurity measures and provides a forum for elevating issues to the Secretary of Defense. However, the scorecard is not fully implemented throughout the DOD components, and DOD continues to work on automating data collection to improve the data's reliability. Additionally, recognizing the importance of resource prioritization, DOD CIO officials told us that the next phase is to move to a risk-based scorecard, which DOD expects to have implemented by March 2019. While DOD has taken steps to implement the DOD Cybersecurity Campaign objectives that are still in process, the department does not know when it will achieve full implementation of one of the objectives, because it has not established a timeframe for completing the objective or a process for monitoring it to help ensure accountability.
Specifically, the department does not have timeframes for the objective associated with transitioning to commander-driven operational risk assessments for cybersecurity readiness. DOD has begun implementing the objective to shift the focus of its existing Command Cyber Readiness Inspection process to an operational cybersecurity readiness assessment. Specifically, the Defense Information Systems Agency and the Joint Force Headquarters-DOD Information Network are leading an effort to transition the department from a compliance-based Command Cyber Readiness Inspection process to an operational risk-based inspection focused on missions, vulnerabilities, and threats, currently referred to as the Command Cyber Operational Readiness Inspection process. This transition is the initial phase of CYBERCOM's broader Command Cyber Readiness Inspection improvement initiative. According to the Joint Force Headquarters-DOD Information Network, the results of the new inspection process will be expressed in terms of risk to the mission and the department's information network, unlike the previous readiness inspection process, which expressed results as pass or fail. DOD officials indicated that the new operational risk assessments will better allow DOD components to relate their cyber vulnerabilities to their missions. The department has piloted the new process in three organizations; however, DOD has not established a timeframe for implementation or identified a process to hold DOD leaders accountable for implementing these assessments across the department. The Office of Management and Budget's Management Responsibility for Enterprise Risk Management and Internal Control requires agencies to implement risk management in coordination with a number of internal control processes, including those contained in Standards for Internal Control in the Federal Government.
Standards for Internal Control in the Federal Government highlights the need to (1) define objectives in specific terms, including how objectives are to be achieved and timeframes for their achievement; and (2) enforce accountability by evaluating performance and holding organizations accountable. Until the DOD CIO; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Commander of CYBERCOM establish a timeframe for completing, and a process for monitoring, the objective associated with cybersecurity readiness assessments, DOD may be unable to assess its progress in achieving the objective or to determine when it will achieve the strategic goals and objectives of the DOD Cybersecurity Campaign.

DOD Has Implemented Fifteen of Twenty-seven Cybersecurity Recommendations from Prior GAO Reports

As of March 2017, DOD had implemented 15 of the 27 cybersecurity recommendations (56 percent) we made in fiscal years 2011 through 2016. DOD is continuing to take actions to address 11 open recommendations, and 1 recommendation has been closed as not implemented. DOD's 56 percent implementation rate is slightly lower than the government-wide 60 percent rate for implementing recommendations aimed at improving the security of federal systems and information. Table 8 below shows our analysis of the implementation status of the 27 cybersecurity recommendations.

DOD Has Implemented More Than Half of Prior GAO Recommendations

As of March 2017, DOD had implemented 15 of the 27 cybersecurity recommendations (56 percent) we made in fiscal years 2011 through 2016. Among them are the following:

Cyberspace Activities. In July 2011, we reported on DOD's organization and planning of cyberspace operations, including its defensive and offensive efforts to address cybersecurity threats.
We found that DOD lacked clear and complete guidance on command and control responsibilities and did not have a comprehensive approach to assess its cyberspace capability needs and prioritize capability gaps. We made 4 recommendations to strengthen DOD's cyberspace doctrine and operations to better address cybersecurity threats and ensure a more comprehensive approach to developing and prioritizing the department's cyberspace capability needs. DOD has implemented those 4 recommendations. In 2011, DOD updated its guidance related to cyberspace command and control relationships. In May 2012, DOD issued a capability gap assessment memorandum that includes DOD cyberspace capability gaps, proposed mitigation actions, and estimated completion dates. In February 2013, DOD issued a joint doctrine publication on cyberspace operations. These actions allowed the department to take a more comprehensive approach to its cyberspace capabilities and to clarify its cyberspace command and control relationships.

Cyberspace Budget Estimates. In July 2011, we reported that DOD's cybersecurity budget estimates did not include all full-spectrum cyberspace operations, including computer network attack, computer network exploitation, and classified funding costs. The department also lacked a central organization or a methodology for collecting and compiling budget information on cyberspace operations. We made 2 recommendations to improve DOD's ability to develop and provide consistent and complete budget estimates for its cyberspace operations. DOD implemented these recommendations by issuing new guidance for cyberspace operations budget submissions. The new guidance documents enabled DOD to develop a single cyberspace operations budget estimate that provides a complete picture of its cyberspace operations investments.

Small Business Cybersecurity Efforts.
In September 2015, we recommended that DOD identify and disseminate cybersecurity resources to defense small businesses, because DOD's Office of Small Business Programs had not done so. In their response to the draft report, DOD officials stated that the department would implement training events and education programs. Since then, DOD has implemented this recommendation by creating a reference guide for its workforce to use when engaging with small businesses. These steps better position defense small businesses to protect their information and networks from cyber threats.

DOD Has Not Yet Implemented Critical and High-Priority Cybersecurity Recommendations

DOD has not yet implemented 12 recommendations we have made to address cybersecurity weaknesses and strengthen its cyberspace posture. These 12 recommendations include 6 that address critical issues identified by The DOD Cyber Strategy and that we also previously identified as priorities for implementation. Among the recommendations not yet implemented (open, or closed as not implemented) are the following:

Continuity of Operations. In April 2014, we found that some DOD components had not developed continuity plans, conducted continuity exercises, or established oversight to hold the components accountable. Therefore, we made 4 recommendations to strengthen DOD's cyber continuity program by (1) updating DOD's continuity guidance; (2) providing planning tools for exercises with cyber degradation; (3) increasing oversight of the components; and (4) evaluating its process for tasking the components and evaluating their continuity readiness. DOD concurred with and subsequently took actions to begin implementing the 4 recommendations; however, it has not fully implemented them. For example, DOD drafted an update to its defense continuity policy guidance, but as of January 2017, the revisions had not been completed.
Without this guidance, it will be difficult for the DOD components to provide reasonable assurance that the systems and networks needed to maintain continuity of operations in a degraded cyber environment will be reliable, accessible, or available within needed timeframes.

Insider Threat. In June 2015, we made 4 recommendations to address challenges with DOD's insider threat program—DOD concurred with two and partially concurred with two. DOD is developing an insider threat implementation plan to address two of the recommendations, but that plan has not yet been published. DOD officials told us that the department is no longer taking action to address our recommendation to evaluate the extent to which its insider-threat programs address capability gaps. This recommendation originated in part because DOD had not completed a continuing analysis of gaps in security measures and of the technology, policies, and processes needed to increase the capability of its insider-threat program to address these gaps, as required by statute. This analysis would have allowed DOD to define existing insider-threat program capabilities; identify gaps in security measures; and advocate for the technology, policies, and processes necessary to increase capabilities in the future. In their comments on the draft report, DOD officials stated that the department analyzes security gaps each quarter through its self-assessments, which identify program capability gaps. However, DOD has not evaluated and documented the extent to which the current assessments describe existing insider-threat program capabilities, as required by law. Without a documented evaluation, the department will not know whether its capabilities to address insider threats are adequate, or whether those capabilities address statutory requirements.

Defense Civil Support.
In April 2016, we found that DOD's guidance did not clearly define the roles and responsibilities of key DOD entities—such as DOD components—for domestic cyber incidents. For example, U.S. Northern Command's Defense Support of Civil Authorities response concept plan states that U.S. Northern Command would be the supported command for a mission to support civil authorities in responding to a domestic cyber incident. However, other guidance directs, and DOD officials confirmed, that a different command, CYBERCOM, would be responsible for supporting civil authorities in the event of a domestic cyber incident. Therefore, we recommended that DOD issue or update guidance that clarifies roles and responsibilities for supporting civil authorities in a domestic cyber incident. DOD concurred with this recommendation. As of April 2017, the department had not implemented this recommendation, but officials indicated that they are in the process of drafting guidance that will clarify these roles. Specifically, the department is drafting a memorandum on defense support for cyber incident response that DOD officials believe will clearly articulate how DOD would support domestic cyber incident response efforts. DOD has also scheduled exercises and a workshop to help it prepare to support civil authorities in the event of a cyber incident. However, until DOD clarifies the roles and responsibilities of its key entities for cyber incidents, it will remain unclear which DOD component or command should provide support to civil authorities in the event of a major cyber incident. We continue to believe that implementing these 12 recommendations would improve DOD's cyberspace posture. We will continue to monitor DOD's implementation of these recommendations, paying particular attention to the 6 high-priority recommendations that have not yet been implemented.
Appendix I lists each report issued from fiscal years 2011 through 2016 that included recommendations for DOD, along with each recommendation's implementation status.

Conclusions

DOD continues to face complex and evolving cyberspace threats to its networks and information. It has taken actions to implement the tasks and objectives from the DOD Cloud Computing Strategy, The DOD Cyber Strategy, and the DOD Cybersecurity Campaign. However, gaps in the department's processes for monitoring implementation of this guidance limit DOD's ability to monitor the status of, and hold organizations accountable for, implementing key cybersecurity actions—such as its goal to identify, prioritize, and defend its most important networks and data so that it can carry out its missions effectively. DOD has made progress in implementing the seven objectives of the DOD Cybersecurity Campaign, but it does not know when it will achieve full implementation of one of the five remaining objectives. DOD's continuing progress is highlighted by CYBERCOM's recent release of defensive cyber operations guidance in accordance with one of the objectives of the DOD Cybersecurity Campaign. Addressing the gaps in DOD's plans and timeframes for completing the remaining objective will help DOD find and fix any root causes of cybersecurity weaknesses. Failure to implement this objective makes DOD vulnerable to cyber threats that may negatively affect mission readiness and could hinder mission accomplishment.
Recommendations for Executive Action

To ensure that DOD implements the tasks and objectives of key cybersecurity guidance to strengthen its cybersecurity posture, we recommend that the Secretary of Defense take the following two actions:

- Direct the Principal Cyber Advisor to modify the criteria for closing tasks from The DOD Cyber Strategy to reflect whether tasks have been implemented, and to re-evaluate tasks that have previously been determined to be completed to ensure that they meet the modified criteria.
- Direct the Commander of CYBERCOM, in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD CIO, to establish a timeframe for, and monitor implementation of, the DOD Cybersecurity Campaign objective to develop cybersecurity readiness assessments, to help ensure accountability.

Agency Comments and Our Evaluation

We provided a draft of our report to DOD for review and comment. In its written comments, DOD partially concurred with both of our recommendations. DOD's written comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report where appropriate. DOD partially concurred with our recommendation to modify the criteria for closing tasks from The DOD Cyber Strategy to reflect whether tasks have been implemented and to re-evaluate tasks that have previously been determined to be completed to ensure that they meet the modified criteria. The department stated that it has a robust process in place to ensure that tasks are normalized within appropriate processes, operations, and/or policies. DOD stated that it will implement internal control standards to periodically reassess closed tasks and that the department will re-evaluate the word "closed" as it relates to enduring activities that have active efforts ongoing across the department.
If DOD implements these actions, it will help ensure that the department monitors the status of these cybersecurity tasks to completion and will meet the intent of our recommendation. DOD partially concurred with our recommendation to establish timeframes and monitor implementation of the DOD Cybersecurity Campaign objectives related to readiness assessments and a defensive cyberspace concept of operations to help ensure accountability. The department stated that CYBERCOM will coordinate with the necessary components to develop timelines for implementing these objectives. Further, the DOD CIO and the Under Secretary of Defense for Acquisition, Technology, and Logistics will monitor the status of these objectives to help ensure accountability. If DOD takes the actions it outlined, it will meet the intent of our recommendation. Because CYBERCOM provided us a copy of its recently published defensive cyber operations guidance before the completion of our audit, we adjusted our recommendation to omit reference to a defensive cyberspace concept of operations. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, DOD's Deputy Principal Cyber Advisor, the Commander of CYBERCOM, the Acting Under Secretary of Defense for Acquisition, Technology, and Logistics, and DOD's Acting Chief Information Officer. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Appendix I: Status of Cybersecurity Recommendations Made to the Department of Defense (DOD), Fiscal Years 2011 through 2016 Table 9 below summarizes the status of the 27 cybersecurity recommendations we made to DOD in 10 reports issued from fiscal years 2011 through 2016. We classify each recommendation as implemented, open, or not implemented. Open and not-implemented recommendations are those for which the agency has not yet taken sufficient steps toward implementation: open recommendations are those the agency is still working to implement, while not-implemented recommendations are those on which DOD is no longer taking action. The recommendations are listed by report. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Tommy Baril, Assistant Director; Tracy Barnes; John Beauchamp; Lon Chin; Pamela Davidson; Ashley Houston; Jason Kelly; Amie Lesser; Randy Neice; and Cheryl Weissman made key contributions to this report. Related Unclassified GAO Products Defense Department Cyber Efforts: Definitions, Focal Point, and Methodology Needed for DOD to Develop Full-Spectrum Cyberspace Budget Estimates. GAO-11-695R. Washington, D.C.: July 29, 2011. Defense Department Cyber Efforts: DOD Faces Challenges In Its Cyber Activities. GAO-11-75. Washington, D.C.: July 25, 2011. Defense Department Cyber Efforts: More Detailed Guidance Needed to Ensure Military Services Develop Appropriate Cyberspace Capabilities. GAO-11-421. Washington, D.C.: May 20, 2011. Defense Cyber Efforts: Management Improvements Needed to Enhance Programs Protecting the Defense Industrial Base from Cyber Threats. GAO-12-762SU. Washington, D.C.: August 3, 2012. Defense Cybersecurity: DOD Needs to Better Plan for Continuity of Operations in a Degraded Cyber Environment and Provide Increased Oversight. GAO-14-404SU. Washington, D.C.: April 1, 2014.
Insider Threats: DOD Should Strengthen Management and Guidance to Protect Classified Information and Systems. GAO-15-544. Washington, D.C.: June 2, 2015. Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015. Defense Cybersecurity: Opportunities Exist for DOD to Share Cybersecurity Resources with Small Businesses. GAO-15-777. Washington, D.C.: September 24, 2015. Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016. Defense Civil Support: DOD Needs to Identify National Guard's Cyber Capabilities and Address Challenges in Its Exercises. GAO-16-574. Washington, D.C.: September 6, 2016.
DOD acknowledges that malicious cyber intrusions of its networks have negatively affected its information technology systems, and that adversaries are gaining capability over time. In 2010, the President re-designated the director of the NSA as CYBERCOM's commander, establishing a dual-hat leadership arrangement for these agencies with critical cybersecurity responsibilities. House Reports 114-537 and 114-573 both included provisions for GAO to assess DOD's management of its cybersecurity enterprise. This report, among other things, examines (1) DOD officials' perspectives on the advantages and disadvantages of the dual-hat leadership arrangement of NSA/CSS and CYBERCOM, and actions that could mitigate risks if the leadership arrangement ends, and (2) the extent to which DOD has implemented key strategic cybersecurity guidance. GAO analyzed DOD cybersecurity strategies, guidance, and information and interviewed cognizant DOD officials. Officials from Department of Defense (DOD) components identified advantages and disadvantages of the “dual-hat” leadership of the National Security Agency (NSA)/Central Security Service (CSS) and Cyber Command (CYBERCOM) (see table). Also, DOD and congressional committees have identified actions that could mitigate risks associated with ending the dual-hat leadership arrangement, such as formalizing agreements between NSA/CSS and CYBERCOM to ensure continued collaboration, and developing a persistent cyber training environment to provide a realistic, on-demand training capability. As of April 2017, DOD had not determined whether it would end the dual-hat leadership arrangement. DOD's progress in implementing key cybersecurity guidance—the DOD Cloud Computing Strategy, The DOD Cyber Strategy, and the DOD Cybersecurity Campaign—has varied. DOD has implemented the cybersecurity elements of the DOD Cloud Computing Strategy and has made progress in implementing The DOD Cyber Strategy and the DOD Cybersecurity Campaign.
However, DOD's process for monitoring implementation of The DOD Cyber Strategy has resulted in the closure of tasks before they were fully implemented; for example, DOD closed a task that, among other things, would require completing cyber risk assessments on 136 weapon systems. Officials stated that they are on track to complete the assessments by December 31, 2019, but as of May 2017 the task was not complete. Unless DOD modifies its process for deciding whether a task identified in its Cyber Strategy is implemented, it may not be able to achieve the outcomes articulated in the strategy. Also, DOD lacks a timeframe and process for monitoring implementation of the DOD Cybersecurity Campaign objective to transition to commander-driven operational risk assessments for cybersecurity readiness. Unless DOD improves its monitoring of these key cyber strategies, it will not know when it has achieved the cybersecurity posture they call for.
Background FLTFA, commonly called the “Baca Act,” provides for the use of revenue from the sale or exchange of BLM land identified for disposal under land use plans in effect as of the date of its enactment—July 25, 2000. The act does not apply to land identified for disposal after its enactment, such as through a land use plan amendment approved after that date. Revenue generated under FLTFA is available to the Secretaries of Agriculture and of the Interior for acquiring inholdings within certain federally designated areas, or land adjacent to those areas and containing exceptional resources, and for administrative and other expenses necessary to carry out the land disposal program under FLTFA. To implement FLTFA, BLM has designated a program lead realty specialist in headquarters, in each state office involved, and in each field office within those states. The program lead duties are sometimes split between land and realty staff who specialize in sales and others who specialize in the acquisition process. In addition, to facilitate the use of FLTFA funds for acquisition, the other three agencies sharing in the revenue, the Forest Service, the Park Service, and the Fish and Wildlife Service, have also designated realty staff to participate in interagency groups to decide on acquisitions in each BLM state. BLM manages the FLTFA account through its Division of Business Services. Federal Land Sales Authorities and Process Although FLTFA authorizes proceeds from eligible land sales and exchanges to be used in acquiring land, it does not provide any new sales authority. The sales authority, as stated in FLTFA, is provided by the Federal Land Policy and Management Act of 1976 (FLPMA). FLPMA authorizes the Secretary of the Interior to dispose of certain federal lands—through sale and exchange, among other disposal methods—and authorizes the Secretaries of Agriculture and of the Interior to acquire certain nonfederal lands.
FLPMA also authorizes the Secretary of Agriculture to exchange land. FLPMA requires the Secretary of the Interior to develop land use plans to determine which lands are eligible for disposal and acquisition. The level of specificity differs in land use plans, from describing general areas to naming specific parcels. In developing these land use plans, agencies must work closely with federal, state, and local governments and allow for public participation. Land use plans are typically revised every 15 to 20 years to address changing land use conditions in the area covered. Sales and acquisitions must comply with requirements of FLPMA and other applicable laws, which can require, among other things, an assessment of the environmental impacts of the proposed land transaction, assessment of natural and cultural resources, preparation of appraisals, and public involvement. Furthermore, with regard to land sales specifically, FLPMA requires that land be sold at the appraised fair market value or higher. Although BLM policy states that competitive sales are preferred when a number of parties are interested in bidding on a parcel for sale, regulations for the FLPMA land sales authority provide for other methods of sale when certain criteria are met. The regulations state that modified competitive sales may be used to permit the current grazing user or adjoining landowner to meet the high bid at the public sale. This procedure allows for limited competitive sales to protect ongoing uses, to assure compatibility of the possible uses with adjacent land, and to avoid dislocating current users. The regulations state that a direct sale may be used when the land offered for sale is completely surrounded by land in one ownership with no public access, when the land is needed by state or local governments or nonprofit corporations, or when the land is necessary to protect current equities in the land or resolve inadvertent unauthorized use or occupancy of the land. 
In completing the steps necessary to purchase land, third-party organizations, such as The Nature Conservancy and The Trust for Public Land, often provide assistance to the federal government. For example, third parties may assist by purchasing desired land for eventual resale to the federal government or by negotiating an option with the seller to purchase land within a specified period of time, which provides additional time for the federal agency to secure necessary funding for the purchase or to comply with laws and regulations governing the acquisition. Federal Land Acquisition Funding The primary source for land acquisition funding for BLM, the Park Service, the Forest Service, and the Fish and Wildlife Service, has traditionally been the Land and Water Conservation Fund (LWCF), which was created to help preserve, develop, and assure access to outdoor recreation resources. To receive LWCF funding, the agencies independently identify and set priorities for land acquisitions and then submit their list of priority acquisitions in their annual budget request to Congress. LWCF funding is available for land acquisition purposes only if appropriated by Congress, unlike the funds in the FLTFA account, which are available without further appropriation. LWCF land acquisition appropriations have been declining in recent years. Specifically, funds for the four agencies declined from $453.4 million appropriated in fiscal year 2001 to $120.1 million appropriated in fiscal year 2006, as depicted in figure 1. BLM has traditionally received the lowest amount of LWCF land acquisition funding among the four agencies. For example, in fiscal year 2006, BLM’s share of total appropriated LWCF land acquisition funding was only $8.6 million, or about 7 percent of the total appropriation. BLM’s land sales eligible under FLTFA have created another funding source for the four agencies to acquire land. 
FLTFA provides that if all funds in the account are not used by the sunset date in 2010, they will become available for appropriation under section 3 of the Land and Water Conservation Fund Act. Other Land Sale Laws Other laws allow BLM to retain certain proceeds from federal land sales and share them among agencies for land acquisitions, as well as other purposes. The most notable of these is the Southern Nevada Public Land Management Act of 1998 (SNPLMA). SNPLMA’s stated purpose is to “provide for the orderly disposal of certain federal lands in Clark County, Nevada, and to provide for the acquisition of environmentally sensitive land in the State of Nevada.” Since enactment, SNPLMA has generated just under $3 billion in revenue. As of September 2007, a portion of this revenue has been spent, in part, to complete 41 land acquisition projects in Nevada for a total of $129.1 million. Unlike FLTFA, SNPLMA has no expiration date and its sales receipts are placed in an interest bearing account. However, it has fewer acres available for disposal than FLTFA. FLTFA Requirements on Use of Revenue and Other Key Provisions FLTFA places a number of requirements on the use of revenue generated under the act. Among these requirements, BLM must provide 4 percent of sale proceeds to the state in which revenue was raised for education and transportation purposes. Figure 2 illustrates these requirements using an example of $1 million in revenue. FLTFA also limits land acquisitions to land within and adjacent to federally designated areas, such as national parks, national forests, and national conservation areas. While most lands managed by the Fish and Wildlife Service, the Forest Service, and the Park Service are federally designated areas, many of the lands managed by BLM are not federally designated areas; therefore, acquisitions within undesignated lands would not qualify under FLTFA. 
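The revenue-sharing requirement above can be illustrated with the same $1 million example the report uses for figure 2. This is a hedged sketch, not the statute's full distribution rules: it assumes the 4 percent state share comes off gross proceeds, and applies BLM's 60 percent retention share (an MOU provision noted elsewhere in this report) to the remainder.

```python
# Hypothetical distribution of $1 million in FLTFA sale proceeds, echoing
# the figure 2 example. Assumptions (not spelled out in this passage): the
# 4 percent state share comes off gross proceeds, and BLM's 60 percent MOU
# share applies to what remains in the FLTFA account.
gross = 1_000_000

state_share = gross * 4 // 100        # to the state for education/transportation
account = gross - state_share         # deposited in the FLTFA account
blm_share = account * 60 // 100       # portion retained for BLM acquisitions
other_agencies = account - blm_share  # shared by the Forest Service, Park
                                      # Service, and Fish and Wildlife Service

print(f"state share:    ${state_share:,}")       # $40,000
print(f"BLM share:      ${blm_share:,}")         # $576,000
print(f"other agencies: ${other_agencies:,}")    # $384,000
```

Integer arithmetic is used so the shares sum exactly to the gross proceeds; the actual split among the three other agencies is not described in this passage.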
Furthermore, FLTFA requires that the Secretaries establish a procedure to identify and set priorities for acquiring inholdings. As part of this process, it called for the Secretaries to consider (1) the date the inholding was established, (2) the extent to which the acquisition would facilitate management efficiency, and (3) other criteria identified by the Secretaries. The act also requires a public notice be published in the Federal Register detailing the procedures for identifying inholdings and setting priorities for them and other information about the program. Memorandum of Understanding Implements FLTFA To improve FLTFA implementation, the four agencies signed a national MOU. Among other things, the MOU established a Land Transaction Facilitation Council, which consists of the heads of the four agencies and the U.S. Department of the Interior Assistant Secretary for Policy, Management, and Budget to oversee the implementation and coordination of activities undertaken pursuant to the MOU. The MOU also directed the agencies to establish state-level implementation plans that would establish roles and responsibilities, procedures for interagency coordination, and field-level processes for identifying land acquisition recommendations and setting priorities for these recommendations. Proposed Amendments to FLTFA The Administration has proposed revising and extending the act. Specifically, the U.S. Department of the Interior's fiscal year 2007 and 2008 budgets included proposals to: allow BLM to use updated land use plans to identify areas suitable for disposal; allow a portion of receipts to be used by BLM for restoration projects; require BLM to return 70 percent of net proceeds from eligible sales to the U.S. Treasury; and cap retention of receipts at $60 million per year. In addition, the U.S. Department of the Interior called for Congress to extend the FLTFA program to 2018.
BLM Has Raised Most FLTFA Revenue from Land Sales in Nevada Since FLTFA was enacted in 2000, BLM has raised $95.7 million in revenue, mostly by selling 16,659 acres. As of May 2007, about 92 percent of the revenue raised, or $88 million, has come from land sales in Nevada—1 of the 11 western states under FLTFA. Nevada accounts for most of the sales because of rapidly expanding population centers coupled with a high percentage of BLM land in the state and experience selling land under the SNPLMA program. Most BLM field offices have not generated revenue under FLTFA. BLM Has Raised $95.7 Million from FLTFA Land Sales, Primarily in Two Nevada Field Offices Between July 2000 and May 2007, BLM raised $95.7 million in revenue from selling 16,659 acres, according to data verified by BLM state offices. In addition, the BLM Division of Business Services reports exchange equalization payments totaling $3.4 million. Nevada has accounted for the great majority of the sales. As of May 2007, about 92 percent of the revenue raised, or $88 million, has come from land transactions in Nevada. More specifically, the Carson City and Las Vegas field offices generated a total of $86.2 million, or 90 percent of all revenue generated under FLTFA, mostly through a few competitive sales. For example, the Carson City Field Office raised $39.1 million through 3 sales and the Las Vegas Field Office raised $33.6 million through 7 sales. Table 1 shows the state-by-state totals of sales revenue generated, acres sold, and number of sales. See appendix II for a listing of completed sales BLM state offices have reported to us. Some of BLM's Nevada field offices, particularly Las Vegas and Carson City, have been in a unique position to raise the most funds under FLTFA to date because of rapidly expanding populations, development in those areas, and the availability of nearby BLM land. In addition, BLM Nevada staff had previous experience with SNPLMA, the land sales program in the Las Vegas area.
In fact, the Nevada office used procedures and staff from this program to initiate FLTFA’s sales and acquisition programs. According to Nevada state office officials, BLM’s annual work plan for lands and realty work specifically directed the Nevada office to continue to hold FLTFA and SNPLMA land sales as appropriate. Revenue from land sales and exchanges under FLTFA grew slowly in the first years of the program but picked up in fiscal years 2004 and 2005, with $16.6 million and $4.8 million, respectively. Revenue reached a peak in fiscal year 2006, when a total of $71.1 million was collected. BLM officials said the land sales market in Nevada has cooled since its peak in 2006. Figure 3 shows the FLTFA revenue through May 2007. The FLTFA account benefits from the proceeds of all types of transactions, including land exchanges and land sales made on a competitive, modified competitive, or direct basis. BLM sets the appraised fair market value as the sales price for direct sales and as the minimum bid price for competitive sales. Of the 265 completed sales reported by BLM state offices, 149 were competitive, 33 were modified competitive, and 83 were direct. In terms of FLTFA revenue, the great majority, about 96 percent, has been raised from competitive sales. For example, in December 2005, the Las Vegas Field Office sold a 40-acre parcel through a competitive auction for $7.3 million, or 152 percent of its appraised fair market value of $4.8 million. On a much smaller scale in a December 2006 competitive auction, the Burns District Office in Oregon sold 240 acres for $47,000, or 163 percent of its appraised fair market value of $28,800. In 2006, the Carson City Field Office offered two parcels of about 100 and 106 acres with appraised fair market values of $10 million and $6.4 million, respectively, in north Douglas County, Nevada, just south of the Carson City limits. The former BLM parcels are contiguous and across a major highway from three shopping centers. 
Through competitive auctions, BLM received final prices of $16.1 million and $8.4 million, or 161 and 131 percent, respectively, of appraised value. Figure 4 shows areas in these two parcels. According to a GAO analysis of data from BLM’s Division of Business Services and BLM state offices on land sales revenue collected in the FLTFA account, only 12 of 144 field offices have conducted competitive sales. An additional 28 field offices have generated FLTFA revenue through direct or modified competitive sales. The remaining 104 offices have not generated sales revenue for the FLTFA account. Table 2 shows FLTFA sales by the method used and the amount of revenue generated. Using the data provided by BLM state offices on completed FLTFA sales as of May 31, 2007, we determined that the actual sales prices of the parcels sold exceeded the appraised fair market value of those parcels by 52 percent. BLM Faces Several Challenges to Future Sales under FLTFA BLM state and field office officials most frequently cited the availability of knowledgeable realty staff to conduct the sales as a challenge to raising revenue from FLTFA sales. These staff may not be available because they are working on activities that BLM has identified as a higher priority, such as reviewing and approving energy rights-of-way. We identified two additional issues hampering land sales activity under FLTFA. First, while BLM has identified land for sale in its land use plans, it has not made the sale of this land a priority during the first 7 years of the program. Furthermore, BLM has not set goals for FLTFA sales. Goals are an effective management tool for measuring and achieving results. Some BLM state offices reported that they have planned FLTFA sales through 2010, but BLM has no overall implementation strategy to generate funds to purchase inholdings, as mandated by FLTFA. 
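The sale-price figures cited earlier in this section are simple ratios of final price to appraised fair market value, and can be reproduced directly. As an illustrative check only: the four parcels below are the examples named in the text, while the report's aggregate 52 percent premium covers all 265 completed sales, not just these four.

```python
# Illustrative check of the price-to-appraisal figures cited in this report.
sales = [
    # (appraised fair market value, final sale price), in dollars
    (4_800_000, 7_300_000),    # Las Vegas 40-acre parcel, Dec. 2005
    (28_800, 47_000),          # Burns District 240-acre parcel, Dec. 2006
    (10_000_000, 16_100_000),  # Carson City ~100-acre parcel, 2006
    (6_400_000, 8_400_000),    # Carson City ~106-acre parcel, 2006
]

def pct_of_appraised(appraised: int, price: int) -> float:
    """Sale price expressed as a percentage of appraised fair market value."""
    return price / appraised * 100

for appraised, price in sales:
    print(f"{pct_of_appraised(appraised, price):.0f}% of appraised value")

# Aggregate premium: how far total prices exceeded total appraised value.
total_appraised = sum(a for a, _ in sales)
total_price = sum(p for _, p in sales)
print(f"premium on these four sales: "
      f"{pct_of_appraised(total_appraised, total_price) - 100:.0f}%")
```

The per-parcel ratios match the 152, 163, 161, and 131 percent figures cited in the text; the aggregate premium for just these four example sales comes out near 50 percent, close to but not identical to the 52 percent the report computes across all completed sales.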
Since BLM has not laid out a clear roadmap for FLTFA and has not made land sales a priority, it is difficult to determine whether BLM took full advantage of the opportunities for generating revenue under the act. Second, BLM has revised some of its land use plans since 2000 and identified additional land for disposal. However, revenue from these potential sales is not eligible for the FLTFA account because the act only applies to land that was identified for disposal in a land use plan on or before the date of the act. BLM State and Field Officials Most Frequently Cited Availability of Knowledgeable Staff as a Challenge to Conducting FLTFA Sales According to BLM state and field officials, they face five challenges to raising FLTFA revenue through sales. First, the most frequently identified challenge is the availability of knowledgeable realty staff to conduct the sales. This challenge is followed, in order of frequency cited, by the time, cost, and complexity of the land sales process; external factors, such as public opposition to a sale; program and legal restrictions; and the land use planning process. Except for FLTFA-specific program and legal restrictions, the other challenges that BLM state and field offices cited are probably faced in many public land sales. The following provides examples of these challenges: The availability of knowledgeable realty staff to conduct the sales. BLM staff said realty staff must address higher priority work before land sales. For example, Colorado BLM staff said that processing rights-of-way for energy pipelines takes a huge amount of realty staff time, 100 percent in some field offices, and poses one of the top challenges to carrying out FLTFA sales in Colorado. In Idaho, staff also cited the lack of realty staffing, which is down 40 percent from 10 years ago. Adding to the staffing issue, the workload for energy-related uses in Idaho, such as approving rights-of-way for transmission lines, has doubled.
Other offices cited turnover in staff and the lack of staff with training and experience to conduct sales. Time, cost, and complexity of the sales process. Much preparation must be completed before a property can be sold. For example, several offices cited the cost and length of the process that ensures a sale complies with environmental laws and regulations. In addition, obtaining clearances from experts related to cultural and natural resources on a proposed sale can be time-consuming. For example, in the sale of 396 acres by the Las Cruces District Office, officials said that the sale of the property was delayed by the discovery of a significant cultural resource on the site. This was eventually resolved by BLM retaining the small portion of the original parcel containing the cultural resource. External factors. BLM officials cited factors such as public opposition to a sale, market conditions, or lack of political support as challenges. For example, Colorado BLM officials said that they have faced strong local opposition to sales, and the El Centro Field Office staff in California cited the lack of demand for the land from buyers as a challenge. Some offices have experienced diminishing support for sales from local governments over the time a sale is prepared. Program and legal restrictions. The Arizona State Office staff and the Elko Field Office staff cited the sunset date of FLTFA, less than 3 years away, as a challenge because the sunset date may not allow enough time to complete many more sales. Other offices said the MOU provision requiring a portion of the land sale proceeds to be used by the three other agencies reduces BLM's incentive to do land sales because BLM keeps only 60 percent of the revenue. Another challenge to the disposal of land under FLTFA, especially in Nevada, has been the passage of land bills for Lincoln and White Pine counties.
The Lincoln County Land Act of 2000, as amended, directs BLM to deposit most of the proceeds from the disposal of not more than 103,328 acres into an account established by the act. The White Pine County Conservation, Recreation, and Development Act of 2006 directs BLM to deposit most of the proceeds from the disposal of not more than 45,000 acres into a similar account. In total, BLM staff estimate that, once mandated land use plan amendments are completed, the two acts will result in the removal of about 148,000 acres from FLTFA eligibility. Land use planning. Some offices cited problems with the land use plans. For example, the Idaho Falls District Office staff said that specific land for sale is hard to identify in old land use plans. Nevada’s Elko Field Office staff said that some lands that could be offered for sale were not available because they were not designated in the land use plan at the time of FLTFA’s enactment. Most BLM States Have Planned FLTFA Sales through 2010, but BLM Lacks National Goals for the Program BLM state offices reported planning FLTFA sales through 2010, but BLM has not established national goals for FLTFA or emphasized sales. BLM Plans FLTFA Sales through 2010 In response to our request to the 10 BLM state offices participating in FLTFA, 8 reported planning 96 FLTFA sales totaling 25,406 acres through 2010. The other two state offices reported no planned sales. Of the 96 planned sales, 34 are planned as competitive, 6 as modified competitive, and 52 as direct sales; the sales methods for 4 sales are unknown. The BLM state offices did not report a fair market value for some of these planned sales. Table 3 provides information on planned FLTFA sales and appendix III provides a complete listing of the planned sales that BLM state offices reported. Figure 5 shows an example of a planned sale—the “North Fork” parcel to be sold competitively in April 2008 by the field office in Las Cruces, New Mexico. 
This 167-acre parcel is on the eastern edge of Las Cruces across the street from residential subdivisions. BLM also plans to sell a similar adjacent 180-acre parcel at the same time. The field office reported that the purpose of these sales is to dispose of land that will serve important public objectives, including, but not limited to, expansion of communities and economic development, which cannot be achieved prudently or feasibly on land other than public land. Although BLM offices plan sales, there is no assurance that these sales will occur. For fiscal years 2004 and 2005, BLM headquarters compiled a list of 100 planned sales under FLTFA from information the state offices provided. Because BLM headquarters did not know the status of these 100 planned sales, we followed up with the state offices and determined that 54 were actually completed. According to BLM's state office leads for these sales, 46 properties did not sell for several reasons, such as environmental concerns; external factors; the availability of staff; and the time, cost, and complexity of the sales. For example, Utah State Office officials said a 1,450-acre parcel near St. George did not sell because threatened and endangered species and cultural resource issues were identified. In Wyoming, state office staff said only one of four proposed sales occurred because of inadequate staffing and the competing priority to address oil and gas-related realty issues. BLM Has Not Established Goals or an Implementation Strategy for FLTFA Sales BLM has established annual goals for the disposal of land through sales or other means. For example, BLM's fiscal year 2008 budget justification contained a performance target to dispose of 11,500 acres and 30,000 acres of land in fiscal years 2007 and 2008, respectively. However, BLM has not established similar goals for FLTFA sales.
For example, BLM’s fiscal year 2007 annual work plan for the lands and realty function—which guides the activities to be completed in a given year—does not contain specific goals for FLTFA. Rather, it states that lands and realty staff should continue to hold land sales under FLTFA, especially in Nevada. BLM did provide an estimate in its fiscal year 2008 budget justification for FLTFA revenue—$12 million in fiscal year 2007 and $50 million in 2008. However, BLM fell short of its estimate for fiscal year 2007; it reported generating only $0.7 million from sales and exchanges. Moreover, when we asked BLM headquarters staff for the basis of the fiscal year 2007 and 2008 revenue estimates, they said the estimates were based on professional judgment and that they had no supporting information. Our interviews with state and field office staff confirmed that there are few goals for conducting FLTFA sales. According to 27 of the 28 state and field office officials we spoke with, BLM headquarters had not provided any goals; one state office said headquarters had emphasized getting their land disposal program up and running in their office. According to 18 of these 28 officials, their state and field office management had set no targets or goals for FLTFA land sales. Of the 10 that did mention such goals, 8 described the goal as a plan to sell specific parcels of land. According to headquarters officials, BLM has tried to encourage FLTFA sales but is not pressuring field offices to conduct them, and there is no ongoing headquarters effort to oversee and manage sales because states are responsible for conducting their own sales programs. The realty managers explained that headquarters does not approve land sales but is aware of them through reviews of Federal Register notices of the sales. 
According to a headquarters official, BLM did not establish FLTFA goals because BLM lacked realty staff to conduct land sales and faced other impediments to sales generally, such as the lack of access, mineral leases, mining claims, threatened or endangered species habitat, floodplains, wetlands, cultural resources, hazardous materials, and title problems. The establishment of goals is an effective management tool for measuring and achieving results. As we have reported in the past on management under the Government Performance and Results Act of 1993, leading public sector organizations pursuing results-oriented management commonly took the following key steps: defined clear missions and desired outcomes, measured performance to gauge progress, and used performance information as a basis for decision making. BLM has not fully implemented these steps in managing the FLTFA program to sell land designated for disposal in its land use plans. To measure BLM’s success in generating revenue and disposing of land under FLTFA, actual performance would need to be compared with national sales goals for FLTFA. Without national goals for making these sales a priority, it is difficult for BLM to enhance the efficiency and effectiveness of federal land management as called for in FLTFA through the acquisition of inholdings and consolidation of public lands.

FLTFA’s Restriction on Land Available for Sale Reduces Potential Revenue

FLTFA requires BLM to deposit into the special FLTFA account the proceeds from the sale or exchange of public land identified for disposal under approved land use plans in effect on the date of its enactment. Other proceeds from land sales and exchanges are typically deposited into the U.S. Treasury’s general account. Many of BLM’s land use plans have been revised or have been proposed for revision since FLTFA’s enactment, and additional lands have been identified for disposal.
For example, BLM reported the Boise District Office in Idaho is currently planning a sale of 35 parcels. Five of the 35 parcels, with a total estimated value of $10.7 million, are not FLTFA eligible. Because of the land use plan restriction, revenue from these five parcels, when sold, would not benefit the FLTFA account. While this restriction reduces the potential revenue that could be dedicated to purchasing inholdings and adjacent land containing exceptional resources under FLTFA, it does benefit the U.S. Treasury’s general account. According to 17 of the 28 BLM state and field realty staff we interviewed, their office has land available for disposal that is not designated in an FLTFA-eligible land use plan. For example, New Mexico state office officials said that a number of land use plan amendments completed or under development since FLTFA’s enactment have identified land for disposal. They noted that the Las Cruces area land use plan is being amended to adjust to the new direction of the city’s growth that has occurred since the last plan was prepared in 1993. According to BLM New Mexico staff, different or additional lands are expected to be designated for disposal in the amended plan. Figure 6 shows land on the west side of Las Cruces, New Mexico, that is expected to be designated for disposal in the forthcoming revision to accommodate the community’s growth. Field office officials said that input from local governments and other interests has focused land sales in Las Cruces on the west side of the city in order to accommodate growth there and create a buffer for the Organ Mountains on the east side.

Agencies Have Purchased Few Parcels with FLTFA Revenue

Since the enactment of FLTFA 7 years ago, BLM reports that the four land management agencies have spent $13.3 million of the $95.7 million in FLTFA revenue—$10.1 million to acquire nine parcels of land and $3.2 million in administrative expenses for conducting FLTFA sales.
Agencies spent the $10.1 million between August 2007 and January 2008 on the first land acquisitions completed under FLTFA using the secretarial discretion provided in the MOU. As of May 31, 2007, the agencies reported submitting eight acquisition nominations to state-level interagency teams for consideration. The New Mexico interagency team reported submitting six additional nominations as of July 1, 2007. None of these 14 nominations—valued at $71.9 million—has resulted in a completed acquisition. The state-level process has not yet resulted in acquisitions because of the time taken to complete interagency agreements and limited FLTFA funds available for acquisition outside of Nevada. Although Nevada has proposed five acquisitions, none have been completed. As for the remaining $3.2 million in expenditures, BLM reports spending these funds on administrative activities involved in preparing land for sale under FLTFA, mostly between 2004 and 2007. BLM offices in Nevada spent $2.6 million of this total.

Under a Secretarial Initiative, BLM Reports Agencies Spent $10.1 Million on the First Land Acquisitions 7 Years after FLTFA Was Enacted

No land acquisitions had occurred during the first 7 years of FLTFA. Because the state-level implementation process had not resulted in any acquisitions, BLM decided to jump-start the acquisition program and conduct purchases under secretarial discretion, as provided for in the MOU. In the spring of 2006, BLM headquarters officials solicited nominations from the FLTFA leads in each of the other three agencies. Most of the nominations agency officials provided were previously submitted for funding under LWCF. This secretarial initiative was approved by the Secretaries of Agriculture and of the Interior in May 2007. The 2007 secretarial initiative provided $18 million in funding for 13 land acquisition projects, including 19 parcels of land located in seven states—Arizona, California, Colorado, Idaho, New Mexico, Oregon, and Wyoming.
Specifically, the initiative consisted of 9,049 acres and included projects for each agency: six BLM projects for $10.15 million, two Fish and Wildlife Service projects for $1.75 million, two Forest Service projects for $3.5 million, and three Park Service projects for $2.6 million. Only 1 of the 19 parcels is adjacent land; the rest are inholdings. Since the initiative was approved, BLM reported a number of changes that the agencies made to the original list of land acquisition projects. For example, the total number of acres increased to 9,987 in a total of eight states. As of January 23, 2008, BLM reported that the agencies had wholly or partially completed 8 of the 13 approved acquisition projects. Specifically, the agencies spent $10.1 million between August 2007 and January 2008 to complete the acquisition of the first nine parcels under the secretarial initiative. The acquisitions include 3,381 acres in seven states—Arizona, California, Idaho, Montana, New Mexico, Oregon, and Wyoming. See table 4 for a complete description of the current status of these projects. Figure 7 shows part of the acquisition site within the La Cienega Area of Critical Environmental Concern. According to BLM, it selected this site for acquisition because (1) it is an archeologically rich area preserving ancient rock art and (2) the riparian cottonwood and willow forest that lines the Santa Fe River and its La Cienega Creek tributary provides critical habitat for threatened and endangered wildlife, such as the bald eagle and southwestern willow flycatcher. The final purchase price was $2.2 million. To fund the $18 million in acquisitions under the secretarial initiative, the BLM FLTFA program lead told us that the Secretaries approved the use of $14.5 million from the 20 percent of revenue available for acquisitions outside the state in which it was raised and $3.5 million of the revenue not used for administrative activities supporting the land sales program.
Agencies Have Submitted Nominations under the State-Level Process, But None Have Resulted in a Land Acquisition

In addition to the acquisitions in the secretarial initiative, the agencies have submitted 14 acquisition nominations valued at $71.1 million to state-level interagency teams for consideration, but not one has resulted in a completed acquisition. Of the $14.1 million in land acquisitions awaiting a secretarial decision, $13.7 million, or 97 percent, is for inholdings and $458,000—or 3 percent—is for adjacent land. Table 5 shows the data we gathered from BLM state offices on the status of the nominations that have been submitted. The Nevada interagency team has submitted a total of five nominations for secretarial approval under FLTFA. It nominated two Forest Service acquisitions—a total of 705 acres valued at $4.76 million—in 2004. These were the first nominations submitted for secretarial approval under FLTFA. The Forest Service was unable to complete the purchases because of negotiating differences with the sellers. Of the remaining three Nevada nominations, one valued at $16 million was approved in November 2007, one valued at $10.6 million awaits approval, and one valued at $29 million has been withdrawn by the Nevada interagency team. The recently approved Nevada nomination is for the Pine Creek State Park, an 80-acre inholding owned by the state of Nevada and valued at $16 million (see fig. 8). BLM currently manages this inholding, which is located in BLM’s Red Rock Canyon National Conservation Area. According to the BLM nomination package, BLM would like to acquire this property to meet the increasing recreational and educational needs of the park. BLM explains that the property has recreational value; cultural resources; riparian habitat; and habitat for the desert tortoise, a federally listed threatened species.
The nomination that was withdrawn by the Nevada interagency team is the 320-acre Winter’s Ranch property, which is adjacent to the Humboldt-Toiyabe National Forest and several other properties acquired by BLM under SNPLMA. BLM’s FLTFA program lead said the nomination of the parcel was withdrawn, in part because it is not adjacent to a federally designated area managed by BLM. In its nomination to acquire Winter’s Ranch, the Carson City Field Office said this parcel provides critical habitat for shorebirds, waterfowl, and other water-dependent species; offers unique recreational opportunities for the public; and provides an undisturbed view for area commuters and tourists. According to a Carson City Field Office official, three creeks run through this property and irrigate the land, making it possible to sustain habitat for wildlife such as raptors and migratory birds. The official said that about $20 million of the estimated $29 million value of the Winter’s Ranch property is for the water rights to the property, and that if BLM did not obtain the water rights, other parties could acquire them and divert the water resources to other areas, such as developing communities near Reno. The Winter’s Ranch parcel is shown in figure 9. Over one-half of the state-level interagency teams—Colorado, Idaho, Montana, New Mexico, Oregon, and Utah—did not review any land acquisition proposals between July 2000, when FLTFA was enacted, and May 2007. Furthermore, the Fish and Wildlife Service and the Park Service have yet to submit a nomination for review under the state-level interagency process. Fish and Wildlife Service and Park Service officials based in California said they lacked the FLTFA funding necessary to complete an acquisition and would have to wait until sufficient revenue was available to allow them to nominate an acquisition.
In examining the headquarters review and approval process, we found that the Land Transaction Facilitation Council established in the national MOU has never met. The BLM FLTFA program lead explained that, as a practical matter, it has not been necessary for this council to meet. Rather, in practice, acquisition nominations are forwarded to the BLM lead and then routed to his counterparts at the other three agencies for review. Additional reviews are then conducted at the agency level and, ultimately, at the secretarial level.

State-Level Process Has Not Yet Resulted in Acquisitions Because of the Time Taken to Complete Interagency Agreements and Limited Funds outside of Nevada

Although the agencies envisioned it as the primary process for nominating land for acquisition under FLTFA, the state-level process established in the national MOU and state-level interagency agreements has yet to result in a completed land acquisition for two primary reasons. First, it has taken over 6 years for the four agencies to complete all interagency agreements—3 years for the agencies to complete a national MOU and an additional 3 years for the agencies to complete all state-level implementation agreements. Most agencies completed Federal Register notifications of their procedures to identify and set priorities for inholdings, as called for in the act, soon after state-level agreements were signed. Nevada was the first state to complete the implementation agreement in June 2004, and it published a Federal Register notice in August 2004. Utah was the last state to complete these actions, reaching an agreement in November 2006 and publishing its Federal Register notice in March 2007. Table 6 summarizes the completion of implementation agreements and the Federal Register publication for each state.
BLM officials told us that completion of these agreements was delayed for a number of reasons, including attention to other priorities, difficulties coordinating the effort with four agencies, and lack of urgency due to limited revenue available for acquisitions. Second, funds for acquisitions have been limited outside of Nevada. Because FLTFA requires that at least 80 percent of funds raised must be spent in the state in which they were raised and because 92 percent of funds have been raised in Nevada, the majority of funds must be spent on acquisitions in Nevada. However, as discussed earlier, no acquisitions have yet been completed in Nevada. Additional factors, such as the fact that about 92 percent of Nevada is already federally owned and that SNPLMA has provided additional resources for land acquisitions in Nevada, may have also contributed to the lack of a completed acquisition under FLTFA in Nevada. Outside of Nevada, agencies have had little money to acquire land. Several agency officials, such as BLM state office officials in Utah and Oregon, told us that additional revenue needs to be generated under FLTFA for land acquisitions to occur. Moreover, Park Service and Forest Service officials in California told us they are waiting for adequate funding before they begin identifying and nominating acquisitions. The Forest Service official explained that the agency could not make significant purchases with its share of the FLTFA funds in California because of the high cost of real estate.

BLM Reports Spending $3.2 Million on FLTFA Administrative Activities

Between the time FLTFA was enacted and July 20, 2007, BLM reports spending $3.2 million on FLTFA administrative expenses to conduct land sales under the act. The three other agencies do not have land sale expenses under the program. The BLM Nevada offices spent 81 percent of these expenditures, or $2.6 million.
BLM offices in Arizona, California, New Mexico, and Oregon each spent over $100,000, and the remaining five states spent a combined total of less than $50,000. States with the most active sales programs generally spent the most FLTFA revenue. For example, Nevada field offices conducted 106 of the 265 total sales under FLTFA, or 40 percent of the sales. Table 7 summarizes administrative expenditures by state as reported by BLM’s Division of Business Services. BLM spent little FLTFA revenue on the administrative costs of land sales during the first 3 years of the program. According to the BLM FLTFA program lead, there was little incentive for BLM to sell its land because the MOU was not in place. Spending has generally increased since then, with a spike in fiscal year 2006. Figure 10 shows FLTFA expenditures from its enactment to July 2007. BLM’s Division of Business Services tracks FLTFA expenditures through eight expenditure types. As table 8 shows, BLM offices spent 72 percent of FLTFA expenditures—about $2.3 million—on personnel compensation and benefits (e.g., staff to conduct sales).

Agencies Face Challenges in Completing Additional Acquisitions

BLM managers and our own analysis identified several challenges to completing additional acquisitions before FLTFA expires in 2010. BLM officials most commonly cited the time, cost, and complexity of the land acquisition process as a challenge to conducting acquisitions under FLTFA. We also found that the act’s restriction on the use of funds outside of the state in which they were raised continues to limit acquisitions. Specifically, little revenue is available for acquisitions outside of Nevada. Furthermore, progress in acquiring priority land has been hampered by the agencies’ weak performance in identifying inholdings and setting priorities for acquiring them, as required by the act. Finally, the agencies have yet to develop effective procedures to fully comply with the act and national MOU.
BLM Officials Most Commonly Cited the Time, Cost, and Complexity of the Land Acquisition Process as a Challenge, among Several, to Completing Acquisitions

BLM state and field officials from the 10 BLM state offices and 18 BLM field offices we interviewed most commonly cited the time, cost, and complexity of the land acquisition process as a challenge they face in completing land acquisitions. The other most commonly cited challenges were, in the order of frequency cited, (1) identifying a willing seller, (2) the availability of knowledgeable staff to conduct acquisitions, (3) the lack of funding to purchase land, (4) restrictions imposed by laws and regulations, and (5) public opposition to land acquisitions. Some of the challenges BLM state and field officials cited are likely typical of many federal land acquisitions. Because they have had little experience with FLTFA acquisitions, officials from the other three agencies had few comments about challenges. The following provides examples of each of these challenges:

Time, cost, and complexity of the land acquisition process. To complete an acquisition under FLTFA, four agencies must work together to identify, nominate, and rank proposed acquisitions, which must then be approved by the two Secretaries. Officials at two field offices estimated the acquisition process takes about 2-1/2 to 3 years. BLM officials from the Wyoming State Office and the Las Cruces Field Office said that, with this length of time, BLM must either identify a very committed seller willing to wait to complete a transaction or obtain the assistance of a third party in completing an acquisition. A third party could help either by purchasing the land first, holding it, and then selling it to the government at a later date, or by negotiating with the seller an option to buy the land within a specified period.
In terms of cost, some offices noted that they did not have the funding required to complete all of the work involved to prepare land acquisitions. In terms of complexity, a Utah State Office official said BLM has more control over the process for submitting land acquisitions under LWCF than FLTFA because FLTFA requires four agencies in two departments to coordinate their efforts.

Identifying a willing seller. Identifying a willing seller can be problematic because, among other things, the seller might have higher expectations of the property’s value. For example, an Ely Field Office official explained that, because of currently high real estate values, sellers believe they can obtain higher prices from developers than from the federal government. Further, an Idaho State Office official said that it is difficult to find a seller willing to accept the appraised price and wait for the government to complete the purchase. Even when land acquisition nominations are approved, they may not result in a purchase. For example, in 2004, under FLTFA, two approved acquisitions for inholdings within a national forest in Nevada were terminated. In one case, property values rose sharply during the nomination process and, in an effort to retain some of their land, the seller decided to reduce the acres for sale but maintain the price expectation. Furthermore, the landowner decided not to grant the Forest Service access through the parcel they were retaining, thus eliminating the opportunity to secure access to an inaccessible area of the national forest. In the other case, during the course of the secretarial approval process, the landowner sold portions of the land included in the original transaction to another party, reducing the land available for the Forest Service to purchase. According to Forest Service officials, in both cases the purchase of the remaining parcels would not fulfill the original purpose of the acquisitions due to reductions in resource benefits.
Therefore, the Forest Service terminated both projects. Similarly, the SNPLMA program in Nevada has had many terminated land acquisitions. Specifically, of the 116 land acquisition projects approved by the Secretary of the Interior from enactment in October 1998 through September 2007, 41 have been completed, 55 have been terminated, and 20 are pending. This represents a 47 percent termination rate. BLM did not report why these acquisitions were terminated.

Availability of knowledgeable staff to conduct acquisitions. As is the case with selling federal land, BLM officials reported that they lack knowledgeable realty staff to conduct land acquisitions, as well as other BLM or department staff to conduct appraisals, surveys, and resource studies. Staff are occupied working on higher priority activities, particularly in the energy area.

Lack of funding to purchase land. BLM officials in some states said they lack adequate funds to acquire land under FLTFA. For example, according to a field office official in Burns, Oregon, just one acquisition in a nearby conservation area would nearly drain that state’s FLTFA account.

Restrictions imposed by laws and regulations. BLM officials said that legal and other restrictions pose a challenge to acquiring land. BLM Arizona and Grand Junction, Colorado, officials said that some federally designated areas in their jurisdictions were established after the date of FLTFA’s enactment, making the land within them ineligible for acquisition under the act. BLM New Mexico officials said that FLTFA’s requirement that land be inholdings or adjacent land is too limiting and argued that the law generally should allow for the acquisition of land that has high resource values. In terms of regulations, BLM Carson City Field Office officials told us that the requirements they must follow regarding the processing of title, survey, and hazardous materials issues pose a challenge to conducting acquisitions.

Public opposition to land acquisitions.
According to BLM officials from the Elko and Ely Field Offices in Nevada, the public does not support the federal government’s acquisition of land in their areas, arguing that the government already owns a high percentage of land and that such acquisitions result in the removal of land from the local tax base.

Compliance with Specific Provisions in FLTFA Continues to Pose Challenges to Future Acquisitions

FLTFA’s restriction on the use of funds outside of the state in which they were raised continues to limit acquisitions. Specifically, as mentioned earlier, little revenue is available for acquisitions outside of Nevada. Furthermore, the Secretaries of Agriculture and of the Interior have given only minimal attention to developing a procedure specific to FLTFA for identifying inholdings and adjacent land and setting priorities for acquiring them, as required by the act. According to BLM’s Assistant Director for Minerals, Realty, and Resource Protection, the four agencies met this requirement through their 2003 MOU. The official explained that the MOU establishes “a program for identification of eligible lands or interests in lands, and a process for prioritizing such lands or interests for acquisition.” However, we found that the MOU only restates the basic statutory language for this requirement and states that the Secretaries are to establish a mechanism for identifying and setting priorities for acquiring inholdings. We found no such mechanism or procedure at the national level. While the state-level agreements do establish a process for reviewing proposed acquisitions, six minimally elaborate and three do not elaborate on the basic FLTFA criteria: the date the inholding was established, the extent to which the acquisition will facilitate management efficiency, and other criteria the Secretaries consider appropriate. One exception to this is the Nevada state-level agreement.
Because the agencies involved in SNPLMA had already developed an interagency agreement to implement that act, they modified that agreement to include FLTFA. The Nevada agreement is generally more detailed than other state agreements and includes more criteria for considering land acquisitions because of the differences between the SNPLMA and FLTFA land acquisition authorities. Also, unlike the other state agreements, the Nevada agreement uses a quantitative system to rank acquisitions. Table 9 is a summary of criteria each state-level agreement includes beyond the FLTFA criteria for acquisition nominations. When the agencies decided in 2006 to use the Secretaries’ discretionary authority to make the initial FLTFA acquisitions, officials from all four agencies told us they generally relied on acquisition proposals previously identified for LWCF funding to quickly identify the parcels to acquire. The agencies have systems to identify and set priorities for land acquisitions under LWCF. These existing systems could serve as a basis for systematically identifying and ranking FLTFA-eligible land for future acquisitions.

The Agencies Have Yet to Establish Effective Procedures to Fully Comply with FLTFA and MOU Provisions

With respect to FLTFA, the agencies—and primarily BLM, as the manager of the FLTFA account—have not established a procedure to track the act’s requirement that at least 80 percent of funds allocated toward the purchase of land within each state must be used to purchase inholdings and that up to 20 percent may be used to purchase adjacent land. The BLM FLTFA program lead said BLM considers this requirement when making land acquisition decisions but has not established a system to track it. The program lead noted that the requirement to use 80 percent for inholdings is hard to track, as the act is written, because the acquisition proposals are submitted in a piecemeal fashion.
With respect to the national MOU, BLM has not established a procedure to track agreed-upon fund allocations—60 percent for BLM, 20 percent for the Forest Service, and 10 percent each for the Fish and Wildlife Service and the Park Service. The BLM FLTFA program lead told us the MOU allocations should be treated as a target or a goal on a national basis and do not apply within a state. However, officials from the BLM Division of Business Services and BLM’s Budget Office told us there is no mechanism to track these allocations, and they were unable to tell us whether the allocations should be followed at the state or national level. Knowing whether the MOU fund allocations are set at the state or national level is important because allocations that apply nationally provide more flexibility than allocations at the state level. While BLM did not track the allocations, most state-level interagency agreements provide guidance on consideration of nominations that exceed the established allocations, and some BLM state office officials we spoke with were mindful of these allocation targets. For example, in California, the interagency team had agreed to “lend” BLM the funds from their allocations for a proposed BLM acquisition because they themselves could not effectively use the small portions of funding allocated to them. In contrast, in Oregon, BLM officials said they had not considered such an arrangement. The BLM FLTFA program lead said the funding decisions made by the Secretaries will be tracked and further information will be provided to the state-level interagency teams to clear up any misunderstanding of the requirement.

Conclusions

Congress anticipated that FLTFA would increase the efficiency and effectiveness of federal land management by allowing the four agencies to use certain land sales revenue without further appropriation to acquire priority land. Seven years later, BLM has not taken full advantage of the opportunity FLTFA offered.
BLM has raised most of the funds for the FLTFA account with land sales in just one state, and it and the other land management agencies have made limited progress in acquiring inholdings and adjacent land with exceptional resources. Because there are less than 3 years remaining until FLTFA expires and a significant amount of time is needed to complete both sales and acquisitions, relatively little time remains to improve the implementation of FLTFA. We recognize that a number of challenges have prevented BLM from completing many sales in most states, which limits the number of possible acquisitions. Many of the challenges that BLM cited are likely faced in many public land sales, as FLTFA did not change the land sales process. However, we believe that BLM’s failure to set goals for FLTFA sales and develop a sales implementation strategy limits the agency’s ability to raise revenue for acquisitions. Without goals and a strategy to achieve them, BLM field offices do not have direction for FLTFA sales. Moreover, the lack of goals makes it difficult to determine the extent of BLM’s progress in disposing of unneeded lands to raise funds for acquisitions. As with sales, progress in acquiring priority land has been hampered by weak agency performance in developing an effective mechanism to identify potential land acquisitions and set priorities for inholdings and adjacent land with exceptional resources, which FLTFA requires. Without such a mechanism, it is difficult to assess whether the agencies are acquiring the most significant inholdings and, thus, enabling them to more effectively and efficiently manage federal lands. Although the agencies do have systems to identify and set priorities for land acquisitions under LWCF that could potentially be adapted for the FLTFA acquisitions as well, they have not done so. 
Moreover, because the agencies have not tracked the amounts spent on inholdings and agency allocations, they cannot ensure compliance with the act or full implementation of the MOU. As Congress considers the Administration’s proposal to amend and reauthorize FLTFA, it may wish to reconsider the act’s requirements that eligible lands are only those designated in the land use plans at the time FLTFA was enacted and that most FLTFA revenue raised must be spent in the state in which it was raised. Adjusting the eligibility of land use plans, as the Administration has proposed, could provide additional resources for land acquisitions under FLTFA. In addition, providing the agencies with more flexibility over the use of funds may allow them to acquire the most desirable land nationwide.

Matters for Congressional Consideration

If Congress decides to reauthorize FLTFA in 2010, it may wish to consider revising the following provisions to better achieve the goals of the act:

The limitation of eligible land sales to those lands identified in land use plans in effect as of July 25, 2000. This provision excludes more recently identified land available for disposal, thereby reducing opportunities for raising additional revenue for land acquisition.

The requirement that agencies spend the majority of funds raised from eligible sales for acquisitions in the same state. This provision makes it difficult for agencies to acquire more desirable land in states that have generated little revenue.

Recommendations for Executive Action

We are making five recommendations. To improve the implementation of the FLTFA mandate to raise funds to purchase inholdings, we recommend that the Secretary of the Interior direct the Director of BLM to develop goals for land sales, and develop a strategy for implementing these goals during the last 3 years of the program.
To enhance the departments’ compliance with the act, we recommend that the Secretaries of Agriculture and of the Interior improve the procedure in place to identify and set priorities for acquiring inholdings. To enhance the departments’ compliance with the act, we recommend that the Secretary of the Interior direct the Director of BLM to establish a procedure to track the percentage of revenue spent on inholdings and on adjacent land. To fully implement the National Memorandum of Understanding, we recommend that the Secretaries of Agriculture and of the Interior establish a procedure to track the fund allocations for land acquisitions by agency as provided in the MOU.

Agency Comments

The Department of the Interior provided written comments on a draft of this report. The department generally concurred with our report’s findings and recommendations, stating that it will implement all of the recommendations. These comments are presented in appendix IV of this report. In addition, Interior and the Department of Agriculture provided technical comments on the draft report, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of the Interior; the Secretary of Agriculture; the Directors of BLM, the Park Service, and the Fish and Wildlife Service; the Chief of the Forest Service; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology

With the Federal Land Transaction Facilitation Act of 2000 (FLTFA) set to expire in July 2010, we were asked to (1) determine the extent to which the Bureau of Land Management (BLM) has generated revenue for the FLTFA program, (2) identify challenges BLM faces in conducting future sales, (3) determine the extent to which the agencies have spent funds under FLTFA, and (4) identify challenges the agencies face in conducting future acquisitions. We also assessed the reliability of data BLM provided on revenue generated and on expenditures to date under FLTFA. For all four objectives, we reviewed FLTFA, other applicable laws, regulations, and agency guidance. We interviewed the FLTFA program leads at the headquarters offices for BLM, the Fish and Wildlife Service, and the Park Service within the U.S. Department of the Interior, and the Forest Service within the U.S. Department of Agriculture on program status, goals, and management oversight for the program. To understand BLM’s interpretation of key provisions of the act, we interviewed officials with Interior’s Office of the Assistant Secretary for Land and Minerals Management and Office of the Solicitor and, in some cases, requested the department’s views on these provisions in writing. To determine the extent to which BLM has generated and expended FLTFA program revenue, we obtained and analyzed data from BLM’s Division of Business Services on program revenue and visited Division of Business Services accounting officials in Lakewood, Colorado, to discuss the management of the FLTFA account. Using information provided by the Division of Business Services and information we obtained from the Federal Register, we prepared summary information on completed sales by state and asked the 10 BLM state office officials responsible for the FLTFA program in their state to verify and update that information.
As part of the request to state offices, we obtained data on planned FLTFA land sales and completed and planned acquisitions through 2010. We subjected the data provided by the field offices to electronic and logic testing and followed up with the field contacts regarding questions. With regard to acquisitions, we reviewed available documentation for land acquisition proposals considered by the 10 FLTFA interagency teams at the state level, agency headquarters, and the Secretaries of Agriculture and of the Interior. During our visits to selected BLM state offices (California, Nevada, New Mexico, and Oregon) and field offices (Carson City, Nevada, and Las Cruces, New Mexico), we interviewed officials and visited planned land acquisition sites to learn about the land acquisition process. During these visits we also interviewed selected officials with the Fish and Wildlife Service, the Forest Service, and the Park Service to learn about their experience in drafting state-level interagency agreements and implementing land acquisitions under FLTFA. To assess the reliability of data provided by the Division of Business Services on revenue and expenditures, we interviewed staff responsible for compiling and reporting the data at the Division of Business Services and at the state office and field locations visited. We examined reports of this data from BLM’s financial systems and related guidance and sought documentation on selected entries into the system. To determine whether BLM has sufficient internal controls over FLTFA receipts and expenditures, we interviewed officials at the bureau’s Division of Business Services and obtained, reviewed, and assessed the system of internal controls for the U.S. Treasury account established under FLTFA, including management’s written policies and procedures, as well as control activities over collections, expenditures, and the records for these transactions.
We also reviewed documentation for a nonprobability sample of seven nonlabor FLTFA expenditures totaling $54,967 that were charged by the Las Cruces Field Office to ensure proper documentation. As of July 20, 2007, BLM offices had made a total of 15,706 expenditure transactions—858 nonlabor and 14,848 labor—nationwide. The seven we chose included expenditures for appraisals and cultural evaluations on properties being prepared for sale under FLTFA. We chose these transactions because they were the largest ones and included a single vendor. We also chose one expenditure made on a charge card because it was slightly less than a reporting limit. We checked to ensure that documentation for these expenditures included (1) an agreement or contract between BLM and the entity to have specific work completed, (2) an invoice detailing work performed, and (3) evidence of BLM supervisory approval to pay for such services. After reviewing the internal control policies and procedures, testing and verifying the revenue data, and obtaining documentation of the selected expenditures, we determined that the revenue and expenditure data were sufficiently reliable for our report. To identify challenges to conducting land sales and acquisitions, we reviewed the FLTFA national memorandum of understanding, state-level interagency agreements, and documentation of headquarters and state-level interagency team activities to learn about the policies and procedures established for the implementation of FLTFA.
We conducted semistructured interviews using a web-based protocol with (1) the 10 BLM state officials responsible for the FLTFA program in their state—Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon/Washington, Utah, and Wyoming; (2) officials at the seven BLM field offices that have raised 97 percent of the FLTFA revenue (as shown in table 10); and (3) a nongeneralizable sample of 11 of the 137 remaining BLM field offices that had not conducted a competitive sale under FLTFA as of May 31, 2007 (as shown in table 11). From the field offices with no competitive sales, we chose at least one office from each of the 10 state offices under FLTFA, considering the proximity of lands managed by field offices to urban areas. For California, we selected two additional field offices—Palm Springs and Eagle Lake. We chose the Palm Springs Field Office because it planned a major sale during our review and we chose the Eagle Lake Field Office because, although it is located in California, it manages some land in Nevada and has had no competitive sales. Because all of the Nevada field offices have had competitive sales and four Nevada offices were among the high revenue offices selected, we decided to select the Eagle Lake office. To analyze the narrative responses to some of the semistructured interview questions, we used the web-based system to perform content analyses of select open-ended responses. To conduct the content analyses to develop statistics on agreement among the answers, two reviewers per question collaborated on developing content categories based on survey responses and independently assessed and coded each survey response into those categories. Intercoder reliability (agreement) statistics were electronically generated in the coding process, and agreement on all categories was 90 percent or above. Coding disagreements were resolved through reviewer discussion.
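The 90-percent intercoder agreement statistic described above can be illustrated with a minimal percent-agreement calculation. The sketch below is illustrative only: the category names and codings are hypothetical, and GAO's actual statistics were generated within its web-based survey system, not with this code.

```python
def percent_agreement(coder_a, coder_b):
    """Share of responses that two coders assigned to the same category."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coder lists must be non-empty and equal length")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical codings of ten open-ended survey responses.
coder_a = ["staffing", "staffing", "priorities", "workload", "staffing",
           "priorities", "workload", "staffing", "priorities", "staffing"]
coder_b = ["staffing", "staffing", "priorities", "workload", "priorities",
           "priorities", "workload", "staffing", "priorities", "staffing"]
print(percent_agreement(coder_a, coder_b))  # 0.9, i.e., 90 percent agreement
```

Disagreements flagged by a statistic like this would then be resolved through reviewer discussion, as the methodology describes.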
In addition, analyses of the closed-ended responses were produced with statistical software. We also interviewed a range of officials about the land acquisition process. These officials included FLTFA program leads at each agency’s headquarters and selected state or regional-level contacts with each agency, as well as officials from third-party organizations involved with the land acquisition process, such as The Nature Conservancy and The Trust for Public Land. We performed our work between November 2006 and February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Completed FLTFA Land Sales, through May 2007

[Table not reproduced: completed FLTFA land sales by BLM field office (including the Eagle Lake, Royal Gorge, White River, Carson City, Las Vegas, and Lakeview offices), listing the purchaser and sale date for each transaction.]
Appendix III: Detailed Information on Planned FLTFA Land Sales through 2010, as Reported by BLM State Offices

[Table not reproduced: planned FLTFA land sales, listing acreage, fair market value, and sale method for each parcel. Most planned sales are competitive sales in the public interest for expansion of community and economic development; others are direct sales to resolve unauthorized use or occupancy (agricultural or residential) or to accommodate use on adjoining lands (sewage treatment ponds). Some information was not provided.]

Appendix IV: Comments from the Department of the Interior

Appendix V: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to those named above, Andrea Wamstad Brown, Assistant Director; Mark Keenan; Emily Larson; John Scott; and Rebecca Shea made key contributions to this report. Also contributing to the report were Anthony Covacevich, Rich Johnson, Paul Kinney, and Carol Herrnstadt Shulman.
The U.S. Department of the Interior's Bureau of Land Management (BLM), Fish and Wildlife Service, and National Park Service, and the U.S. Department of Agriculture's Forest Service manage about 628 million acres of public land, mostly in the 11 western states and Alaska. Under the Federal Land Transaction Facilitation Act (FLTFA), revenue raised from selling BLM lands is available to the agencies, primarily to acquire nonfederal land within the boundaries of land they already own--known as inholdings, which can create significant land management problems. To acquire land, the agencies can nominate parcels under state-level interagency agreements or the Secretaries can use their discretion to initiate acquisitions. FLTFA expires in 2010. GAO was asked to determine (1) FLTFA revenue generated, (2) challenges to future sales, (3) FLTFA expenditures, and (4) challenges to future acquisitions. To address these issues, GAO interviewed officials and examined the act, agency guidance, and FLTFA sale and acquisition data. From FLTFA's enactment in 2000 through May 2007, BLM raised $95.7 million in revenue, mostly from selling about 17,000 acres. About 92 percent of the revenue raised, or $88 million, has come from land transactions in Nevada--1 of the 11 western states. Nevada accounts for the lion's share of the sales because of a rapidly expanding population, plentiful BLM land, and experience with federal land sales in southern Nevada. Most BLM field offices have not generated sales revenue under FLTFA. BLM faces several challenges to raising revenue through future FLTFA sales. In particular, BLM state and field officials most frequently cited the limited availability of knowledgeable realty staff to conduct the sales. These staff are often not available because they are working on higher priority activities, such as reviewing and approving energy rights-of-way. GAO identified two additional issues hampering land sales activity under FLTFA.
First, while BLM has identified land for sale in its land use plans, it has not made the sale of this land a priority during the first 7 years of the program. Furthermore, BLM has not set goals for sales or developed a sales implementation strategy. Second, GAO found that some of the additional land BLM has identified for sale since FLTFA was enacted would not generate revenue for acquisitions because the act only allows the deposit of revenue from the sale of lands identified for disposal on or before the date of the act. The four land management agencies have spent $13.3 million of the $95.7 million in revenue raised under FLTFA: $10.1 million using the Secretaries' discretion to acquire nine parcels of land and $3.2 million for administrative expenses to prepare land for FLTFA sales. The agencies acquired the land between August 2007 and January 2008--more than 7 years after FLTFA was enacted, and BLM spent the administrative funds between 2000 and 2007, primarily for preparing FLTFA sales in Nevada. As of October 2007, no land had been purchased through the state-level interagency nomination process, which the agencies envisioned as the primary mechanism for acquiring land. Agencies face several challenges to completing future land acquisitions under FLTFA. Most notably, the act requires that the agencies use most of the funds to purchase land in the state in which the funds were raised; this restriction has had the effect of making little revenue available for acquisitions outside of Nevada. Furthermore, progress in acquiring priority lands has been hampered by weak agency performance in identifying inholdings and setting priorities for acquiring them, as required by the act. 
In addition, GAO found that the agencies have not established procedures to track implementation of the act's requirement that at least 80 percent of FLTFA revenue raised in each state be used to acquire inholdings in that state or the extent to which BLM is complying with agreed-upon fund allocations among the four participating agencies. Of the revenue generated by FLTFA sales, the agencies have agreed to allocate 60 percent to BLM, 20 percent to the Forest Service, and 10 percent each to the Fish and Wildlife Service and the Park Service.
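The allocation rules summarized above reduce to simple arithmetic. The sketch below uses the percentages stated in the report (the 80 percent in-state requirement and the agencies' agreed 60/20/10/10 split), but the function itself is purely illustrative and is not BLM's accounting method; integer math is used so whole-dollar amounts come out exact.

```python
# Agreed-upon FLTFA revenue shares, as percentages (from the report).
AGENCY_SHARES_PCT = {
    "BLM": 60,
    "Forest Service": 20,
    "Fish and Wildlife Service": 10,
    "Park Service": 10,
}

def allocate(state_revenue):
    """Return the in-state acquisition minimum (80 percent) and the
    per-agency split of FLTFA revenue raised in one state, in whole dollars."""
    in_state_minimum = state_revenue * 80 // 100
    by_agency = {agency: state_revenue * pct // 100
                 for agency, pct in AGENCY_SHARES_PCT.items()}
    return in_state_minimum, by_agency

minimum, split = allocate(88_000_000)  # Nevada's reported $88 million
print(minimum)       # 70400000
print(split["BLM"])  # 52800000
```

Applied to the $88 million raised in Nevada, at least $70.4 million would have to be used for inholdings in that state, which illustrates why little revenue is available for acquisitions elsewhere.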
Background

Health insurance coverage often includes beneficiary contributions, which require an insured individual to pay some portion of medical expenses. The medical expenses charged to an individual—particularly for certain types of beneficiary contributions—can vary depending on the amount and type of services used. The two most common forms of beneficiary contribution requirements—health insurance premiums and cost sharing—differ in the method and frequency with which they are applied. Premiums are charged at regular intervals, such as monthly, and generally the same amount is charged each time. In contrast, cost sharing charges can vary depending on the amount and type of services used. There are three types of cost sharing arrangements: coinsurance, copayments, and deductibles (see table 1). Among low-income populations, approximately 40 percent of children and nondisabled adults had at least one nonpreventive physician visit during 2000. Among these individuals, children averaged close to three nonpreventive physician visits per year, while nondisabled adults averaged fewer than five visits per year. Similarly, for individuals who filled at least one prescription, the average number of filled prescriptions ranged from approximately 4 per year for children to over 32 per year for adults with disabilities. (See app. I for more information on beneficiary service utilization.) Medicaid and SCHIP generally limit the use of beneficiary contribution requirements. The following sections contain specific information about the programs and the federal laws pertaining to their use of beneficiary contributions.

Medicaid

Established in 1965, Medicaid is a joint federal-state entitlement program that finances health care coverage for certain low-income families, children, pregnant women, and individuals who are aged or disabled.
In fiscal year 2001, there were more than 46 million Medicaid enrollees, over half of whom were children, and federal and state expenditures totaled $228 billion. Medicaid eligibility is based in part on family income and assets; states set their eligibility criteria within broad federal guidelines. Eligibility criteria for each state’s Medicaid program are outlined in a CMS-approved state plan. Medicaid allows states to require certain beneficiaries to contribute to the cost of their coverage by charging premiums and requiring cost sharing. The populations that can be required to make beneficiary contributions under federal law differ depending on the type of beneficiary contribution—premiums or cost sharing—and the law places limits on the amounts of the contributions states can require. Federal law generally bars states from requiring beneficiary contributions of certain populations, but exceptions do exist. Additionally, states may seek federal approval to waive certain provisions regarding beneficiary contributions.

Federal Law Governing Premiums in Medicaid

States are prohibited from requiring premiums from certain low-income individuals within certain groups, including children, pregnant women, individuals in families with dependent children, individuals with disabilities, and elderly persons, but exceptions exist. Specifically, in Medicaid, the law allows states to require premiums from certain populations, such as certain working individuals with disabilities and families. (See table 2 for examples of these exceptions.) Additionally, states are allowed to charge premiums to medically needy individuals—generally, people who fall into one of the eligibility coverage groups indicated above, but who incur medical expenses such that their income, less these expenses, makes them eligible for Medicaid.
If states require premiums for medically needy individuals, the regulations specify that the premiums be assessed on a sliding scale, from $1 to $19 per person per month, on the basis of the family’s total gross income. Federal law prohibits states from applying cost sharing requirements for certain individuals and certain services. Specifically, cost sharing may not be charged for categorically and medically needy children under 18 years of age, or for pregnant women for services related to the pregnancy or to conditions that could complicate the pregnancy. Additionally, cost sharing may not be charged for the categorically and medically needy for services furnished to individuals residing in a nursing home or other institution, who were required to spend most of their income for medical care; services furnished to individuals receiving hospice care; emergency services; and family planning services and supplies. States may require nominal copayments, coinsurance, or deductibles within federal limits from other beneficiaries or for other services (see table 3). Beneficiaries may be charged only one type of cost sharing per service. Providers may collect cost sharing amounts from beneficiaries and generally are not to be reimbursed by the state if they are unsuccessful in collecting cost sharing from beneficiaries. Providers generally may not deny services if beneficiaries are unable to pay cost sharing amounts. States must seek permission from the federal government to charge premiums or cost sharing beyond what is allowed under Medicaid. Under section 1115 of the Social Security Act, the Secretary of Health and Human Services has broad authority to approve demonstration projects that he determines are likely to promote Medicaid objectives. The Secretary may waive certain provisions of the statute if the Secretary finds it necessary for the performance of the experimental, pilot, or demonstration projects.
Section 1115 waivers have been used to provide coverage to individuals not normally eligible for Medicaid—or to expand coverage to those who are eligible under Medicaid but are not included in the scope of the state’s plan. Beneficiary contribution requirements for individuals who become eligible for Medicaid through an 1115 waiver may be approved at the Secretary’s discretion, subject to some limitations. CMS reviews states’ proposed beneficiary contribution requirements for 1115 waivers as part of the waiver approval process and specifies any terms and conditions that a state must adhere to as a condition of the waiver approval. According to CMS, because the provisions of Medicaid law related to limitations on beneficiary contributions are applicable only to persons eligible under the state plan, specific waivers of the beneficiary contribution provisions are not always necessary. Waivers are necessary when states want to charge premiums or cost sharing amounts that are generally prohibited under federal law for individuals who are already covered under the state’s plan. As of February 2004, two states—Arkansas and Vermont—have received approval to charge individuals premiums and one state—Arizona—has received approval to charge individuals both premiums and cost sharing. For other populations, specific waivers of requirements regarding beneficiary contributions are not necessary. In particular, states are permitted to charge beneficiary contributions in excess of what would otherwise be permitted for populations who, without a waiver, would not be eligible for coverage under the state’s Medicaid plan. For these populations, states are permitted to end coverage for beneficiaries who fail to pay premiums or deny services to those who fail to pay cost sharing. 
As of February 2004, of the 22 states with statewide 1115 waivers, 21 states covered populations in their Medicaid program for which the Medicaid statutory provisions regarding limits on beneficiary contributions are not applicable. In 1997, Congress established SCHIP, which provides health care coverage to low-income, uninsured children living in families whose incomes exceed the states’ eligibility limits for Medicaid. SCHIP covered over 5.8 million children during fiscal year 2003, and federal and state expenditures were approximately $6.1 billion. States have three options in designing SCHIP—expand their Medicaid program, develop a separate child health program that functions independently of Medicaid, or combine these two approaches. The approach that a state chooses affects its beneficiary contribution policies. A state that uses its SCHIP allocation to expand Medicaid must follow Medicaid rules—thus SCHIP beneficiaries are subject to the state’s Medicaid policies with regard to premiums and cost sharing. For a state with a separate SCHIP program, federal law limits the premium and cost sharing amounts it may charge. States with a separate SCHIP program are prohibited from requiring premium or cost sharing contributions together totaling more than 5 percent of family income. States with separate SCHIP programs are also prohibited from charging any cost sharing on preventive services. In addition, for children in families with income at or below 150 percent of the FPL, there are specific limits on the amounts of premiums and cost sharing that states may charge in a separate SCHIP program (see table 4). For these individuals, federal regulation also prohibits states from requiring more than one type of cost sharing charge on each service. Additionally, regardless of family income or a state’s SCHIP design, states are prohibited from charging premiums or cost sharing to American Indians or Alaska Natives. 
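The 5 percent aggregate limit for separate SCHIP programs described above is a straightforward comparison against family income. The following sketch is illustrative only (the function name and the example dollar figures are hypothetical, not drawn from any state's program); integer comparison is used so no rounding is involved.

```python
def exceeds_schip_cap(annual_family_income, annual_premiums, annual_cost_sharing):
    """True if combined premiums and cost sharing exceed 5 percent of annual
    family income, the aggregate limit for separate SCHIP programs.
    Comparing total * 20 against income avoids floating-point arithmetic,
    since 5 percent of income equals income / 20."""
    return (annual_premiums + annual_cost_sharing) * 20 > annual_family_income

# A family with $30,000 in income may be charged at most $1,500 per year.
print(exceeds_schip_cap(30_000, 1_200, 200))  # False: $1,400 is under the cap
print(exceeds_schip_cap(30_000, 1_200, 400))  # True: $1,600 exceeds the cap
```

A check of this kind applies only to separate SCHIP programs; SCHIP Medicaid expansions follow Medicaid's contribution rules instead.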
Similar to Medicaid, to require premiums or cost sharing in SCHIP beyond what is permissible under federal law, states must seek waivers from the Secretary of Health and Human Services. In establishing SCHIP, Congress extended the applicability of section 1115 of the Social Security Act to SCHIP “in the same manner” as it applies to states under Medicaid. According to CMS, six states with SCHIP programs that are Medicaid expansions have received section 1115 waivers to require beneficiary contributions that would be allowable in a separate SCHIP program. In some cases, 1115 waiver approvals have allowed states to increase cost sharing in their premium assistance programs—programs in which the state helps individuals gain access to available employer-based insurance by using SCHIP funds to pay for part of an individual’s share of the cost of coverage. Specifically, two states—Illinois and Oregon—have waivers to allow for increased cost sharing for children in such premium assistance programs.

Children Were More Likely to Be Subject to Beneficiary Contributions in SCHIP than in Medicaid

In response to our survey, states reported that children were more likely to be subject to premiums and cost sharing in SCHIP than in Medicaid. Overall, 26 states charged premiums for some portion of children—“some,” “most,” or “all”—in SCHIP, and 9 states charged premiums, through the use of 1115 waivers, for some portion of children in Medicaid. Twenty-five states charged cost sharing for children in SCHIP compared to six states for Medicaid. Most states that reported charging cost sharing applied copayment requirements to the six services we reviewed. In addition, the amounts of beneficiary contributions required for children varied on the basis of factors such as family income.
Premiums

Twenty-six states reported charging premiums for some portion of children in SCHIP, compared to 9 states for Medicaid: 5 states charged premiums for some portion of children in both Medicaid and SCHIP, 21 states charged premiums for SCHIP children only, and 4 states charged premiums for Medicaid children only. (See table 5.) Although federal law generally prohibits states from charging premiums for children in Medicaid, some states reported having received waivers from the Secretary of Health and Human Services granting them authority to do so. Of the nine states charging premiums for children in Medicaid, six states required premiums for children included in their 1115 waiver populations only. For example, Rhode Island charged premiums only for children with incomes between 150 and 250 percent of the FPL, all of whom became Medicaid eligible through its 1115 waiver. The remaining three states—Arizona, Arkansas, and Vermont—also had 1115 waivers but had received approval to waive provisions related to premium requirements. Thus, they were allowed to charge premiums for children. States generally are not allowed to charge premiums for children in their SCHIP Medicaid expansion programs, as these programs follow the law governing the Medicaid program. According to CMS, six states have received SCHIP 1115 waivers to require beneficiary contributions for children in their SCHIP Medicaid expansion programs. Three of those states—Missouri, Rhode Island, and Wisconsin—used their 1115 waiver to implement premiums for some portion of their SCHIP beneficiaries. The remaining three states—Arkansas, New Mexico, and Ohio—did not charge premiums for children in their SCHIP program. Among states with premium requirements for children, SCHIP programs often reported charging premiums for a larger proportion of their children than did Medicaid programs (see app. II).
Ten of the 26 states charging premiums for children in SCHIP required them for all or most of their SCHIP children. In contrast, all nine of the states with premiums for children in Medicaid required them for only some of the population. The amount of premiums required for Medicaid and SCHIP children varied across and within states. (See app. III for the range in premiums for all states.) Some states reported varying premium amounts on the basis of beneficiaries’ family income, and some states reported capping the amount of premiums a beneficiary could be subject to in a given year. (See table 6.) The following are examples of the variation in states’ premium requirements. In Vermont, Medicaid premiums were assessed for eligible children in families with incomes above 185 percent of the FPL, and amounts varied from $25 to $35 a month depending on the family income. Medicaid programs in Rhode Island and Minnesota limited total yearly premium amounts to 4 percent and 7.5 percent of annual family income, respectively. In SCHIP, monthly premiums in Washington were $10 per child, with a cap of $360 per family per year. In New York, monthly premiums for families with incomes between 133 and 185 percent of the FPL were $9 per eligible child with a cap of $27 per family per month; families with incomes above 185 percent of the FPL were charged $15 per eligible child with a cap of $45 per family per month.

Cost Sharing

In requiring cost sharing amounts, states reported relying on copayments and generally did not report using the other two main types of cost sharing requirements—coinsurance and deductibles. Twenty-five states charged copayments for some portion of children in SCHIP, while six states charged copayments for some portion of children in Medicaid. (See table 7.) With regard to coinsurance, three states charged coinsurance in Medicaid; Alaska and Missouri charged only children aged 18 or over, and Arkansas charged only children in its 1115 waiver program.
Additionally, four states charged coinsurance in SCHIP (Alaska, Arkansas, Colorado, and Utah). None of the states reported using deductibles as a form of cost sharing for children. While federal law prohibits states from charging cost sharing for children in Medicaid under age 18, some states require cost sharing to the extent it is permissible under Medicaid provisions or through an 1115 waiver. Of the six states that charged copayments for some portion of Medicaid children, Alaska’s, Missouri’s, and Wisconsin’s copayment requirements applied to children age 18 or over, and Delaware reported charging copayments for nonemergency transportation, requirements that are permissible under federal law. Arkansas charged copayments only to children in its 1115 waiver population. Tennessee, whose entire Medicaid program operates under an 1115 waiver, charged copayments to children in families with incomes at or above the FPL. With regard to cost sharing in SCHIP, six states obtained section 1115 waivers that allowed them to require beneficiary contributions from children in their SCHIP Medicaid expansion programs. Four of the states—Arkansas, Missouri, New Mexico, and Wisconsin—used their 1115 waivers to implement copayments for some portion of their SCHIP beneficiaries. The remaining two states—Ohio and Rhode Island—did not charge copayments for children in their SCHIP programs. Among states with copayment requirements for children, SCHIP programs were more likely than Medicaid programs to charge a larger proportion of their population (see app. IV). Most states that reported charging cost sharing applied copayment requirements to the six health care services that we considered. (See table 8.) In addition, the amount of cost sharing that states charged for the six selected services varied by service and state. For example, in the Texas SCHIP program, copayments varied on the basis of family income, ranging from $2 to $10 per physician visit, and from $25 to $100 per inpatient hospitalization. 
Across states with copayments for physician services, copayment amounts ranged from $1 per visit in Missouri’s Medicaid program and Wisconsin’s Medicaid and SCHIP programs to as high as $25 per visit in Tennessee’s Medicaid program. (See app. V.) Some states varied cost sharing amounts for children on the basis of family income. For example, in Virginia, SCHIP copayments for children in families with incomes from 133 percent to below 150 percent of the FPL were $2 per physician visit or prescription; for children in families with higher incomes, the copayment for these services was $5. Of the six states that charged cost sharing for children in Medicaid, only Tennessee capped cost sharing amounts for children. In SCHIP, seven states set specific caps on cost sharing amounts for a child in a given year. (See table 9.) For example, SCHIP cost sharing was capped at $650 a year in Connecticut and $750 a year in West Virginia. For Adults in Medicaid, Nearly Half the States Assessed Premiums and a Majority Required Cost Sharing Nearly half the states (25) reported assessing premiums for some adults enrolled in Medicaid, and a majority of the states (43) reported requiring cost sharing for some portion of adults, primarily in the form of copayments. Overall, 45 states required some portion of adults to share in the cost of their care by charging premiums, cost sharing, or both. (See fig. 1.) The states that required premiums generally did so on a limited basis, targeting portions of particular population groups, such as certain adults with disabilities. In contrast, the states with cost sharing requirements for adults in Medicaid charged several population groups and a larger portion of each group. Premiums Twenty-five states reported assessing premiums for some portion of their adult Medicaid populations. 
States mainly charged premiums to adults with disabilities (23 states) and parents (9 states), but a few states charged premiums to other adults, such as pregnant women (4 states) and noninstitutionalized elderly individuals (2 states). (See table 10.) (App. VI contains details on the portion of the populations charged premiums in each state.) Generally, states are not permitted to require certain individuals to pay premiums, including elderly persons, individuals with disabilities, and pregnant women. However, certain exceptions exist. For example: Four states (Hawaii, Minnesota, Rhode Island, and Vermont) reported charging premiums to pregnant women through their states’ 1115 waiver programs. Vermont had a waiver of the specific Medicaid provision regarding premium requirements, while the other three states charged pregnant women in their 1115 waiver programs. Hawaii, Rhode Island, and Vermont charged premiums only to pregnant women with incomes exceeding 185 percent of the FPL. In the fourth state, Minnesota, pregnant women with incomes at or below 275 percent of the FPL could choose whether to enroll in the state’s regular Medicaid program or the state’s 1115 waiver program. Only those enrolled in the 1115 waiver program were charged premiums, and failure to pay the required premiums did not result in the women’s disenrollment from the program. As allowed under federal law, states may charge premiums in Medicaid to certain individuals with disabilities, primarily those who are employed. For example, Connecticut reported charging premiums to working individuals with disabilities with incomes above 200 percent of the FPL. These individuals were required to pay a monthly premium equivalent to 10 percent of the portion of their income that exceeded 200 percent of the FPL, minus the amount the individuals or their spouses paid for any other health insurance. Premium amounts and requirements varied significantly across the 25 states. 
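The Connecticut calculation described above (10 percent of the income exceeding 200 percent of the FPL, minus other health insurance payments) can be sketched as follows. This is a minimal illustration, not the state's actual methodology; the FPL figure and function names are assumptions, not values from this report.

```python
# Sketch of a premium formula like Connecticut's for working individuals
# with disabilities: 10 percent of income above 200 percent of the FPL,
# minus payments for other health insurance, charged monthly.
# ANNUAL_FPL is an assumed illustrative figure, not from the report.

ANNUAL_FPL = 8_980  # assumed annual federal poverty level for an individual

def monthly_premium(annual_income: float, other_insurance_monthly: float = 0.0) -> float:
    """Return the monthly premium owed under the sketched formula."""
    threshold = 2.0 * ANNUAL_FPL            # 200 percent of the FPL
    if annual_income <= threshold:
        return 0.0                          # premiums apply only above 200% FPL
    excess_per_month = (annual_income - threshold) / 12
    premium = 0.10 * excess_per_month - other_insurance_monthly
    return max(premium, 0.0)                # other coverage can offset, not refund
```

For an individual earning $12,000 above the threshold, the sketch yields $100 a month before any offset for other insurance.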
For example, in Massachusetts, monthly premiums ranged from $15 for families with incomes at the poverty level to over $928 for families with incomes over 1,000 percent of the FPL. Maine charged premiums equal to 3 percent of families’ net incomes for eligible parents with incomes above 150 percent of the FPL. (See app. VII for the income thresholds and ranges in amounts for premiums charged to adults in each state.) Twelve states capped the amount of premiums that beneficiaries could be subject to in a given year. For example, premiums for working individuals with disabilities in Mississippi were capped at 5 percent of annual income, and in Maine, premiums for some adults were capped at 3 percent of annual income. (See table 11.) Cost Sharing Forty-three states reported requiring adult populations to share in the cost of their care by charging copayments, coinsurance, or deductibles. (See fig. 2.) All 43 states charged copayments for selected services to some portion of adults. Nine of these states also charged coinsurance to some portion of adults. Two of the 43 states—South Carolina and Wisconsin—required a deductible for elderly individuals who received pharmacy—but no other—benefits from the states’ Medicaid programs. For example, all participants in South Carolina’s Medicaid pharmacy program were required to pay a $500 deductible for prescription drugs. Copayments were the predominant form of cost sharing for adults, with states most frequently reporting copayments for adults with disabilities, noninstitutionalized elderly persons, and parents. (See table 12 and app. VIII.) Three states required copayments for pregnant women (Delaware, Virginia, and Wisconsin) for services unrelated to the pregnancy. 
While states generally are prohibited from charging cost sharing, including copayments, for medical services for individuals residing in institutions, Delaware considers nonemergency transportation to be an administrative cost and thus was allowed to charge a $1 copayment. The services for which states most frequently reported charging copayments were physician services and prescription drugs. (See table 13.) Copayment amounts varied depending on the service and the state. Across states, copayments ranged from $.50 to $25 for physician services and prescription drugs. Across the services, most states that required copayments for inpatient hospital services charged higher copayment amounts for this service than for the other five services. For example, Montana’s copayment requirement for inpatient hospital services was $100 per stay, whereas its copayment requirements for the five remaining services we reviewed were $1 to $5. (See app. IX for details on the cost sharing amounts, including copayments, for adults, by state.) In five states, the amount of cost sharing charged varied by income for some portion of adults. For example, copayment amounts for physician services in Utah were $3 or $5 per visit, depending on income. Six states reported placing a cap on the amount of cost sharing an individual could be subject to in a given year. For example, in Pennsylvania cost sharing expenses were capped at $90 per beneficiary every 6 months, and in New Mexico cost sharing amounts for working individuals with disabilities were capped at 3 to 5 percent a year, depending on income. Thirty-Four States Increased and Ten States Decreased the Amount of Beneficiary Contributions From the beginning of their 2001 state fiscal years through August 1, 2003, 34 states reported increasing and 10 states reported decreasing the amount of beneficiary contributions they required in Medicaid, SCHIP, or both. 
We considered states to have increased beneficiary contribution requirements if they either raised the amount of existing contributions or instituted new contribution requirements for certain populations or services. For children, 18 states increased the amount of beneficiary contributions required in Medicaid, SCHIP, or both. For adults in Medicaid, 30 states increased the amount of beneficiary contributions. For the states that provided us information on the amount of beneficiary contribution increases, premium increases to existing requirements ranged from $2 a month to $39 a month. Other states added new premium requirements, some of which were as much as several hundred dollars a month. In contrast, states primarily increased copayment requirements by $5 or less. For a small number of states, however, copayment increases were more significant. New Hampshire SCHIP, for example, increased copayments for ER visits from $25 to $50 per visit. While no states reported decreasing their beneficiary contribution requirements for children in Medicaid, five states decreased these requirements (premiums, cost sharing, or both) for some portion of children in SCHIP, and five other states decreased cost sharing requirements for some portion of adults in Medicaid. Eighteen States Increased and Five States Decreased Beneficiary Contributions for Children From the beginning of their 2001 state fiscal years through August 1, 2003, 18 states reported increasing the amount of beneficiary contributions required for children in Medicaid, SCHIP, or both. Beneficiary contribution requirements were increased solely in Medicaid by 3 states, solely in SCHIP by 12 states, and in both Medicaid and SCHIP by 3 states. During the same period, 5 states decreased the amount of beneficiary contributions required for children, with all decreases occurring in states’ SCHIP programs. Premiums Of the 9 states charging premiums for children in Medicaid, 5 reported increases in premiums. 
Eleven of the 26 states charging premiums for children in SCHIP also reported increased premium amounts. (See table 14.) Some states increased existing premiums, while other states added new premiums, as shown in the following examples. Vermont increased its existing Medicaid monthly premiums by $5 or $9 per household depending on income; it increased its SCHIP monthly premiums by $20 per household. Premiums for newly covered populations of children were added in Arizona’s Medicaid program and Maryland’s SCHIP program. While no states decreased their premiums for children in Medicaid, two states—Kansas and Utah—decreased SCHIP premium amounts. For example, in February 2003, Kansas increased its monthly premium amounts by $20 or $30, depending on family income, and then decreased them by $10 or $15 a few months later. Cost Sharing Delaware was the only one of the 6 states charging copayments for children in Medicaid that reported increasing copayment amounts, compared to 6 of the 25 states charging copayments for children in SCHIP. (See table 15.) Delaware added a copayment in Medicaid for nonemergency transportation services in 2002. As described in the following, of the six states that reported increasing SCHIP copayment requirements, two increased existing copayments, and four both increased existing copayments and added new copayment requirements. Missouri and New Hampshire increased existing copayments. For example, New Hampshire increased copayments for nonemergency use of the ER from $25 per visit to $50 per visit and increased copayments for physician visits from $5 to $10. Kentucky, Texas, Utah, and West Virginia made multiple changes to their copayment requirements. For example, Utah added a copayment for dental services for children in families with incomes at or below 150 percent of the FPL and increased copayment amounts for children in families with incomes above 150 percent of the FPL. 
While no states reported decreasing copayment amounts for children in Medicaid, four states did so for SCHIP. Colorado decreased the SCHIP copayment for nonemergency use of the ER from $5 to $3, and Virginia decreased copayments for vision exams from $25 to either $2 or $5, depending on family income. In addition to decreasing copayment amounts, the remaining two states, Texas and Utah, also increased copayments during the same period. Texas’ changes to copayments varied by service and family income. For example, the state decreased the copayment for generic prescription drugs by $1 or $2 for certain SCHIP beneficiaries, while increasing the copayment for brand name prescription drugs by between $3 and $10 for these and other beneficiaries. Copayment increases for other services in Texas ranged from $3 to $50. Utah decreased SCHIP copayment amounts for children in families with incomes at or below 150 percent of the FPL by $2 for physician services, inpatient and outpatient hospital services, and ER services. The state also increased copayments by $5 for physician and ER services, and by $1 for certain prescription drugs, for children in families with incomes above 150 percent of the FPL. While none of the states changed coinsurance requirements for children in Medicaid, Colorado was the only one of the four states charging coinsurance in SCHIP (Alaska, Arkansas, Colorado, and Utah) to increase its coinsurance requirements. Thirty States Increased and Five States Decreased Beneficiary Contributions for Adults Thirty states reported increasing the amount of beneficiary contributions charged to some portion of adults in Medicaid. Most of these states (24) increased copayment amounts; fewer states increased premiums (12) and coinsurance amounts (2). Five states decreased beneficiary contribution requirements, specifically with respect to cost sharing. 
Premiums From the beginning of their 2001 state fiscal years through August 1, 2003, 12 states reported increasing premiums for some portion of adults in Medicaid. Half of these states increased the amount of existing premium requirements. For example, Rhode Island increased monthly premiums from approximately 3 percent of a family’s income to approximately 4 percent, and Vermont increased premiums for certain working individuals with disabilities by $25 to $36 a month, depending on the individual’s income and whether he or she had other insurance. The other half of these states added new premium requirements. For example, in January 2003, Arizona began covering working individuals with disabilities, requiring the new beneficiaries to pay monthly premiums of $15 or $25, depending on their income. In 2002, Washington added a premium for certain families covered under transitional Medicaid assistance. While a few states increased premiums for pregnant women, adults with disabilities, and parents, no states increased premiums for noninstitutionalized elderly beneficiaries. (See table 16.) No states decreased premium amounts for adults during this period. With regard to cost sharing, 25 states reported increasing requirements for some portion of Medicaid adults. Twenty-two of these states increased only copayment requirements, one state increased only coinsurance requirements, and two states increased a combination of cost sharing requirements. States’ cost sharing increases were generally targeted to noninstitutionalized elderly persons, adults with disabilities, parents, and medically needy individuals. (See table 17.) Some states increased the amount of existing cost sharing requirements, while other states added cost sharing requirements for new services, as shown in the following examples: Both Nebraska and South Carolina increased prescription drug copayments by $1, and Utah increased copayments for drugs by $2. 
In North Dakota, copayments for inpatient hospitalization increased from $50 to $75 per stay, and copayments for nonemergency visits to the ER increased from $3 to $6 per visit. Washington implemented a $3 copayment for nonemergency visits to the ER in July 2002, while Oklahoma added $1 to $3 copayments for certain services, such as outpatient hospital services. During this same time period, five states reported decreasing copayment or coinsurance requirements for portions of their adult populations. Specifically, Illinois, Indiana, Maryland, and Montana decreased copayment amounts for some portion of adults. For example, both Illinois and Maryland eliminated their $1 copayments for generic prescription drugs. Only Arkansas decreased coinsurance requirements for adults. In November 2001, the state decreased the coinsurance amount for inpatient hospitalization for most adults by 12 percentage points, from 22 percent of the cost of the first day of hospitalization to 10 percent. Agency Comments We asked CMS officials to verify the technical accuracy of the statutory and regulatory information on Medicaid and SCHIP beneficiary contributions presented in the background section of this report. These officials provided technical comments, which we have incorporated as appropriate. Because we did not evaluate CMS’s management of the Medicaid and SCHIP programs, we did not ask CMS to comment on other sections of this report. As agreed with your offices, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-7118 or Carolyn Yocom at (202) 512-4931 if you have questions about this report. 
Major contributors to this report are listed in appendix X. Appendix I: Service Utilization Rates for Low-Income Individuals The medical expenses charged to an individual—particularly for cost sharing provisions—can vary depending on the amount and type of services used. The Medical Expenditure Panel Survey (MEPS) provides data on individuals’ annual utilization of medical services. MEPS, conducted by the Agency for Healthcare Research and Quality (AHRQ), consists of four surveys, including the Household Component, which provides nationally representative data on health care use and expenditures for the U.S. civilian noninstitutionalized population. The MEPS Household Component is a survey of individuals regarding their demographic characteristics, health insurance coverage, and health care use. At the time of our analysis, the 2000 version of the MEPS Household Component was the most recent version with all of the necessary data available. To determine service utilization for low-income populations, we included individuals with incomes below 200 percent of the FPL. For this cohort, we analyzed data for the following five population groups: (1) children (defined as individuals under age 18), (2) pregnant women aged 18 and over, (3) elderly persons—individuals aged 65 and over, (4) adults aged 18 to 64 with disabilities, and (5) nondisabled adults aged 18 to 64. For each of these population groups, we calculated the proportion of the population that used the following five services—(1) inpatient hospital, (2) outpatient hospital, (3) physician, (4) prescription drug, and (5) dental—at least once during the year (see table 18). For example, approximately 38 percent of children had a nonpreventive physician visit during the year, and almost 79 percent of adults with disabilities visited the physician for nonpreventive care. For the individuals in each population group who used a service, we calculated their average utilization rates for each of the selected services. 
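The two measures described above, the share of a group using a service at least once and the average utilization among users only, can be sketched as follows. The visit counts are hypothetical illustrations, not MEPS data.

```python
# Sketch of the two MEPS-based measures described above: the proportion of a
# population group that used a service at least once during the year, and the
# average utilization computed only over those who used the service.
# Visit counts below are hypothetical, not actual MEPS data.

def share_using_service(annual_visits: list[int]) -> float:
    """Proportion of the group with at least one use of the service."""
    return sum(1 for v in annual_visits if v > 0) / len(annual_visits)

def average_among_users(annual_visits: list[int]) -> float:
    """Average annual uses, computed only over individuals who used the service."""
    users = [v for v in annual_visits if v > 0]
    return sum(users) / len(users) if users else 0.0
```

For example, for visit counts [0, 0, 2, 4], half the group used the service, and the users averaged three visits.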
The utilization rates for each service, displayed in table 19, represent the average use among individuals who used that particular service at least once during the year. Additionally, since federal law generally does not allow states to charge Medicaid cost sharing for emergency services, we calculated the utilization rates for nonemergency physician and dental visits by excluding visits classified in MEPS as emergencies. Similarly, since federal law generally does not allow states with separate SCHIP programs to require cost sharing for preventive medical or dental visits, we excluded certain types of visits we considered preventive, such as well-child exams and dental visits for teeth cleaning. Appendix II: Premium Requirements for Children in Medicaid and SCHIP, by State, as of August 1, 2003 State only charged premiums to some portion of children with special needs. State did not charge premiums, but had an enrollment fee. Tennessee, which operates its entire Medicaid program under an 1115 waiver, charged premiums for some children in families with incomes at or above the FPL. Tennessee did not have a SCHIP program. Texas also had an enrollment fee. Appendix III: Premium Amounts for Children in Medicaid and SCHIP, by State, as of August 1, 2003 [Table: the lowest percentage of the FPL at which each state charged premiums and the range of premium amounts, by state.] In Minnesota, families could choose to enroll their children in either the state’s regular Medicaid program or its 1115 waiver program, both of which covered children from families with incomes up to 275 percent of the FPL. Children in families that chose to enroll in the 1115 waiver program were charged premiums regardless of their family income. 
Thus, families with incomes less than 1 percent of the FPL could choose to pay premiums. Appendix IV: Copayment Requirements for Children in Medicaid and SCHIP, by State, as of August 1, 2003 Arkansas charged copayments to all children in its 1115 waiver program, but did not charge copayments to other children. Delaware’s only copayment, which the state charged to all populations in its Medicaid program, was for nonemergency transportation services. Although Delaware did not charge copayments to children in SCHIP, the state did charge a fee for inappropriate use of the emergency room. Maryland’s SCHIP program did not charge copayments, but SCHIP beneficiaries receiving coverage through Maryland’s employer-sponsored insurance program may be charged copayments by their health plan. Tennessee, which operates its entire Medicaid program under an 1115 waiver, charged copayments for some children in families with incomes at or above the FPL. Tennessee did not have a SCHIP program. Appendix V: Cost Sharing Amounts for Children in Medicaid and SCHIP, by State, as of August 1, 2003 NA = Not applicable. The state did not charge cost sharing for this service. Delaware charged a $1 copayment for nonemergency transportation. Missouri did not have a copayment for prescription drugs in Medicaid, but some children were charged a dispensing fee for prescriptions. 
Appendix VI: Premiums for Adult Populations in Medicaid, by State, as of August 1, 2003 The following states did not charge premiums to any adults in Medicaid: Alabama, Arkansas, Colorado, Delaware, District of Columbia, Florida, Georgia, Idaho, Kentucky, Louisiana, Maryland, Michigan, Montana, Nevada, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, South Carolina, South Dakota, Texas, Virginia, and West Virginia. This population includes working adults with disabilities. States may require premiums from certain working adults with disabilities who received Medicaid coverage under the Balanced Budget Act of 1997 or the Ticket to Work and Work Incentives Improvement Act of 1999. Population not covered in the state’s Medicaid program. State charged premiums to all working individuals with disabilities, but did not charge premiums to other adults with disabilities. State charged premiums to some portion of childless adults. Maine charged premiums to individuals in the state’s HIV/AIDS waiver program. Tennessee, which operates its entire Medicaid program under an 1115 waiver, charged premiums to some adults enrolled in the state’s 1115 waiver program who had incomes at or above the poverty level. Not applicable: Tennessee did not report information based on these population groups. Utah charged an enrollment fee to all adults enrolled in the state’s primary care waiver program. Vermont charged premiums to some adults enrolled in the state’s 1115 waiver program. 
Appendix VII: Premium Amounts for Adults in Medicaid, by State, as of August 1, 2003 [Table: the lowest percentage of the FPL at which each state charged premiums, the unit charged (individual, couple, or household), and the range of premium amounts, by state.] This appendix reflects the range in premiums states charged across their entire adult populations. The lowest income level at which an adult could be charged premiums in this state’s Medicaid program equated to less than 1 percent of the FPL. However, for certain populations, there were higher income thresholds at which the state began charging premiums. Pennsylvania charged premiums only for working individuals with disabilities whose incomes were below 250 percent of the FPL. Appendix VIII: Copayment Requirements for Adults in Medicaid, by State, as of August 1, 2003 The following states did not charge copayments to any adults in Medicaid: Hawaii, Idaho, Michigan, Nevada, New Jersey, Ohio, Rhode Island, and Texas. Population not covered in the state’s Medicaid program. Alaska also charged copayments to all individuals qualifying for transitional Medicaid assistance. Delaware’s only copayment, which the state charged to all populations in its Medicaid program, was for nonemergency transportation services. Maine also charged copayments to all individuals enrolled in its HIV/AIDS waiver program and all individuals in its comprehensive 1115 waiver program. In addition, individuals participating in Missouri’s 1115 waiver program, which extends 12 months of additional coverage to working parents or caretakers, were also charged copayments. 
As of January 2004, this program had approximately 2,400 beneficiaries. Nebraska also charged copayments to most individuals in its refugee resettlement program. Oregon also charged copayments to most childless adults. Pennsylvania also charged copayments to most adults in its general assistance program. State also charged copayments to all individuals in its Medicaid pharmacy program. Tennessee, which operates its entire Medicaid program under an 1115 waiver, charged copayments to some adults enrolled in the state’s 1115 waiver program who had incomes at or above the poverty level. Not applicable: Tennessee did not report information based on these population groups. Utah also charged copayments to all individuals enrolled in its primary care waiver program. Vermont also charged copayments to all individuals enrolled in its 1115 waiver program. Appendix IX: Cost Sharing Amounts for Adults in Medicaid, by State, as of August 1, 2003 [Table: cost sharing amounts by state for the six selected services, including nonemergency use of the emergency room; amounts ranged from copayments of $0.50 to percentage-based coinsurance, such as 10 percent of the allowable Medicaid payment.] NA = Not applicable. The state did not charge cost sharing for this service. This appendix reflects cost sharing amounts charged by states for the services and portions of the Medicaid adult populations subject to cost sharing charges. The amount of cost sharing and the services subject to cost sharing may vary within a state by population. See appendix VIII for details on the adult populations subject to copayment requirements in Medicaid. Delaware’s only cost sharing was a $1 copayment for nonemergency transportation. Illinois did not require cost sharing for all procedures within this service. 
Appendix X: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments Major contributors included Catina Bradley, Janice Raynor, Michelle Rosenberg, Kevin Milne, and Elizabeth T. Morrison.
Over 50 million low-income adults and children receive health insurance coverage through Medicaid and the State Children's Health Insurance Program (SCHIP). Federal law allows states to require beneficiary contributions, such as premiums and cost sharing (coinsurance, copayments, and deductibles), for at least some Medicaid and SCHIP beneficiaries. GAO was asked to (1) identify and compare states' Medicaid and SCHIP beneficiary contribution requirements for children, (2) identify states' Medicaid beneficiary contribution requirements for adults, and (3) determine the extent to which states' Medicaid and SCHIP beneficiary contribution requirements have changed since 2001. GAO surveyed Medicaid and SCHIP program offices in the 50 states and the District of Columbia about their beneficiary contribution requirements as of August 2003, including their requirements for specific population groups and for six selected services, such as inpatient hospital, physician services, and prescription drugs. For each population group covered, states were asked to indicate the portion of the group charged beneficiary contributions by selecting "all," "most," "some," or "none." GAO also interviewed officials of the Centers for Medicare & Medicaid Services (CMS) regarding the Medicaid and SCHIP statutory requirements for beneficiary contributions. GAO's survey found that children were more likely to be subject to beneficiary contributions, specifically premiums and cost sharing, in SCHIP than in Medicaid. Overall, 26 states reported charging premiums for a portion of children--"some," "most," or "all"--in SCHIP, compared to 9 states in Medicaid. Twenty-five states charged cost sharing for some portion of children in SCHIP, compared to 6 states for Medicaid. States used copayments as the primary form of cost sharing for children. Most states that reported charging cost sharing applied copayment requirements to the six health care services. 
Most states reported requiring beneficiary contributions from adults enrolled in Medicaid. Twenty-five states charged premiums, generally charging portions of certain populations, such as adults with disabilities. Over 40 states charged cost sharing to most, if not all, adults, including those with disabilities, noninstitutionalized elderly persons, and parents. Copayments were the predominant form of cost sharing. States most frequently reported copayments for prescription drugs and physician services. From the beginning of their 2001 state fiscal years through August 1, 2003, 34 states reported increasing and 10 states reported decreasing the amount of beneficiary contributions required in Medicaid, SCHIP, or both. For the 33 states that provided information on the amount of increases, premium increases to existing requirements ranged from $2 a month to $39 a month. Other states added new premium requirements, some of which were as much as several hundred dollars a month. Reported copayment increases were generally limited to $5 or less. GAO asked CMS officials to provide technical comments on the statutory and regulatory information on Medicaid and SCHIP beneficiary contributions, which were incorporated as appropriate.
Mutual Fund Fees Appear to Have Risen Recently Data from others and our own analysis indicate that mutual fund fees may have increased recently. Studies by SEC and ICI found that expense ratios for mutual funds overall have increased since 1980. Our own analysis finds that average expense ratios for large stock funds have increased since 1998, but those for large bond funds have declined since then. Recent Studies Indicate that Mutual Fund Expense Ratios Have Increased Since we issued our report in 2000, the staff at SEC have published a study of mutual fund fees that showed that fund expense ratios have increased. The SEC staff study measured the expense ratios of all stock and bond mutual funds between 1979 and 1999. The study used a weighted average of mutual funds in order to give more weight to funds with more assets. The study found that the average expense ratio for these funds rose from 0.73 percent in 1979 to 0.94 percent in 1999. However, the staff noted that the increase in mutual fund expense ratios since the 1970s can be attributed primarily to changes in the manner in which mutual funds and their shareholders pay for distribution and marketing expenses. Over this period, many funds have decreased or replaced front-end loads, which are not included in a fund’s expense ratio, with ongoing rule 12b-1 fees, which are included in a fund’s expense ratio. Front-end loads are charged to investors as a percentage of the initial investment when they buy shares and are used to compensate financial professionals, such as the investor’s broker or financial planner. Using a different methodology, ICI also published a series of studies that show that, although expense ratios may be rising, the overall cost of investing in mutual funds has decreased.
ICI’s studies attempt to measure what it calls the “total shareholder cost” of investing in mutual funds by considering both a fund’s operating expense ratio and any sales charges, such as loads, that investors paid when investing in that fund. To determine the average total cost of investing in funds as a percentage of fund assets, ICI also weights each individual fund’s total cost by the fund’s sales each year. By using sales to weight each fund’s contribution to the overall average, ICI indicates that it is attempting to present the cost of the actual investment choices made by investors purchasing mutual fund shares in particular years. In its latest study using this methodology, ICI reports that the total shareholder cost for equity funds fell from 2.26 percent of fund assets in 1980 to 1.28 percent in 2001, and that the total cost of investing in bond funds declined from 1.53 percent to 0.90 percent during the same period. According to ICI’s study, the primary reason that the total cost of mutual fund investing has declined is the reduction in sales and other distribution costs paid by mutual fund investors over this period. For example, ICI finds that the average load has fallen from 7.0 percent of the dollar value of investors’ purchases to 5.2 percent, and sales of shares not subject to such loads have also increased. For example, some funds waive the load for certain investors, such as retirement plans. Some industry participants have criticized ICI’s methodology. As we discussed in our June 2000 report, analysts at one industry research organization acknowledged that the ICI data may indicate that the total cost of investing in mutual funds has declined. However, they said that because ICI weighted the fund fees and other charges by sales volumes, the decline ICI reports results mostly from actions taken by investors rather than by the advisers of mutual funds.
These research organization officials noted that ICI acknowledged in its study that about half of the decline in fund costs resulted from investors increasingly purchasing shares in no-load funds. Although ICI’s study shows that the total cost of investing in funds may be declining, it also shows that stock funds’ expense ratios have risen. According to ICI’s September 2002 study, the average stock fund operating expense ratio has risen from 0.77 percent in 1980 to 0.88 percent in 2001. ICI’s study also shows that the average expense ratio of the stock funds it reviewed has continued to rise in recent years, from 0.83 percent in 1998 to 0.88 percent in 2001. ICI attributes this increase to two factors. First, funds with higher expense ratios, such as aggressive growth funds or international stock funds, have been popular lately, and increased sales of these funds would increase the overall average. Second, the decline in assets experienced by many stock funds as a result of the market decline since 2000 also means that such funds have fewer assets over which to spread their fixed operating costs, and thus their expense ratios would rise as a percentage of their assets. Recent press reports have also indicated that fees for mutual funds may be increasing. For example, a March 2003 press report presented data from Lipper, Inc., a mutual fund research service, that shows that the median expense ratio for stock funds increased from 1.30 percent in 1998 to 1.46 percent in 2002. Our Analysis Shows that Average Fees for Large Stock Funds Have Increased Recently, but Fees for Large Bond Funds Have Declined Although our June 2000 report found that fees for large stock and bond funds had generally declined between 1990 and 1998, analysis of recent years shows that the average expense ratios for large stock funds have risen since 1998 while fees for bond funds have continued to decline.
For our June 2000 report, we analyzed the change in expense ratios from 1990 to 1998 for 77 large stock and bond mutual funds, which, because of their growth during this period (collectively averaging over 600 percent), were likely to have experienced economies of scale in their operations that would allow them to reduce their expense ratios. To calculate the average expense ratios for the large mutual funds identified in our previous report, we weighted each fund’s expense ratio by its total assets. The resulting asset-weighted average expense ratios represent the fees an average investor would expect to pay on every $100 invested in these funds during this period. Since our 2000 report, one of the bond funds was liquidated, so our analysis for this statement presents comparable results for 76 funds. As shown in figure 1, the average expense ratio charged by the large stock funds we analyzed, after generally rising during the mid-1990s, declined during the second half of the 1990s and then began rising again. The asset-weighted average expense ratio for these stock funds declined from 0.74 percent in 1990 to 0.70 percent in 2001. However, the average expense ratio of these funds has increased recently by about 8 percent, from 0.65 percent in 1998 to 0.70 percent in 2001. The average expense ratios for the large bond funds also generally declined between 1990 and 2001, from 0.62 percent to 0.54 percent, and, unlike those of the stock funds, have continued to decline since 1998. Various factors may explain the recent rise in stock fund expense ratios. ICI and industry participants attribute recent increases in average expense ratios industrywide to asset declines among stock funds. For example, ICI reported that total assets held by stock funds declined from over $4 trillion in 1999 to about $3.4 trillion at the end of 2001.
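The asset-weighting method described above is a simple computation; the sketch below illustrates it with made-up fund figures (the names, asset levels, and expense ratios are hypothetical, not drawn from GAO's sample of 76 funds).

```python
# Illustrative sketch of an asset-weighted average expense ratio.
# All fund data below are hypothetical.
funds = [
    {"name": "Fund A", "assets": 50_000, "expense_ratio": 0.0070},
    {"name": "Fund B", "assets": 30_000, "expense_ratio": 0.0090},
    {"name": "Fund C", "assets": 20_000, "expense_ratio": 0.0050},
]

total_assets = sum(f["assets"] for f in funds)

# Each fund's ratio is weighted by its share of total assets, so larger
# funds pull the average toward their own expense ratios.
weighted_avg = sum(f["expense_ratio"] * f["assets"] for f in funds) / total_assets

print(f"Asset-weighted average expense ratio: {weighted_avg:.4%}")
```

Because the weighting uses assets rather than a simple average, a low-fee fund that attracts most of the assets lowers the reported average even if smaller funds raise their fees.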
The decline in assets for many stock funds may have contributed to the recent increase in expense ratios because many funds have fee schedules that charge lower management fees at various increments as the fund’s assets increase. As the assets of a fund with such a declining rate fee schedule increase, the additional assets are assessed a lower-percentage rate fee, which results in the fund reporting a lower total expense ratio overall. However, when assets decline, more of the fund’s assets are charged at the higher management fee increments, resulting in an increase in the overall expense ratio of the fund. However, asset declines and resulting increases in some expense ratios do not explain all of the increase in the average expense ratio for the large stock funds we analyzed because the assets of most of these funds continued to grow. Overall, the total assets in the 46 stock funds we reviewed increased from $835 billion in 1998 to over $1,052 billion in 2001. Individually, 28 of the 46 stock funds experienced asset growth between 1998 and 2001, although most of these funds’ assets declined from 2000 to 2001. The decline in the average expense ratio for bond funds shown in figure 1 appeared to arise from stronger asset growth in lower-fee funds. We divided the 30 bond funds in our analysis into two groups: (1) those funds with expense ratios in 1998 that were higher than the 0.60 percent weighted average ratio for all 30 funds and (2) those funds with expense ratios in 1998 that were lower than that average. As shown in table 1, the 16 low-fee funds experienced overall asset growth of about 32 percent, whereas the assets of the 14 high-fee funds declined 16 percent from 1998 to 2001. In addition, the low-fee funds’ average expense ratio declined by 7 percent, whereas the high-fee funds’ ratio decreased only 2 percent.
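The mechanics of a declining rate fee schedule described above can be sketched in a few lines. The tier sizes and rates below are hypothetical, not taken from any actual fund; the point is only that a fixed schedule yields a higher blended rate when assets shrink.

```python
# Hypothetical tiered ("breakpoint") management fee schedule:
# first $500M at 0.75%, next $500M at 0.60%, everything above $1B at 0.50%.
TIERS = [(500_000_000, 0.0075), (500_000_000, 0.0060), (float("inf"), 0.0050)]

def management_fee(assets: float) -> float:
    """Total annual management fee in dollars under the tiered schedule."""
    fee, remaining = 0.0, assets
    for tier_size, rate in TIERS:
        portion = min(remaining, tier_size)
        fee += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return fee

def effective_rate(assets: float) -> float:
    """Blended fee rate: total fee divided by total assets."""
    return management_fee(assets) / assets

# When assets decline, more of the fund sits in the higher-rate tiers,
# so the effective rate rises even though the schedule itself is unchanged.
print(f"Effective rate at $2B:   {effective_rate(2_000_000_000):.4%}")
print(f"Effective rate at $800M: {effective_rate(800_000_000):.4%}")
```

This is the pattern the report describes: the schedule never changes, but an asset decline pushes the fund's overall expense ratio upward.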
Looking specifically at the extent to which individual funds’ expense ratios changed, we found that the expense ratios for the majority of the large stock and bond funds we analyzed had also increased since 1998. As shown in table 2, the expense ratios for 28, or 61 percent, of the 46 large stock funds we analyzed increased from 1998 to 2001. The table also shows that half of these 28 funds had increased their total assets but their expense ratios still rose. However, the majority of these expense ratio increases were less than 10 percent. Table 2 shows four funds whose assets increased by more than 30 percent and whose expense ratios increased by more than 10 percent. However, these four funds’ management fees included provisions that allow the fund adviser to charge a higher rate if the fund’s performance exceeds certain benchmarks. For example, the expense ratio of one of these funds increased from under 0.60 percent in 1998 to 0.88 percent in 2001. This increase is due in large part to the fund’s fee schedule, which calls for part of the fund’s management fee to go up or down between 0.02 percent and 0.20 percent of assets annually, depending on whether the fund’s 3-year performance was better or worse than the return of the S&P 500 index; this fund’s performance did exceed the index’s return. Of the remaining 18 funds we analyzed, most of whose assets increased, their expense ratios either did not change or decreased between 1998 and 2001. The expense ratios for the majority of bond funds that we analyzed also increased. As shown in table 3, the expense ratios for 18, or 60 percent, of the 30 large bond funds we analyzed also increased from 1998 to 2001. Over this period, 14 of the funds’ assets decreased—which could increase their expense ratios because less of their assets would be subject to lower fee rates under a declining rate fee schedule. Four funds’ assets and expense ratios both increased between 1998 and 2001.
However, of the 18 funds with increased expense ratios, the majority of the increases were less than 10 percent. SEC Is Proposing Additional Fee Disclosures, but Other Alternatives Could Provide More Specific Information SEC is proposing that investors receive additional information about mutual fund fees, but other alternatives for disclosing fees exist that could better inform investors of the actual fees they are charged. The SEC proposal would allow fees to be compared across funds, but would present information to investors in dollar amounts using only illustrative investment amounts. In contrast, various alternative means of providing additional fee disclosures would provide dollar amounts calculated using each investor’s own account balance or number of shares owned and present this information in the quarterly statements investors receive that show the value of their mutual fund holdings. Although mutual funds generally do not emphasize the level of their fees in their advertisements, SEC is also proposing that additional disclosures be made in such materials. SEC Proposal Provides Additional Information on Fees Since 1988, SEC has required that mutual fund prospectuses include a table that shows all fees and charges associated with a mutual fund investment as a percentage of net assets. The fee table reflects (1) charges paid directly by shareholders out of their investment, such as front- and back-end sales loads, and (2) recurring charges deducted from fund assets, such as management and 12b-1 fees. The fee table is accompanied by a numerical example that illustrates the aggregate expenses that investors could expect to pay over time on a $10,000 investment if they received a 5-percent annual return and remained in the fund for 1, 3, 5, or 10 years. In addition, SEC adopted requirements in January 2001 that require mutual funds to disclose their after-tax returns.
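The prospectus fee-table example described above (aggregate expenses on a $10,000 investment at an assumed 5-percent annual return over 1, 3, 5, or 10 years) can be roughly approximated as follows. This is a simplified sketch, not the SEC's prescribed computation, and the 0.94 percent expense ratio used is illustrative.

```python
# Rough approximation of the prospectus fee-table expense example:
# the balance grows at the assumed gross return each year, and the
# fund's ongoing expenses are deducted from the grown balance.
def approx_cost(expense_ratio: float, years: int,
                principal: float = 10_000.0, gross_return: float = 0.05) -> float:
    balance, total_expenses = principal, 0.0
    for _ in range(years):
        balance *= 1 + gross_return          # grow at the assumed 5% return
        expense = balance * expense_ratio    # deduct ongoing expenses
        total_expenses += expense
        balance -= expense
    return total_expenses

# Illustrative 0.94% expense ratio over the fee table's standard horizons.
for years in (1, 3, 5, 10):
    print(f"{years:>2} years: ${approx_cost(0.0094, years):,.2f}")
```

Even under this simplified arithmetic, the example shows why small differences in expense ratios compound into meaningfully different dollar costs over a 10-year holding period.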
SEC staff told us that taxes can have an even more significant impact on investors’ returns than fund expenses. In response to the recommendation in our 2000 report that SEC consider additional disclosures regarding fees, SEC released proposed rule amendments in December 2002 whose primary purpose is to require mutual funds to disclose additional information about their portfolio holdings but that would also require additional disclosures about their expenses. Under this proposal, SEC would require that mutual fund investors be provided with information on the dollar amount of fees paid, using preset investment amounts. This information would be presented to investors in the annual and semiannual reports prepared by mutual funds. Specifically, mutual funds would be required to present a table showing the cost in dollars associated with an investment of $10,000 that earned the fund’s actual return and incurred the fund’s actual expenses paid during the period. This disclosure is intended to permit investors to estimate the actual costs in dollars that they bore over the reporting period using the actual return for the period. In addition, SEC is also proposing that mutual funds present in the table the cost in dollars, based on the fund’s actual expenses, of a $10,000 investment that earned a standardized return of 5 percent. This second disclosure would allow investors to more easily compare the differences in the actual expenses of two funds irrespective of any performance differences between the two. SEC is also proposing that a narrative accompany these two new expense disclosures.
The narrative would explain that mutual funds have transaction-based charges, such as loads or fees for exchanging shares of one fund for another, and ongoing costs, as represented by the expense ratio, and that the numerical examples are intended to help shareholders understand these ongoing costs and to compare them with the ongoing costs of investing in other mutual funds. The narrative would also explain the assumptions used in the examples, note that the examples do not reflect any of the transaction-based costs, and advise investors that the examples are useful in comparing ongoing, but not total, costs of investing in different funds. The method of disclosure that SEC is proposing is consistent with one of the alternatives discussed in our June 2000 report. As SEC’s rule proposal states, the two new expense figures being proposed are designed to increase investor understanding of the fees that they pay on an ongoing basis for investing in a fund. The proposed disclosure in shareholder reports would supplement the fee disclosure required in the mutual fund prospectus. According to SEC staff, the new disclosures would be placed in the annual and semiannual reports because these documents contain more information than quarterly statements and thus would allow investors to understand fee information in an appropriate context. SEC staff also believe that providing this information in these reports will allow investors to compare the fees of one fund with those of another. If adopted, we agree that the proposed disclosures would provide investors with additional useful information. SEC has received a wide range of comments on its proposal specific to disclosure of fund expenses. Most comments supported SEC’s proposed requirement to include the dollar cost associated with a $10,000 investment.
For example, one investment advisory firm commented in its letter that the new disclosures SEC is proposing would benefit investors by allowing them to estimate actual expenses and compare costs between different funds in a meaningful way. Some commenters, however, noted that requiring specific dollar disclosures was not necessary, given the potential costs and burdens to mutual fund companies. One large labor union supported SEC’s proposal but encouraged SEC to explore cost-effective methodologies to provide investors with their actual share of fees. An industry association representing attorneys stated in its letter that it generally supported the additional disclosures SEC was proposing but that, given existing disclosure requirements, the benefits of these additional disclosures appeared marginal at best. Alternative Disclosures Could Provide Investors More Specific Information Alternatives to the SEC proposal could offer more investor-specific information. While SEC’s proposed disclosures would provide additional information that investors could use to compare fees across funds, the disclosures in SEC’s 2002 proposed rule amendments would not be investor specific because they would not use an investor’s individual account balance or number of shares owned. In addition, SEC’s proposed placement of these new disclosures in the semiannual shareholder reports, instead of in quarterly statements, may be less likely to increase investor awareness and improve price competition among mutual funds. Quarterly statements, which show investors the number of shares owned and the value of their fund holdings, are generally considered to be of most interest to investors. In our June 2000 report, we offered another alternative for disclosing fee information that would provide shareholders with the specific dollar amounts of fees paid on their shares in their quarterly account statements.
We noted that such disclosure would make mutual funds comparable to other financial products and services, such as bank checking accounts or stock and bond transactions through broker-dealers. As our report noted, such services actively compete on the basis of price. If mutual funds made similar specific-dollar disclosures, we stated, additional competition on the basis of price would likely result among funds. SEC and industry officials raised concerns about requiring specific-dollar disclosures in quarterly statements. They believed that the potential costs associated with accounting for, and reporting, costs on an individual basis could be significant. After our June 2000 report was issued, ICI commissioned a study by a large accounting firm to survey mutual fund companies about the costs of producing such disclosures. This study obtained information from 39 mutual fund companies and entities that provide services to mutual funds. The respondents indicated that the most costly activities necessary to produce specific-dollar disclosures included enhancing current data processing systems; modifying investor communication systems and media; developing new policies and procedures; and implementing employee training and customer support programs. Officials highlighted that, in many cases, mutual fund companies do not have access to the name and account information of the individual shareholders to whom the fee disclosures would be made. Instead, broker-dealers or financial planners maintain account information on the many shareholders who purchase their mutual fund shares through these third parties. The third parties in turn maintain what are called omnibus accounts at the mutual fund. As a result, the mutual fund knows only the total number of shares owned by clients of a particular party, but not how many actual shareholders there are or how many shares each shareholder owns.
To disclose the specific-dollar amount of fees for each of these shareholders would require funds and third parties to communicate daily to receive the specific cost information that would then have to be attributed to each shareholder’s individual account. The ICI study concluded that the aggregate estimated costs for the survey respondents to implement specific-dollar disclosures in shareholder account statements would exceed $200 million, and the annual costs of compliance would be about $66 million. However, this estimate did not include the reportedly significant costs that would be borne by third-party financial institutions, which maintain accounts on behalf of individual mutual fund shareholders. Although ICI’s estimates are significant in the aggregate, when spread over the accounts of many investors, the amounts are less sizeable. For example, ICI reported that at the end of 2001, a total of about 248 million shareholder accounts existed. If the 39 fund companies, which represent 77 percent of industry assets, also maintain about the same percentage of customer accounts, then the 39 companies would hold about 191 million accounts. As a result, apportioning the estimated $200 million in initial costs to these accounts would amount to about $1 per account, and apportioning the estimated $66 million in annual costs would amount to about $0.35 per account. Another option to improve mutual fund fee disclosures would involve calculating estimates of the fund expenses attributable to individual investors. One former fund adviser suggested that mutual funds could provide investors with fairly precise estimates of what they are paying in fees in their quarterly account statements by multiplying the fund’s expense ratio for the prior year by the assets that the shareholder held as of the last day of the year or period.
According to the former fund adviser, this calculation, which would help investors better understand the fees their investments are incurring, could be made at minimum cost to mutual funds. According to some mutual fund officials, however, the expense calculation disclosure presents similar cost concerns and raises other issues. According to ICI staff, mutual funds and third-party financial institutions may have to develop improved communication links to pass the information needed to make this calculation, and thus would incur some of the same costs that specific-dollar disclosures would entail. In addition, mutual fund officials expressed concern that providing investors with estimates could create problems of its own. For example, an estimate calculated on the basis of the investor’s holdings on the closing day of the statement could be highly inaccurate if the number of shares owned by the investor changed dramatically during the period. ICI staff also noted that fund complexes would likely want to include considerable explanatory material or disclaimers about the nature of the estimated information that this type of disclosure would provide. Before requiring mutual fund companies and others to incur such costs to produce these additional disclosures, ICI officials said, the benefits to investors would have to be better quantified. In short, although alternative disclosures could provide investor-specific information in documents that investors receive more frequently, fund companies and other financial institutions would incur costs to produce such additions to the existing reporting made to fund shareholders, and the benefit to investors from receiving this additional information has not been quantified.
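The former fund adviser's suggested estimate amounts to a one-line calculation: the prior-year expense ratio multiplied by the shareholder's period-end balance. The sketch below uses illustrative figures; as the officials above note, it is only an estimate and can be off if the balance changed materially during the period.

```python
# Sketch of the former fund adviser's suggested quarterly-statement
# estimate. Figures are illustrative.
def estimated_fees(expense_ratio: float, period_end_balance: float) -> float:
    """Estimated fees: prior-year expense ratio times period-end balance.

    Note: this is only an approximation; it can be highly inaccurate if
    the shareholder's balance changed dramatically during the period.
    """
    return expense_ratio * period_end_balance

# A shareholder holding $25,000 at period end in a fund with a
# 0.70 percent expense ratio:
print(f"Estimated annual fees: ${estimated_fees(0.0070, 25_000):,.2f}")
```

The appeal of this approach is that the fund already reports both inputs (the expense ratio and the account balance), so no per-account fee accounting is required.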
Mutual Fund Advertisements Usually Do Not Focus on Fees, but SEC Is Proposing Additional Disclosures Although mutual fund officials say that funds compete vigorously against each other, funds generally do not emphasize fees in their advertisements, and SEC is proposing that additional disclosures be made. In our 2000 report, we reported that fund advisers generally do not emphasize the level of their fees when attempting to differentiate their funds from those of their competitors. We recently analyzed 29 different mutual fund advertisements that ran in the 2002 and 2003 mutual fund editions of three major business magazines. Of these, only three advertisements emphasized low management fees, 12b-1 fees, or expense ratios. In addition, while one mutual fund family, which accounted for 9 of the 29 advertisements, frequently advertised that its funds had no loads, the primary emphasis in the majority of advertisements was on other themes such as, in order of their frequency, the importance of long-term investments, risk management, good performance as evidenced by high ratings from mutual fund advisory services, and tax savings. In 2002, SEC proposed amendments to investment company advertising rules. These changes would allow mutual funds to advertise more timely information than that appearing in fund prospectuses and would require more balanced disclosure of information, particularly in the area of past performance. The proposal also includes a provision that would require funds to indicate that information about charges and fees can be found in a fund’s prospectus. Under current requirements, mutual funds are not required to discuss fees in advertisements. Nevertheless, in practice, most of the mutual fund advertisements that we analyzed already included language that referred investors to the fund prospectus for information on fees and charges.
Mutual Fund Trading Costs Are an Additional Expense to Investors but Are Not Prominently Disclosed In addition to the expenses reflected in the expense ratio, mutual funds also incur trading costs that affect investors’ returns. Among these costs are brokerage commissions that funds pay to broker-dealers when they trade securities on a fund’s behalf. Currently, brokerage commissions are not routinely or explicitly disclosed to investors, and there have been increasing calls for disclosure as well as debate on the benefits and costs of added transparency. Brokerage Commissions Add to Investor Costs When mutual funds buy or sell securities for the fund, they may have to pay the broker-dealers that execute these trades a commission. In other cases, trades are not subject to explicit brokerage commissions but rather to markups or spreads. For example, the broker-dealers offering the stocks traded on NASDAQ are often compensated by the spread between the buying and selling prices of the securities they offer. Other trading-related costs that mutual funds can incur include potential market impact or other costs that can arise when funds seek to trade large amounts of particular securities. For example, a fund seeking to buy a large block of a particular company’s stock may end up paying higher prices to acquire all the shares it seeks because its transaction volume causes the stock price to rise while its trades are being executed. Data from mutual funds indicate that brokerage commissions and other trading costs can be significant. Estimates of the size of the brokerage commissions mutual funds pay ranged from 0.15 percent of funds’ assets to as much as 0.50 percent. Various academic studies conducted in the mid-1990s found that brokerage commissions were around 0.30 percent of a mutual fund’s total assets. For example, a study that looked at more than 1,100 stock and bond funds found that brokerage commissions for these funds averaged 0.31 percent of fund assets.
These studies also found that brokerage commissions increase as turnover—the extent to which the fund buys and sells securities—increases. In some cases, a portion of the brokerage commissions that funds pay may represent payment for research services from the executing broker-dealer. When a portion of the commission entitles the fund to such research, this amount is called “soft dollars.” One academic study estimated that mutual funds pay brokerage commissions of about $0.06 per share traded. Because individual investors trading through discount broker-dealers can trade for as little as $0.02 per share, the study’s author attributes the higher commissions paid by mutual funds—about 66 percent of the total amount per share—to charges for soft dollar research. Fund managers are allowed to engage in this practice under a provision created by the Congress in Section 28(e) of the Securities Exchange Act of 1934. In adopting this section, the Congress acknowledged the important service broker-dealers provide by producing and distributing investment research to fund managers and permitted fund managers to use commission dollars paid by managed accounts to acquire research. SEC staff told the authors of this study that funds that obtain research using soft dollars would have the opportunity to reduce their expense ratios because the fund’s manager is not incurring as many direct costs for research activities. However, this study, which looked at 240 stock funds, also found that the funds with higher expense ratios also had higher brokerage commission costs. The authors said that this could mean either that these funds are investing in stocks that are more costly to research and to trade or that the managers of these funds were less resolute about reducing their expense ratios even though they did not have to pay directly for some of the research services obtained for their funds.
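The soft-dollar estimate cited in the study above can be restated in a few lines. The per-share figures are those reported in the study, and attributing the difference to bundled research is the study author's interpretation, not ours.

```python
# Back-of-the-envelope restatement of the study's soft-dollar estimate:
# funds pay about $0.06 per share while execution-only trades through
# discount broker-dealers cost as little as $0.02 per share.
total_commission = 0.06   # per share, paid by mutual funds
execution_only = 0.02     # per share, discount-broker execution cost

# The study's author attributes the difference to soft dollar research.
research_portion = total_commission - execution_only
share = research_portion / total_commission

print(f"Implied soft-dollar portion: ${research_portion:.2f} per share "
      f"({share:.0%} of the commission)")
```

This decomposition is why the study treats roughly two-thirds of the per-share commission as payment for research rather than trade execution.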
Calls Made for Increased Disclosure of Brokerage Commissions Brokerage commissions are not disclosed in documents routinely sent to investors, and some parties have called for additional disclosures. Currently, SEC requires mutual funds to disclose the amount of brokerage commissions paid in the statement of additional information (SAI), which also includes disclosures relating to fund policies, officers and directors, and tax matters. Specifically, SEC requires funds to disclose in their SAI how transactions in portfolio securities are conducted, how brokers are selected, and how they determine the overall reasonableness of brokerage commissions. Unlike fund prospectuses or annual reports, SAIs do not have to be sent periodically to a fund’s shareholders; instead, they are filed with SEC annually and are sent to investors upon request. The amount disclosed in the SAI does not include other trading costs borne by mutual funds, such as spreads or the market impact cost of the fund’s trading. SEC staff told us that, although investors are not sent the disclosures on brokerage commissions unless they request them, funds are required to disclose their portfolio turnover in their prospectuses, which new and existing investors are routinely sent. Academics and other officials have called for increased disclosures relating to mutual fund brokerage commissions and other trading costs. In the academic studies we reviewed that looked at brokerage commission costs, the authors often urged investors to pay increased attention to such costs. For example, one study noted that investors seeking to choose their funds on the basis of expenses should also consider reviewing trading costs as relevant information. The authors of another study note that research shows that all expenses can reduce returns, so attention should be paid to fund trading costs, including brokerage commissions, and that these costs should not be relegated to disclosure only in mutual funds’ SAIs.
Others who advocated additional disclosure of brokerage commissions cited other benefits. Some officials have called for mutual funds to be required to include their trading costs, including brokerage commissions, in their expense ratios or as separate disclosures in the same documents in which they disclose their expense ratios. For example, one investor advocate noted that if funds were required to disclose brokerage commissions in these ways, funds would likely seek to reduce such expenses and investors would be better off because the costs of such funds would be similarly reduced. He also indicated that when funds are required to disclose information, competition among funds usually results in them attempting to improve their performance in the area subject to the disclosures. He explained that this could result in funds experiencing less turnover, which could also benefit investors as some studies have found that high-turnover funds tend to have lower returns than lower-turnover funds.
Millions of U.S. households have invested in mutual funds, whose value exceeds $6 trillion. The fees and other costs that these investors pay as part of owning mutual funds can significantly affect their investment returns. Recent press reports suggest that mutual fund fees have increased during the market downturn of the last few years. In addition, questions have been raised as to whether the disclosures of these fees and other costs, such as brokerage commissions, are sufficiently transparent. GAO updated the analysis from its June 2000 report, which showed trends in mutual fund fees for large funds from 1990 to 1998, by collecting data on how these 76 funds' fees changed from 1998 to 2001. GAO also reviewed the Securities and Exchange Commission's recent rule proposal on fee disclosure as well as studies by industry. Recent data indicate that mutual fund fees may have increased. Studies by the staff of the Securities and Exchange Commission (SEC) and the Investment Company Institute found that expense ratios for mutual funds overall have increased since 1980. GAO's prior analysis of large mutual funds showed that these funds' average expense ratios generally decreased between 1990 and 1998; between 1999 and 2001, however, the average ratio for the large stock funds analyzed increased somewhat while the average ratio for the large bond funds continued to decline. The average expense ratio for these large funds overall remains lower than their 1990 average. SEC is proposing that investors receive additional information about mutual fund fees in the semiannual reports sent to fund shareholders. If adopted, these new disclosures would appear to provide additional useful information to investors and would allow fees to be compared across funds.
However, various alternatives to the disclosures that SEC is proposing could provide information specific to each investor and in a more frequently distributed and relevant document to mutual fund shareholders--the quarterly account statement, which presents information on the actual number and value of each investor's shareholdings. Industry participants have raised concerns that requiring additional disclosures in quarterly statements would be costly and that the additional benefits to investors have not been quantified.
Background

According to EPA, polluted storm water runoff is a leading cause of impairment to the nearly 40 percent of surveyed U.S. water bodies that do not meet water quality standards. Pollutants in storm water can significantly impact the environmental quality of U.S. waters by destroying aquatic habitat and elevating pollutant concentrations and loadings. Storm water discharges from construction activities can increase pollutants and sediment amounts to levels above those found in undisturbed watersheds. The NPDES Program was created in 1972 under the Clean Water Act to control water pollution from point sources—any discernible, confined, and discrete conveyance. Though EPA has had authority since 1972 to regulate storm water discharges, it declined to require permits for most of these discharges for over 15 years. However, in 1987, Congress passed the Water Quality Act, which amended the Clean Water Act to require the regulation of storm water discharges. Accordingly, EPA established the NPDES Storm Water Program, which requires certain municipal, industrial, and construction sources to obtain permit coverage for storm water discharges. The storm water program was implemented in two phases:

1. Phase I, adopted in 1990, which applies to medium and large municipal separate storm sewer systems and 11 categories of industrial activity (including large construction activity disturbing 5 or more acres of land); and

2. Phase II, adopted in 1999, which applies to small municipal separate storm sewer systems and small construction activity disturbing between 1 and 5 acres of land.

The Phase II final rule was published on December 8, 1999, and required storm water dischargers to obtain permit coverage by March 10, 2003. When promulgated, EPA assumed that few, if any, oil and gas sites would be impacted by the construction component of the Phase II rule.
Subsequent to rule promulgation, EPA decided to reevaluate how many oil and gas construction sites would be subject to the rule and postponed the deadline for seeking coverage to March 10, 2005, for oil and gas construction activities disturbing between 1 and 5 acres of land. The postponement was designed to allow EPA enough time to analyze and better evaluate the impact of the permit requirements on the oil and gas industry and to reconsider how key elements of the Phase II regulations would apply to small oil and gas sites. Analyzing the impact of storm water permitting on oil and gas construction activities is important because this type of construction requires companies to undertake a number of earth-disturbing activities. These activities include clearing, grading, and excavating associated with oil and gas exploration and production; processing and treatment operations; and transmission facilities. For example, to prepare a site for drilling, operators must create a pad to support the drilling equipment, such as the derrick. Creating the pad generally requires clearing and grading—or leveling—an area and then placing rock, concrete, or other materials on it to stabilize the surface. If necessary, companies may also construct access roads to transport equipment and other materials to the site as well as additional pipelines to connect the site to existing pipelines. As with other construction activities, storm water runoff containing sediment from oil and gas construction can lead to the degradation of nearby waters if not properly managed. Figure 1 identifies activities, including oil and gas construction, covered under Phase I and II of the NPDES Storm Water Program. NPDES storm water programs are administered at both the federal and state level. Under the Clean Water Act, states whose programs EPA has approved may manage their state's programs.
Forty-five states, including Louisiana, are responsible for administering their own NPDES program, including its storm water component; and EPA is responsible for administering and enforcing the NPDES Storm Water Program in five states. In addition, EPA is the NPDES storm water permitting authority for oil and gas activities in Oklahoma and Texas. In many environmental programs, regulated entities obtain individual permits. In contrast, under the storm water program, regulated entities may seek coverage under a single document called a general permit. A general permit is issued by EPA or by the state environmental regulator and is available to all eligible operators in the EPA or state program. With respect to regulated discharges of storm water associated with construction activity, EPA’s general permit is called the Construction General Permit. Each general permit, whether it is issued by EPA or by a state program, sets forth many steps that regulated entities must take to ensure the minimization of storm water pollution. To obtain coverage under the EPA Construction General Permit, regulated entities must file a complete and accurate Notice of Intent to be covered under the general permit prior to initiating the construction activities. The Notice of Intent includes a signed certification statement from a company official acknowledging that the operator has met all eligibility conditions of the permit, including development and implementation of a plan to control the discharge of pollutants from the site. Examples of types of sediment and erosion controls that can be included in the plan consist of vegetative cover, rocks, and hay bales to filter storm water, or terracing slopes to divert and slow runoff. Figure 2 diagrams the steps that must be completed to obtain coverage under EPA’s Construction General Permit. 
One of the steps operators must complete when filing a Notice of Intent involves determining whether the construction activity meets the permit’s eligibility conditions that address endangered species. The purpose of the Endangered Species Act is to conserve endangered and threatened species and the ecosystems upon which they depend. The act prohibits the “taking” of any endangered fish or wildlife. Under the act and implementing regulations, federal agencies, including EPA, must determine whether their activities might affect a listed species or habitat identified as critical. If effects are likely, the agencies, including EPA, must consult with the Fish and Wildlife Service (FWS) or the National Marine Fisheries Service (NMFS) to ensure that the activities, such as issuing permits, will not jeopardize a species’ continued existence or adversely modify its designated critical habitat. In an effort to satisfy its responsibilities under the Endangered Species Act, EPA consulted with FWS and NMFS to create language for its Construction General Permit that requires operators to self-certify that they have examined their project’s potential effects on endangered species. Specifically, language in appendix C of EPA’s Construction General Permit sets out the procedures operators are to follow in meeting permit conditions that address endangered species for coverage under the permit. 
Briefly, the procedures in the permit require companies to:

determine if federally listed threatened or endangered species or their critical habitats are present on or near the project area;

determine if the construction activity's storm water discharges or related activities are likely to affect any threatened or endangered species or designated critical habitat on or near the project area;

determine if measures can be implemented to avoid adverse effects; and

if adverse effects are likely, work with FWS or NMFS to modify the project and/or take other actions to gain authorization for the activity.

Permit Coverage under Phase I Has Been Obtained by a Small Fraction of Total Oil and Gas Activities

A small fraction of total oil and gas construction activities have sought permit coverage under Phase I of EPA's storm water program. Industry and state officials we spoke with confirmed that few of their sites obtained permit coverage under the Phase I rule, since their activities rarely exceeded Phase I's 5-acre size threshold. However, EPA clarified that since industry decides whether to seek permit coverage for their oil and gas construction activities, the total number of activities for which permit coverage should have been obtained is unknown. EPA representatives told us they expect that pipeline projects are more likely to obtain permit coverage than individual drilling sites due to the higher visibility of pipelines, additional preconstruction approval processes under other laws, and the higher likelihood of pipeline construction being conducted by larger companies with more experienced legal and environmental staff.
Although there is currently no centralized storm water permit database that tracks storm water permit coverage nationwide, our review of Phase I storm water permit data for three major oil and gas producing states— Louisiana, Oklahoma, and Texas—confirmed that permit coverage has been obtained for only a small number of oil and gas construction activities, compared with the thousands of drilling activities occurring in those states. Our review found 433 sites in Louisiana, Oklahoma, and Texas that have obtained construction storm water permit coverage for their oil and gas activities in the most recent 12-month period for which data were available. Table 1 shows the breakdown of permit coverage by state for the most recent 12 months that data were available. Further analysis of Phase I storm water permitting data showed that the principal activity for which oil and gas companies sought storm water permit coverage in these states was for pipeline construction. Three hundred four of the 433 activities for which permit coverage was obtained in the most recent 12-month period—about 70 percent—were for pipeline construction activities. Table 2 shows the breakdown of permit coverage by state and activity. Fifty-four percent of the 304 pipeline activities in these states disturbed more than 10 acres of land. Eighty-seven pipeline activities—almost 30 percent of all the pipeline permittees—exceeded 20 acres in size. Another key oil and gas construction activity in these states was oil and gas well drilling, with 72 of the 433 permits—about 17 percent—involving drilling activities. Fifty-six percent of these drilling activities disturbed between 5 and 8 acres of land. The drilling activities for which storm water permit coverage was sought represents a small portion of the total number of oil and gas drilling activities occurring in these three states. 
We reviewed onshore well completion data for Louisiana, Oklahoma, and Texas and found that between 2001 and 2003, an average of 10,000 wells were completed each year. Table 3 provides data on the number of wells completed in these three states between 2001 and 2003 and the average number of wells completed each year over the 3-year period. Industry officials must decide whether they will apply for permit coverage, and some may have applied for storm water permit coverage on few occasions because they broke their construction activities—which taken together would exceed 5 acres—into what they believed were distinct projects that disturbed less than 5 acres each. During our site visit to a Texas gas construction location, we observed three drilling sites situated adjacent to each other with an attached pipeline. Although the total acres disturbed by these activities exceeded 5 acres, industry officials did not believe these three sites needed permit coverage because each of the four activities—three drilling sites and a pipeline—disturbed less than 5 acres, was under construction at a different time, and was stabilized prior to construction of the next activity. Figure 3 illustrates the layout of this area. Sites A, B, and C each disturbed approximately 3.5 acres of land and were connected by pipeline to an existing pipeline located about a mile from this site. According to industry officials, site A was financed, drilled, deemed a productive well, shut in, and the area stabilized prior to subsequent wells being drilled. The company did not decide to drill exploratory well B until A was identified as profitable. Once it drilled well B and found it to be profitable, the company drilled a well on site C between wells A and B. Prior to well C being drilled, a different company agreed to construct a pipeline to connect this site with an existing pipeline.
The industry officials estimated the pipeline disturbed less than 5 acres and said it was stabilized prior to starting construction on site C. The total acres disturbed by these sites exceeded 5 acres; individually the sites disturbed less than 5 acres of land. Neither the drilling company nor the pipeline company constructing these activities obtained a permit under Phase I, although each of the four activities would require permitting under Phase II after the postponement period passes and small oil and gas sites are required to comply with the Phase II rules. EPA’s Phase I rule requires that activities disturbing 5 acres or more of land—as well as smaller construction activities that are part of a common plan of development that disturbs 5 acres or more—obtain permit coverage. EPA guidance defines a common plan of development as a contiguous area where multiple separate and distinct construction activities occur under a single plan. As this definition relates to oil and gas activities, EPA guidance considers lease roads, pipeline activities, and drilling pads to be a single “common” activity if they are under construction at the same time—provided there is an interconnecting road, pipeline or utility project, or if the activities are within one-fourth mile of each other. EPA headquarters officials said that the aforementioned example highlights a unique situation in which the definition of the common plan is difficult to interpret without more information from the site operator(s). They said that depending on the operator’s reasons for drilling the second and third wells, permit coverage may or may not have been required in this example. Many oil and gas industry groups assert that EPA’s definition of “common plan” is confusing and illegal because it does not adequately consider oil and gas industry practices. These oil and gas groups have raised the issue of EPA’s definition of “common plan” in two lawsuits pending against EPA in federal courts. 
Although actual compliance rates in the field are unknown, neither EPA nor state officials reported many compliance problems associated with oil and gas construction activities that are 5 acres or more in size in Louisiana, Oklahoma, and Texas. Currently, EPA's Region 6—responsible for administering the Oklahoma and Texas storm water programs for oil and gas activities—has not completed any enforcement actions against oil and gas construction companies for violations of the storm water program, although it currently has one enforcement action under way. Region 6 enforcement officials told us they primarily depend on citizen complaints and state referrals to identify oil and gas construction activities that may adversely impact water quality. Similar to EPA Region 6's program, the Louisiana Department of Environmental Quality's (LADEQ) construction storm water inspections are complaint-driven. A Louisiana inspections representative we spoke with said that, due to the traditionally short time frames for completing oil and gas construction activities, LADEQ found including these activities in the state's annual compliance monitoring strategy to be impractical. As a result, the state relies on citizen complaints and routine surveillance to provide cause for conducting storm water inspections of construction activities. Although LADEQ does not track storm water enforcement actions for oil and gas construction separately from those of other types of construction activities, Louisiana officials with whom we spoke said they did not believe the state had carried out any storm water enforcement actions against oil and gas construction activities.
Most Oil and Gas Construction Activity Will Likely Be Affected by Phase II, but the Financial and Environmental Implications of Phase II Are Difficult to Quantify

EPA, industry, and state government representatives agree that Phase II permit coverage will be required for most oil and gas construction activities, but the actual number of activities that will be affected by the rule is unknown. In addition, the financial and environmental implications of implementing Phase II for oil and gas construction activities are difficult to quantify. Phase II may lead to increased costs for federal agencies with a role in the storm water permitting process, as well as for members of the oil and gas industry who obtain permit coverage. However, Phase II may also lead to environmental benefits for local waters and endangered species and their habitats, even though these benefits are difficult to quantify. As EPA approaches the end of a 2-year period to study the impact of Phase II on oil and gas construction activities, EPA has not yet quantified the number of sites impacted or the financial and environmental implications of the Phase II rule's implementation.

Most Oil and Gas Construction Activities Will Likely Be Required to Obtain Storm Water Permit Coverage under Phase II, but the Actual Number of Activities That Will Be Affected by the Rule Is Unknown

EPA, industry, and state government representatives agree that most oil and gas construction activities will disturb 1 acre or more of land and, as such, will have to obtain permit coverage under the Phase II rule. However, the precise number of oil and gas construction activities that will require storm water permit coverage under the Phase II rule is unknown, and estimating the specific number of sites that will be affected by Phase II is problematic because no data source comprehensively identifies the disparate oil and gas construction activities subject to the rule and categorizes them by size.
Industry representatives that we spoke with said most, if not all, of their oil and gas construction activities not covered by Phase I would be required to seek permit coverage under Phase II. These representatives said that their typical drilling construction site disturbs more than 1 acre but less than 5 acres of land. Similarly, representatives from the Oklahoma Corporation Commission and Railroad Commission of Texas indicated that almost all of the oil and gas well construction in their states would disturb over 1 acre of land and would have to obtain storm water permit coverage. Furthermore, EPA officials generally concurred that most oil and gas construction activities would need to obtain coverage, or seek a waiver, under Phase II. A company may receive an optional waiver from permit coverage in more arid areas where there is low rainfall. EPA officials told us that in arid areas, such as western Oklahoma and Texas, most operators could qualify for waivers with expeditious construction schedules and careful timing.

Phase II May Lead to Additional Costs that Are Difficult to Quantify

The Phase II rule may lead to additional costs for industry and federal agencies, but these costs are difficult to quantify. For example, the EPA Construction General Permit requires companies to implement erosion and sediment controls to minimize pollutants in storm water discharges, which will lead to additional costs for operators. Industry representatives we spoke with were less concerned with these particular costs, however, because they said that the oil and gas industry routinely takes similar preventative measures. These officials did express concerns about the costs associated with storm water inspections required by the permit.
These inspections are designed to ensure companies properly implement practices to minimize storm water pollution and require that sites be inspected (1) at least once every 7 days or (2) at least once every 14 days and within 24 hours of certain storm events. Industry officials explained that oil and gas activities typically occur in remote, rural areas, which makes it costly for them to inspect sites as required by the permit. Furthermore, since sites may not always have personnel present, these representatives said it can be difficult to determine when a storm event has occurred. EPA maintains that it has reduced the inspection burden by allowing less difficult pipeline inspections and authorizing monthly inspections under certain circumstances, such as when a site is temporarily stabilized or when winter conditions make runoff unlikely. The Phase II storm water rule may also lead to additional costs for federal agencies and the oil and gas industry associated with the endangered species requirements of the storm water permit. The EPA Storm Water Construction General Permit provides coverage under the permit only if the storm water discharges are not likely to jeopardize the continued existence of any species that is listed as endangered or threatened, pursuant to the Endangered Species Act, or result in the adverse modification or destruction of critical habitat. Because companies seeking storm water permit coverage must evaluate the impact their construction activities might have on endangered species, the workload of agencies such as the U.S. Fish and Wildlife Service (FWS) and National Marine Fisheries Service (NMFS), which are the regulatory agencies for the Endangered Species Act, could increase if a significantly larger number of sites initiated communications or consultation requests. 
NMFS headquarters representatives and FWS field representatives we spoke with indicated that the increased workload from a greater number of Phase II consultation requests could exceed staff capabilities. However, they also said they were unsure what impact Phase II would have on their activities, because they did not know how many additional oil and gas construction sites would be affected by the rule. Oil and gas industry representatives were most concerned about costs that stem from delays companies may face when identifying a construction activity’s impact on endangered species. These representatives said that endangered species reviews are often extremely time intensive and require interactions with federal agencies that introduce delays into the construction process and lead to increased costs. Various forms of interactions with FWS and NMFS (the Services) may be used to ensure that provisions of the storm water permit concerning endangered species are met—including the more common informal consultations and less frequent formal consultations. Informal consultation can be used to determine whether an activity will adversely affect endangered or threatened species or critical habitat. If during informal consultation the action agency—in this case EPA—determines that no adverse impact is likely and FWS and NMFS agree, the consultation process is terminated with the written concurrence of the Services. Although there is no regulatory deadline for completing an informal consultation, the Services’ policy is to respond to informal consultations about endangered species within 30 days. Formal consultations are necessary if an activity is likely to adversely affect a listed species. The Endangered Species Act requires most formal consultations to be conducted within 90 days. 
In addition, the implementing regulations require the Services to document in a biological opinion, within 45 days after the conclusion of the consultation, whether the activity is likely to jeopardize the listed species' continued existence or adversely modify its designated critical habitat. If necessary, the biological opinion may also provide reasonable and prudent alternatives that, if taken, would avoid jeopardizing a species or adversely modifying its critical habitat. However, the Services may postpone the start of any of these time frames until they have the best available information on which to base their opinions. The total time needed to consult with the Services is difficult to quantify, given that not all sites will have to perform the same level of review and that not all construction activities occur in areas where endangered species are present. In a March 2004 report on the overall consultation process, we identified concerns from federal agencies and nonfederal entities about the time it takes to complete the consultative process. In one limited review we conducted of 1,550 consultations, about 40 percent exceeded established time frames. However, we found that FWS and NMFS needed more complete and reliable information about the level of effort devoted to the process. Specifically, these time frames did not capture the sometimes significant amounts of preconsultation time spent discussing a project before the consultation was officially considered to have begun. Even without the requirements of EPA's Construction General Permit and associated consultations under the Endangered Species Act, operators of oil and gas construction activities would still have to spend time complying with the act by ensuring that their activities do not result in a "take" of an endangered species.
Phase II May Lead to Additional Benefits that Are Difficult to Quantify

The Phase II storm water rule may lead to additional environmental benefits, although these benefits can be difficult to quantify. Officials from EPA's Office of Water indicated that while it is difficult to quantify all the benefits associated with the rule, the principal benefits are based on decreased quantities of sediment in water. These officials told us that excess amounts of sediment in water can affect aquatic habitat, water quality, waters' use as a source of drinking water and water supply reservoir capacity, navigation, and recreational activities. According to FWS officials, construction activities may affect listed species in both direct and indirect ways. Direct effects may include killing or injuring members of listed species. Indirect effects may include changing essential behavior patterns like feeding, breeding, or sheltering, as a result of modifications to the species' habitat. Additionally, the NMFS acknowledged that land disturbance activities that increase the amount of sediment in water and turbidity can indirectly influence endangered species' productivity and ultimately cause changes in migratory behavior, reduce prey abundance, reduce the survival and emergence of larvae, and contribute to increased temperatures and chemical pollutants that can cause habitat loss. An environmental group representative we spoke with said that voluntary initiatives are not a viable method for resolving storm water pollution issues and that the permit process provides a mechanism for ensuring that practices to mitigate water pollution from construction activities are implemented. This representative also commented that EPA has not provided any evidence that the environmental consequences of oil and gas construction activities are different from those of other types of construction activities or that the oil and gas industry's controls are any better.
Finally, this representative said that a single industry should not be exempted from regulations with which other industries must comply and added that the large number of oil and gas activities potentially subject to the rule shows the significant amount of environmental damage that could occur if these activities went unregulated. EPA is currently studying the environmental impact of oil and gas construction activities but has not completed its analysis. Industry representatives, however, believe that the Phase II storm water rule provides only negligible environmental benefits and that the current system of regulation encourages environmentally friendly construction practices. For example, one industry representative stated that with only the Phase I rule in effect, companies have an incentive to keep construction activity to less than 5 acres—thus minimizing the land disturbance and associated environmental effects. If the Phase II rule were implemented as written, this representative maintained, the industry would have no incentive to minimize the acreage used in order to keep the site under 5 acres.

EPA Has Not Completed Its Assessment of the Number of Oil and Gas Sites Impacted by Phase II or Its Financial and Environmental Implications

Almost 2 years after delaying the implementation of Phase II for oil and gas activities in order to study and evaluate the impact on the industry, EPA initiated an analysis of the rule but has not completed the study, quantified the number of activities affected, or determined its potential financial and environmental implications. In March 2003, EPA extended the deadline for operators to obtain Phase II permits for oil and gas activities in order to allow itself additional time to analyze and better evaluate the impact of the rule on the oil and gas industry. This 2-year extended deadline will expire on March 10, 2005.
However, as of the completion of our audit work for this engagement, EPA had not issued any analysis of the rule’s impact, nor could EPA management representatives provide a specific estimate of when the analysis would be completed or when a final decision would be reached. We provided EPA a draft of this report containing our recommendation. Subsequently, on January 18, 2005, the agency proposed a further extension of the compliance date to June 12, 2006, to complete its review and take final action. Within 6 months of a final action on the January 18, 2005, proposal, EPA intends to propose rulemaking to address storm water discharges from oil and gas sites and invite public comment. Separate from EPA’s efforts, oil and gas industry representatives informed us of a Department of Energy (DOE) study to evaluate the impact of the Phase II rule on the oil and gas industry. During our study, officials from DOE’s Office of Fossil Energy told us that DOE’s study was still in draft form. These officials would not provide an explanation of the purpose, costs, or estimated completion date of the study. Conclusions Our review indicated that it is probable that substantially more oil and gas activities will be affected by Phase II of the NPDES storm water rule than by Phase I. Given that EPA has not been able to quantify the number of oil and gas activities required to obtain storm water permit coverage under either rule, it remains important that EPA identify the universe of oil and gas activities that would most likely be affected. This analysis would provide the necessary foundation for understanding the implications that the rule may have for the environment and the oil and gas industry and for determining the overall effectiveness of the NPDES storm water program.
Recommendation for Executive Action So that EPA may fully understand the implications of Phase II of its storm water rule prior to deciding whether the oil and gas industry should be subject to it, we recommend that EPA complete its Phase II analysis before making any final decision. Furthermore, as a part of this analysis, we recommend that EPA assess the number of oil and gas sites impacted by the Phase II rule; the costs to industry of compliance with the rule and whether these costs are solely attributable to the storm water rule; and the environmental implications and benefits of the storm water rule, including, but not limited to, potential benefits for endangered species. Agency Comments and Our Evaluation We requested comments on a draft of this report from the Administrator of the Environmental Protection Agency (EPA). EPA provided oral comments and agreed with our findings and recommendation. In addition, EPA included technical and clarifying comments, which we included in our report as appropriate. As agreed with your staffs, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the EPA Administrator and other interested parties. We will also provide copies to others on request. In addition, the report will be available at no charge at GAO’s Web site at http://www.gao.gov. Questions about this report should be directed to me at (202) 512-3841. Other key contributors to this report are listed in appendix II. Objectives, Scope, and Methodology This report provides information about (1) oil and gas construction activities for which permits have been obtained under Phase I and (2) oil and gas construction activities that are likely to be affected by Phase II and its financial and environmental implications. 
To address the number of oil and gas activities for which permits have been obtained under Phase I, we limited our analysis to three of the top five natural gas producing states and three of the top six crude oil producing states in 2003, according to available data from the Energy Information Administration. We chose states with storm water programs implemented by both state and federal authorities. Louisiana’s Department of Environmental Quality administers the National Pollutant Discharge Elimination System (NPDES) Storm Water Program for its state, while Oklahoma’s and Texas’ storm water programs for oil and gas activities are administered by the Environmental Protection Agency’s (EPA) Region 6. Additionally, Oklahoma and Texas are unique in that only the oil and gas portions of their storm water programs are administered by EPA; the remainder of their storm water programs is administered by the states. To determine the number of oil and gas construction activities requesting storm water permit coverage under Phase I in those three states and to get a national perspective on the number and types of sites affected, we spoke with oil and gas industry and government representatives. We also reviewed EPA’s (for Oklahoma and Texas) and Louisiana’s storm water databases that contain information about Notices of Intent filed with the program authority to indicate a company’s plan to begin a construction activity that disturbs 5 acres or more of land. We reviewed the most recent 12-month period of data available: EPA’s information for Oklahoma and Texas from December 2003 to November 2004 and Louisiana’s information from November 2003 to October 2004. Because the databases contained more than just oil and gas construction information, we isolated data for those companies within the oil and gas industry and reviewed relevant characteristics of those Notices of Intent.
While these data provide information about the number of companies that requested storm water permit coverage for their oil and gas construction activities, they do not indicate the universe of companies that should have filed. Furthermore, these data are not generalizable to the nation as a whole. We spoke with the administrator of this database to assess the reliability of these data and found the data from 2003 and 2004 to be sufficiently reliable for our purposes. Additionally, to provide us with context for understanding how the number of drilling activities covered by Phase I compares with the total number of oil and gas drilling activities being carried out, we reviewed oil and gas well completion data from Louisiana, Oklahoma, and Texas. These data provided us with an additional perspective about the magnitude of oil and gas activities occurring in these states and proved sufficiently reliable for our purposes. We gathered these data from the Louisiana Office of Conservation, the Oklahoma Corporation Commission, and the Railroad Commission of Texas. In order to gather information about the characteristics of oil and gas construction activities, we visited oil and gas construction sites in Louisiana, Oklahoma, and Texas and viewed pollution control measures implemented in various terrains. In Louisiana and Texas, we were accompanied by industry representatives who were members of the Domestic Petroleum Council; in Oklahoma, we were accompanied by EPA and oil and gas industry representatives. Both EPA and industry officials provided perspectives on the choice of pollution control measures implemented. We spoke with the storm water enforcement coordinator for oil and gas activities in EPA’s Region 6, as well as the state official responsible for storm water program permitting at the Louisiana Department of Environmental Quality. We discussed their respective storm water programs and strategies for enforcing the storm water regulations.
When possible, their offices provided data about enforcement actions and inspections. To determine the number of oil and gas activities that may be affected by Phase II and the financial and environmental implications of implementing Phase II for oil and gas construction activities, we spoke with storm water stakeholders, including the Natural Resources Defense Council. Finally, we spoke with oil and gas industry representatives, including the Domestic Petroleum Council, the American Petroleum Institute, and the Independent Petroleum Association of America, as well as representatives from some of these organizations’ members. These stakeholders offered contrasting views about the environmental benefits and economic costs of these regulations. We also reviewed written comments that environmental groups and oil and gas industry groups provided to EPA when the agency first proposed postponing the Phase II deadline for oil and gas activities. To formulate a more thorough understanding of the federal agencies with a role in implementing the Storm Water Program and the level of interagency coordination, we spoke with U.S. Fish and Wildlife Service and National Marine Fisheries Service officials responsible for carrying out section 7 of the Endangered Species Act, which requires federal agencies to cooperate to protect endangered species. Specifically, we spoke with representatives from the U.S. Fish and Wildlife Service’s headquarters and its Arlington, TX, and Tulsa, OK, offices, as well as with the National Marine Fisheries Service’s headquarters and southeast regional offices. We conducted our review between August 2004 and January 2005 in accordance with generally accepted government auditing standards. GAO Contacts and Staff Acknowledgments In addition to the individuals above, James W. Turkett, Paige Gilbreath, Omari Norman, Carol Bray, and Nancy Crothers made key contributions to this report.
To prevent pollutants from entering storm water runoff, the Clean Water Act's National Pollutant Discharge Elimination System Storm Water Program requires controls for construction activities that disturb land. Phase I of this program requires permitting for construction activities that disturb 5 acres or more, while Phase II requires permitting for activities disturbing between 1 and 5 acres. The Environmental Protection Agency (EPA) extended the Phase II compliance date for discharges associated with oil and gas construction activities until March 2005 to analyze the impact of Phase II on the oil and gas industry. GAO was asked to provide information about oil and gas construction activities--such as well drilling and pipeline construction--affected by Phase I and likely to be affected by Phase II, as well as Phase II's financial and environmental implications. A small fraction of total oil and gas construction activities have been permitted under Phase I of EPA's storm water program. Phase I storm water permit data for three of the six largest oil and gas producing states--Louisiana, Oklahoma, and Texas--showed that 433 construction activities were permitted under Phase I over the most recent 12 months for which data were available. About 70 percent, 304 of the 433, were oil and gas pipeline activities, most of which were much larger than the 5-acre criterion under Phase I. About 17 percent, 72 of the 433, were drilling activities. In comparison, these three states reported drilling an average of about 10,000 wells for each of the past 3 years. Industry must decide whether to seek permit coverage, and it has sought permit coverage for its drilling activities on few occasions because it has determined that most drilling activity involves distinct projects that each disturb less than 5 acres. In states we reviewed, there were few reported compliance problems associated with oil and gas construction activities.
While it appears that most oil and gas construction activities may have to be permitted under Phase II, the actual number of activities that could be affected is uncertain, and the financial and environmental implications are difficult to quantify. The oil and gas construction activities affected by the rule may lead to increased financial costs for the oil and gas industry and federal agencies implementing the rule. Many of the potential costs stem from meeting permit requirements to review the impact of construction activities on endangered species, although this impact would be site specific and difficult to quantify. Potentially offsetting these costs, the rule may lead to additional environmental protections that are difficult to quantify, such as decreased levels of sediment in water and benefits for endangered species and their habitat. After delaying implementation of this rule for oil and gas construction activities for 2 years to study the impact of Phase II, EPA is analyzing the impact but, as yet, has not quantified the number of activities affected or the potential financial and environmental implications.
Background ATF, a criminal and regulatory enforcement agency within the Department of the Treasury, is responsible for providing industry regulation; collecting revenue; and enforcing federal statutes regarding firearms, explosives, alcohol, tobacco, and arson. A critical component of ATF’s criminal enforcement mission is the tracing of firearms used in crimes to identify the last known purchaser of a firearm. To accomplish its criminal enforcement responsibilities, ATF has 22 field divisions, headed by special agents in charge, located throughout the United States. To efficiently and effectively carry out its enforcement responsibilities, ATF maintains certain computerized information on firearms and firearms purchasers. Over the years, Congress has tried to balance the law enforcement need for this information with the competing interest of protecting the privacy of firearms owners. To achieve this balance, Congress has required federal firearms licensees to provide ATF certain information about firearms transactions and the ownership of firearms while placing restrictions on ATF’s maintenance and use of such data. The Gun Control Act of 1968, as amended, established a system requiring federal firearms licensees to record firearms transactions, maintain that information at their business premises, and make these records available to ATF for inspection and search under certain prescribed circumstances. The system was intended to permit law enforcement officials to trace firearms involved in crimes while allowing the records themselves to be maintained by the licensees rather than by a governmental entity. Through the use of these records, ATF provides firearms tracing services to federal, state, local, and foreign law enforcement agencies. To carry out its firearms tracing responsibilities, ATF maintains a firearms tracing operation at the National Tracing Center in Falling Waters, West Virginia.
The Center traces firearms suspected of being involved in crimes to the last known purchaser to assist law enforcement in identifying suspects. Appendix II provides a detailed description and flowchart of ATF’s tracing operation. Since the passage of the Gun Control Act, Congress has enacted two provisions that place restrictions on ATF’s handling of federal firearms licensee records. Since fiscal year 1979, the annual Treasury appropriation act generally has prohibited ATF from using appropriated funds in connection with consolidating or centralizing the records of acquisition and disposition of firearms maintained by federal firearms licensees. In addition, a provision of the Firearms Owners’ Protection Act of 1986 (P.L. 99-308, 100 Stat. 449 (May 19, 1986)), codified at 18 U.S.C. 926(a), prohibits ATF from issuing any rule or regulation, after the date of that act, requiring that (1) firearms licensee records (or any portion of the contents of the records) be recorded at or transferred to a facility owned, managed, or controlled by the United States or any state or any political subdivision thereof or (2) any system of registration of firearms, firearms owners, or firearms transactions or dispositions be established. Further, section 926(a) provides that ATF’s authority to inquire into the disposition of a firearm during a criminal investigation is not restricted or expanded by this section. The act also limited ATF’s authority to require reports from licensees to those specified by statute and codified several reporting requirements that ATF had previously imposed on licensees by regulation, including those related to out-of-business licensee records and reports of multiple handgun (pistols and/or revolvers) sales. Scope and Methodology To address our objectives, we reviewed ATF documents and data and discussed ATF policies and operations with agency officials. 
We obtained from ATF officials descriptions of national data systems that ATF officials determined were related to firearms, including those that contained retail firearms purchaser data. Although we reviewed the descriptive data provided by ATF on the firearms-related data systems, with the exception of the Out-of-Business Records and the Multiple Sales Systems, we did not verify whether these or any other ATF data systems contained retail firearms purchaser data or observe system operations. We reviewed relevant laws and ATF regulations, legal opinions, and documents relating to ATF’s firearms tracing, out-of-business records, and multiple sale reports processing operations. We also observed and conducted some tests of these operations and discussed them with officials at ATF’s National Tracing Center. We did not review ATF’s other systems for compliance with the data restrictions. With regard to the Out-of-Business Records and Multiple Sales Systems, we did not review their compliance with other statutory requirements, such as the Privacy Act and the Computer Security Act. We reviewed relevant laws and ATF regulations, legal opinions, and other documents concerning the data restrictions. We discussed ATF’s legal interpretation of the data restrictions with ATF’s Associate Chief Counsel (Firearms and Explosives) and other headquarters and Tracing Center officials. Appendix I contains a detailed discussion of our objectives, scope, and methodology. We did our work at ATF’s headquarters in Washington, D.C., and National Tracing Center in Falling Waters, West Virginia, from August 1995 through July 1996 in accordance with generally accepted government auditing standards. We obtained comments on a draft of this report from ATF. These comments are discussed at the end of this letter and are reprinted in appendix IX. ATF officials also provided some technical comments, which we incorporated where appropriate. 
ATF Has Several Nationwide Computer Systems That Contain Retail Firearms Purchaser Data ATF collects and maintains data from the firearms industry to carry out its criminal and regulatory enforcement responsibilities more efficiently and effectively. ATF’s criminal enforcement responsibilities include investigating firearms-related crimes and tracing firearms used in crimes, and its regulatory responsibilities include regulating the manufacture and importation of firearms and licensing firearms dealers. ATF has established national data systems to maintain the data it collects from the firearms industry, including federal firearms licensees. To identify ATF national data systems that contain retail firearms purchaser data, we requested from ATF a description of its national data systems that relate to firearms. ATF identified and provided documentation on 14 national data systems and 4 subsystems relating to firearms. Appendix III provides a brief description of these systems and subsystems. ATF indicated that five systems and one subsystem contain retail firearms purchaser data. These are the (1) Firearms Tracing System and one of its three subsystems—the system dealing with multiple sale reports; (2) Firearms Tracking System; (3) Project Lead; (4) Out-of-Business Records System; and (5) National Firearms Act Database. Appendix IV provides a detailed description of the five systems and one subsystem. We reviewed the descriptive information provided by ATF to determine whether we agreed with its categorization of the data systems and subsystems. On the basis of that review and follow-up discussions with ATF officials, ATF recategorized several of the systems and subsystems. On the basis of the information provided by ATF, we agreed with its categorization of its data systems as presented in appendixes III and IV. 
ATF’s Out-Of-Business Records and Multiple Sales Systems Comply With Legislative Restrictions The Out-of-Business Records and Multiple Sales Systems, as designed, comply with the legislative data restrictions. Also, on the basis of our review, observations, and discussions with ATF officials, we believe that ATF operates the systems consistently with their design, with one exception relating to the purging of data from the Multiple Sales System, which ATF subsequently informed us it had taken action to correct. Out-Of-Business Records System Shortly after the passage of the Gun Control Act of 1968, ATF issued regulations requiring federal firearms licensees who permanently discontinued their businesses to forward their transaction records to ATF within 30 days following the discontinuance. This ensured that ATF had access to these records for its tracing operation. In 1986, the Firearms Owners’ Protection Act codified this regulatory reporting requirement. Accordingly, since the enactment of the Gun Control Act, ATF has maintained the out-of-business records at a central location, currently the National Tracing Center. Before fiscal year 1991, ATF maintained these records in hard copy in boxes, with a file number assigned to each firearms licensee. If ATF determined during a trace that a firearm had been sold by a firearms licensee who was out of business and had sent in its records, an ATF employee was to locate the boxes containing the records and manually search them for the appropriate serial number. According to ATF, this was a time-consuming and labor-intensive process, which also created storage problems for ATF. In 1991, ATF began a major project to microfilm these records and destroy the originals. In fiscal year 1992, ATF began using a minicomputer to create a computerized index of the microfilm records containing the information necessary to identify whether ATF had a record relating to a firearm being traced.
The index contains the following information: (1) the cartridge number of the microfilm; (2) an index number; (3) the serial number of the firearm; (4) the federal firearms licensee number; and (5) the type of document on microfilm, i.e., a Firearms Transaction Record (ATF Form 4473) or acquisition and disposition logbook pages. The index information that is entered into the minicomputer is stored on a database in ATF’s mainframe computer to allow searches of the index information. The other information, including the firearms purchaser’s name or other identifying information and the firearms manufacturer, type, and model, remains stored on microfilm cartridges and is not computerized. Appendix V provides a more detailed description of the Out-of-Business Records System along with pertinent statistical data. We believe that ATF’s current Out-of-Business Records System complies with the data restrictions. With regard to 18 U.S.C. 926(a), as discussed earlier, it prohibits ATF from prescribing certain rules or regulations after the date of enactment of the Firearms Owners’ Protection Act. At the same time it added the section 926(a) restriction, Congress codified at 18 U.S.C. 923(g)(4) the then-existing regulatory requirement that licensees who permanently go out of business send their records to ATF. ATF’s current regulatory requirement concerning the out-of-business records predates the Firearms Owners’ Protection Act, and is thus not subject to section 926(a). With regard to the annual appropriation rider, in our view, the Out-of-Business Records System does not violate the general prohibition on “consolidation or centralization” of firearms acquisition and disposition records. 
The regulatory requirement that licensees send these records to ATF existed before the appropriation rider was first passed for fiscal year 1979, and there is no indication in the legislative history that the rider was intended to overturn ATF’s existing practices concerning the acquisition or use of licensee information. According to ATF, the out-of-business records historically have been maintained at a central location. Moreover, the Firearms Owners’ Protection Act provided ATF with specific statutory authority to collect these records. In the legislative history of the act, there is evidence that Congress considered placing constraints on ATF’s maintenance of out-of-business records, but did not do so. The Senate-passed version of the act prohibited the Secretary of the Treasury from maintaining out-of-business records at a centralized location and from entering them into a computer for storage or retrieval. This restrictive provision was dropped from the version of the bill enacted by Congress. Lastly, in fiscal year 1992, Congress appropriated $650,000 “for improvement of information retrieval systems at the National Firearms Tracing Center.” These funds were for the microfilming of the out-of-business records. For fiscal year 1995, Congress appropriated funds for the President’s firearms initiative, which included a request for funding of the Out-of-Business Records System. Congress provided these funds in the same legislation that contained the rider restricting consolidation and centralization of licensee records. According to ATF, the system solved storage and trace timing problems, thereby enhancing ATF’s tracing capabilities. At the same time, the system does not computerize certain key information, such as firearms purchaser information. In conclusion, we believe that the system for maintaining the out-of-business records does not violate either data restriction provision. (Our legal analysis of the Out-of-Business Records System is contained in app. 
VIII.) Furthermore, on the basis of our review of the Out-of-Business Records System documentation provided by ATF, our discussions with ATF officials, and our observation of the out-of-business records process, we believe that ATF was operating the system in a manner consistent with the way it was designed. During a visit to the Tracing Center, we observed that the Out-of-Business Records System does not permit the operator to enter the name or other identifying information of any firearm purchaser, or the type or model of any firearm. Thus, we found no evidence that ATF captures and stores firearms purchasers’ names or other identifying information from the out-of-business records in an automated file. Multiple Sales System Since 1975, federal firearms licensees have been required by regulation and subsequently by law to report all transactions in which an unlicensed person has acquired two or more pistols and/or revolvers at one time or during any 5 consecutive business days (called a multiple sale). The purpose of the multiple sale reporting requirement was to enable ATF to “monitor and deter illegal interstate commerce in pistols and revolvers by unlicensed persons.” According to ATF, that purpose has remained unchanged since 1975. In an August 1993 memorandum on gun dealer licensing, the President listed a number of steps that ATF could take to ensure compliance with federal firearms licensing requirements. These steps included, among other things, increasing scrutiny of licensees’ multiple sale reports and providing automated access to those reports. In November 1995, ATF issued a new policy centralizing and computerizing multiple sale reports at its National Tracing Center. Prior to that time, ATF’s criminal enforcement field divisions maintained multiple sale reports locally.
To computerize the reports, ATF developed a Multiple Sales Subsystem as part of its Firearms Tracing System so that the reports could be entered directly into the Tracing System and used for tracing purposes. As of June 30, 1996, ATF had computerized about 91,600 multiple sale reports and associated 521 firearms traces with those reports. The head of the National Tracing Center estimated that in the future the Center will receive 130,000 multiple sale reports annually. In addition to using multiple sale reports for tracing purposes, ATF also provides multiple sale report data to its criminal enforcement field divisions through Project Lead for use in developing investigative leads, such as leads on firearms traffickers, straw purchasers, and federal firearms licensees who appear to be engaged in suspicious activity. Unlike the Out-of-Business Records System, reports entered into ATF’s computerized Multiple Sales System are retrievable by firearm purchaser name. However, as part of its November 1995 policy, ATF adopted a requirement to purge firearms purchaser data in the system that were over 2 years old if they had not been linked to firearms traces. According to the Chief of the Firearms Enforcement Division, the primary reason for purging purchaser data over 2 years old is to delete data that may not be useful because of its age. In addition, the head of the Tracing Center said that ATF is sensitive for privacy reasons about retaining firearms purchaser data that may no longer be useful. Appendix VI provides a detailed description of the multiple sale reporting requirement and the data system along with pertinent statistical data. We believe that ATF’s Multiple Sales System complies with the data restrictions. As discussed earlier, the prohibitions in section 926(a) only apply to certain rules or regulations prescribed after the enactment of the Firearms Owners’ Protection Act. 
In the same act, Congress codified the then-existing regulatory requirement that federal firearms licensees prepare these multiple sale reports and forward them to ATF. ATF’s current regulatory requirement concerning the multiple sale reports predates the Firearms Owners’ Protection Act and thus is not subject to section 926(a). With regard to the annual appropriation rider, in our view, the Multiple Sales System does not violate the general prohibition on the “consolidation or centralization” of firearms acquisition and disposition records. The requirement that licensees prepare these reports and send them to ATF existed in regulation before the first appropriation rider was passed in fiscal year 1979, and there is no indication in the legislative history that the rider was intended to overturn ATF’s existing practices concerning the acquisition or use of licensee information. Although the multiple sale reports historically have been maintained at the field level, the provisions and legislative history of the Firearms Owners’ Protection Act, which gave ATF specific statutory authority to collect these records, indicate that ATF would not be precluded from computerizing the multiple sale reports. The act requires that licensees send the reports “to the office specified” on the ATF form. Under this provision, ATF could specify that licensees forward the multiple sale reports to a central location. In addition, the legislative history of the act indicates that Congress considered placing constraints on ATF’s maintenance of multiple sale reports but did not do so. The Senate-passed version of the Firearms Owners’ Protection Act prohibited the Secretary of the Treasury from maintaining multiple sale reports at a centralized location and from entering them into a computer for storage or retrieval. This restrictive provision was dropped from the version of the bill enacted by Congress. 
Lastly, for fiscal year 1995, Congress appropriated funds to implement the President’s firearms initiative, which included plans to automate multiple sale reports. Congress provided these funds in the same legislation that contained the rider restricting consolidation and centralization of licensee records. In conclusion, we believe that the Multiple Sales System does not violate either data restriction provision. (Our legal analysis of the Multiple Sales System is contained in app. VIII.) With regard to the operation of the Multiple Sales System, on the basis of our review and observations and discussions with ATF officials, we believe that ATF was, with one exception, operating the system in a manner consistent with its design. Our test of the Multiple Sales System at the Tracing Center showed that ATF’s requirement to purge firearms purchaser data over 2 years old if not linked to firearms traces had not been fully implemented. At our request, a Tracing Center computer specialist queried the system for multiple sale records with sales dates over 2 years old. The results of this query identified 2,291 records (of the over 86,000 that had been entered) that contained purchaser data for sales over 2 years old. The computer specialist indicated that he thought multiple sale purchaser data over 2 years old had been purged during the last upgrade of the Firearms Tracing System. In July 1996, the Chief of the Firearms Enforcement Division provided us with documentation stating that the affected purchaser data had been purged from the Multiple Sales System and that future purges would be performed weekly. We did not verify whether the affected purchaser data were purged and whether weekly purges were being done. 
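The purge rule described above, deleting purchaser records whose sale dates are more than 2 years old unless they have been linked to a firearms trace, can be expressed as a simple database operation. The following is a minimal illustrative sketch only; the table layout, column names, and dates are hypothetical assumptions, as the actual Firearms Tracing System schema was not part of our review.

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical miniature version of the Multiple Sales System table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE multiple_sale (
    report_id INTEGER PRIMARY KEY,
    purchaser_name TEXT,
    sale_date TEXT,          -- ISO format YYYY-MM-DD, compares correctly as text
    linked_to_trace INTEGER  -- 1 if the report is associated with a firearms trace
)""")
conn.executemany(
    "INSERT INTO multiple_sale VALUES (?, ?, ?, ?)",
    [(1, "A", "1993-05-01", 0),   # over 2 years old, unlinked -> should be purged
     (2, "B", "1993-06-15", 1),   # over 2 years old but linked to a trace -> kept
     (3, "C", "1996-01-10", 0)],  # recent -> kept
)

def purge_stale_purchaser_data(conn, today):
    """Delete purchaser records over 2 years old that are not linked to traces."""
    cutoff = (today - timedelta(days=2 * 365)).isoformat()
    cur = conn.execute(
        "DELETE FROM multiple_sale WHERE sale_date < ? AND linked_to_trace = 0",
        (cutoff,),
    )
    return cur.rowcount  # number of records purged

purged = purge_stale_purchaser_data(conn, date(1996, 6, 30))
remaining = [r[0] for r in
             conn.execute("SELECT report_id FROM multiple_sale ORDER BY report_id")]
```

Run weekly, as ATF's corrected procedure contemplates, such a query would keep the system free of stale purchaser data while preserving any record already tied to a trace.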
In addition, ATF officials told us that while the 2-year purge requirement pertained to the Multiple Sales System at the Tracing Center, it was not being applied to multiple sale data maintained locally by ATF criminal enforcement field divisions through Project Lead. ATF had no requirement or mechanism for purging multiple sale purchaser data over 2 years old after it was received by field divisions. The Chief of the Firearms Enforcement Division told us that ATF planned to place Project Lead on its mainframe computer in about a year. At that time, ATF plans to apply the 2-year purge requirement to multiple sale data in Project Lead.

In Response to Our Review, ATF Has Adopted a Broader Interpretation of the Data Restriction in the Annual Appropriation Rider

ATF's interpretation of the data restrictions in the annual appropriation rider and 18 U.S.C. 926(a) was contained in a number of opinions and correspondence that ATF provided us during our review. Although we agreed with ATF's interpretation of 18 U.S.C. 926(a), we believed that ATF was interpreting the data restriction contained in the annual appropriation rider too narrowly. As a result, ATF would not have reviewed its data systems and information practices to ensure compliance with the broader interpretation of the rider, as discussed below. Appendix VIII contains our detailed legal analysis. In response to a draft of this report, ATF stated that it had adopted the broader interpretation of the rider, had applied it to a legal review of the systems listed in appendix IV that we did not review, and is committed to applying it to record systems it might establish in the future. Previously, ATF maintained that the restrictions in section 926(a) and the appropriation rider had the same effect, and that they were intended only to preclude rules or regulations issued after the enactment of the Firearms Owners' Protection Act that impose additional reporting requirements upon licensees.
Thus, ATF viewed the data restrictions as having no application to the agency’s internal practices, i.e., they did not restrict what ATF did with information it had acquired through reporting requirements in effect before the act or through other means, such as ATF’s criminal enforcement and regulatory activities. ATF’s interpretation relied on the language and context of section 926 and related provisions (primarily section 923), as well as the language and context of the 1979 appropriation rider, which was enacted to counter the broad reporting requirements that ATF sought to impose on licensees through the 1978 proposed rulemaking. ATF maintained that the basic effect of the Firearms Owners’ Protection Act—codifying certain former regulatory reporting requirements in section 923 and restricting the agency’s authority to prescribe certain rules and regulations in section 926—was to preempt any additional reporting requirements that the agency might impose on licensees. We agreed with ATF’s interpretation of the data restrictions as far as it went; clearly the data restrictions apply to rules or regulations that would impose additional reporting requirements upon licensees. The question was whether they have any effect beyond such reporting requirements, and, in particular, whether they restrict how ATF compiles or otherwise uses firearms transaction records once they have been acquired from licensees through current reporting requirements or other means. With regard to the restriction in section 926(a), we agree that it is limited to ATF actions in the form of prescribing rules and regulations. The appropriation rider, however, contains no language that would limit its application either to prescribing rules and regulations or to imposing additional reporting requirements on licensees. Although the original version of the rider did refer to the 1978 proposed rulemaking that would have required new reporting by licensees, it was not limited to that proposal. 
Furthermore, beginning in fiscal year 1994, the reference to the 1978 proposal was dropped from the appropriation rider, and the language of the restriction was expanded to include “any portion” of these licensee records. In our view, given its structure and language prohibiting the use of appropriations in connection with consolidating or centralizing certain firearms licensee records within the Department of the Treasury, the rider appears to encompass ATF’s internal operations. ATF’s prior legal opinions did not analyze the rider, other than to treat it as “similar to” the section 926(a) restriction. However, we believe that the appropriation rider clearly has legal effect independent of section 926. Congress enacted it for a number of years predating the Firearms Owners’ Protection Act and has continued to enact it for each subsequent year. As referred to above, the language of the appropriation rider was expanded in fiscal year 1994 to include portions of licensee firearms records. Further, there are significant differences in the language of the two provisions—most notably, the absence from the rider of any limitation on its coverage to rules or regulations. There is no indication in the legislative history of the Firearms Owners’ Protection Act that section 926 was intended to subsume or otherwise affect the appropriation rider. Therefore, in our view, ATF’s interpretation that the appropriation rider applied only to the issuance of rules and regulations that impose additional reporting requirements on licensees, and did not reach ATF’s internal information practices, was not supported by the statutory language or legislative history of the rider. Determining the extent to which the appropriation rider restricts ATF’s internal information practices posed more difficult questions. 
The appropriation rider applies to “consolidating or centralizing, within the Department of the Treasury, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees.” However, we do not believe that the rider precludes all information practices and data systems that involve an element of “consolidating or centralizing” licensee records. The legislative history of the rider indicates that it was originally enacted in response to an ATF proposal that was viewed as a wholesale aggregation of licensee firearms transaction records that went “beyond the intent of Congress when it passed the Gun Control Act of 1968.” There is no evidence in the legislative history that the rider was intended to overturn existing ATF information practices or data systems. Indeed, the Firearms Owners’ Protection Act, which amended the Gun Control Act and was enacted 8 years after the original rider was passed, reaffirmed several long-standing ATF information practices. The rider must be interpreted in light of its purpose and in the context of the other statutory provisions governing ATF’s acquisition and use of information contained in the Gun Control Act, as amended. Pursuant to the Gun Control Act, ATF is responsible for certain regulatory functions, such as licensing and monitoring firearms licensees, as well as certain law enforcement functions, such as the tracing of firearms. The act, as amended, contains specific statutory authorities that allow ATF to obtain certain firearms transaction data from licensees. Section 923 contains licensee recordkeeping and reporting authorities, as well as the authorities for ATF to conduct inspections and searches of licensee business premises for certain purposes. To implement these responsibilities and authorities, ATF necessarily gathers specific firearms transaction data, and must centralize or consolidate the data to some degree. 
However, as discussed above, section 926(a) precludes ATF from issuing rules or regulations after enactment of the Firearms Owners' Protection Act that require the establishment of any system of registration of firearms, firearms owners, or firearms transactions or dispositions. The legislative history of the Firearms Owners' Protection Act indicates a clear congressional concern that such a registry not be established. Therefore, to the extent that the centralization or consolidation of records is incident to carrying out a specific ATF responsibility and does not entail the aggregation of data on firearms transactions in a manner that would go beyond the purposes of the Gun Control Act of 1968, as amended, we do not believe that the rider would be violated.

Conclusions

ATF's Out-of-Business Records and Multiple Sales Systems comply with the data restrictions, including the restriction in the annual appropriation rider, as discussed above. However, we did not review the other data systems and subsystems ATF identified as containing firearms-related information to determine their compliance with the data restrictions. ATF's legal interpretation of the restriction in the appropriation rider was that the restriction had no application to ATF's internal information practices under any circumstances. Given this, ATF had not reviewed its data systems and information practices to determine whether they involved the type of centralization or consolidation of records that might be affected by the rider, as discussed above. Such a review would help provide assurance that the systems and subsystems we did not review currently comply with the rider.
In response to a draft of this report, ATF (1) revised its interpretation of the rider to adopt the broader interpretation we believed was appropriate; (2) applied this interpretation to a legal review of the systems, as described in appendix IV, that we did not review; and (3) stated it will apply the new interpretation to future systems. Although it found that the current systems, as described, comply with the broader interpretation, ATF did not determine whether the systems are operating as described. Such a determination would provide fuller assurance that the systems are in compliance.

Recommendation

We recommend that the Secretary of the Treasury require the Director of ATF to review ATF's firearms data systems and information practices to ensure that they comply with the appropriation rider, as discussed above; and report the results of these actions to the Subcommittee in conjunction with ATF's fiscal year 1998 budget submission.

Agency Comments and Our Evaluation

ATF provided written comments on a draft of this report. These comments are reprinted in appendix IX. Overall, ATF concurred with our findings and conclusions concerning the compliance of the Out-of-Business Records and Multiple Sales Systems with the statutory data restrictions. However, ATF disagreed with our conclusions regarding systems we did not review because it believed those systems were outside the scope of our review. Nevertheless, ATF agreed with and adopted the broader interpretation of the data restriction in the annual appropriation rider, as discussed in this report, for its existing, as well as future, systems. It also applied its revised interpretation to a legal review of the systems containing retail firearms purchaser data that we did not review and found them to be in compliance. In light of these actions, ATF requested that we reconsider our recommendation.
With regard to the issue of ATF’s interpretation of the data restriction contained in the annual appropriation rider, we concluded in our draft report that given ATF’s legal interpretation that the appropriation rider had no application to its internal information practices, ATF had not analyzed its data systems and information practices to determine whether they involved the type of centralization and consolidation of records that might be affected by the rider. Therefore, we concluded that ATF could not ensure that the systems and subsystems that we did not review complied with the rider; nor could ATF provide Congress with reasonable assurance that, in the future, its data systems, subsystems, and information practices would be in compliance with the rider, assuming that Congress continued to enact it. In commenting on our draft report, ATF stated that these assertions were speculative and ranged far beyond the scope of our review. We do not agree that our conclusion and recommendation go beyond the scope of our review. While we were asked to review only two systems’ compliance with the data restrictions, our third objective—to assess ATF’s overall legal interpretation of the data restrictions—covered all of its data systems and information practices. However, we did not suggest or imply that the ATF data systems and practices that we did not review were not in compliance with the law. Rather, our intention was to focus on the lack of assurance that ATF was providing related to its systems’ compliance with the restriction in the rider based on its narrow interpretation. 
Nevertheless, ATF stated that it “will hereafter apply GAO’s interpretation of the rider to its record systems and any future systems it might establish.” It further stated that it saw “no disadvantage to ATF in changing its position to be in conformity with the reading given by GAO since our record systems actually comply with GAO’s interpretation of the rider.” Also, as part of its written response to our draft report, ATF enclosed an August 23, 1996, opinion of the ATF Chief Counsel, whose Office reviewed those ATF data systems that contain retail firearms purchaser data (with the exception of the two systems that we reviewed) and found them to be in compliance with the rider, under the revised interpretation. The Chief Counsel’s opinion also indicated that if ATF adopted our interpretation of the rider, the Office of the Chief Counsel would, in the future, review any proposed new record system to determine compliance with the rider. We believe ATF has taken several important actions toward fulfilling the recommendation. Most notably, ATF has revised its legal analysis of the rider, applied it to the descriptions of the remaining systems that contain retail firearms purchaser information, and stated that future systems will be reviewed under the revised analysis. These actions, in our view, constitute major steps toward providing assurance that ATF is currently complying with the rider and that the agency will continue to comply with it in the future. Accordingly, we have modified our final report to reflect ATF’s actions. Although ATF has taken significant steps toward implementing our recommendation, in our view, it has not fully implemented the recommendation. ATF’s legal analysis of the description of the remaining systems that contain retail firearms purchaser information appears to apply appropriate criteria and rationale. 
In addition, the legal analysis discusses various general controls that ATF had in place and actions it had taken to help ensure that existing, as well as future, records systems and information practices comply with the law. However, it was not clear how these controls specifically applied to the systems discussed in ATF's legal analysis or whether they were used to help ensure that the systems were in compliance with the data restrictions. To fully respond to our recommendation, ATF needs to provide assurance that the systems are actually operating as they were described. Thus, we believe that ATF should perform an operational review of the systems listed in appendix IV that we did not review. Therefore, we are retaining our recommendation that ATF review these systems and report the results to the House Appropriations Subcommittee. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to the Ranking Minority Member of the Subcommittee, appropriate congressional committees, the Secretary of the Treasury, the Director of ATF, and other interested parties. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix X. If you have any questions about this report, please call me on (202) 512-8777.

Objectives, Scope, and Methodology

Because of concerns regarding ATF's compliance with the legislative restrictions on centralizing and consolidating data from federal firearms licensee records, the Chairman of the House Subcommittee on Treasury, Postal Service, and General Government, Committee on Appropriations, requested that we review ATF's compliance with the legislative restrictions on maintaining certain federal firearms licensee data.
We agreed to (1) identify and describe the ATF data systems that contain retail firearms purchaser data and (2) determine whether ATF’s Out-of-Business Records System and Multiple Sales System comply with the legislative data restrictions. We also agreed to assess ATF’s overall legal interpretation of the legislative data restrictions. To identify and describe the ATF data systems that contain retail firearms purchaser data, we obtained from ATF headquarters officials descriptions of ATF national data systems that they determined relate to firearms. We also asked ATF to identify those national data systems that contained retail firearms purchaser data. We reviewed the provided descriptive data to determine whether we agreed with ATF’s categorization of each data system. We also interviewed appropriate ATF headquarters and National Tracing Center officials to obtain additional information and clarification concerning the data systems. However, with the exception of the Out-of-Business Records and the Multiple Sales Systems, we did not independently verify the contents of the data systems because of time constraints. At the Subcommittee’s request, we focused on assessing ATF’s Out-of-Business Records System and Multiple Sales System. These systems (1) play a significant role in the firearms tracing process, (2) contain data obtained from nonlaw enforcement sources, and (3) involve large numbers of records and reports containing data on firearms transactions and purchasers. To obtain information on the firearms tracing process and the Out-of-Business Records and Multiple Sales Systems, we interviewed officials and reviewed system documentation and other data at ATF headquarters and at ATF’s National Tracing Center in Falling Waters, West Virginia. We also observed the firearms tracing, out-of-business records, and multiple sale reports processing operations at the Center and discussed these operations with Center officials. 
To address whether the Out-of-Business Records and Multiple Sales Systems were in compliance with the legislative data restrictions, we reviewed relevant laws and ATF regulations, legal opinions, and documentation on the design of these systems. We also discussed ATF's legal opinions with ATF's Associate Chief Counsel (Firearms and Explosives) and other officials. We did not review ATF's other systems for compliance with the data restrictions. With regard to the Out-of-Business Records and Multiple Sales Systems, we did not review their compliance with other statutory requirements such as the Privacy Act and the Computer Security Act. Furthermore, to determine whether ATF's actual handling of the records and reports in these systems was in accord with the systems' designs, we observed the processing and maintenance of the out-of-business records and the multiple sale reports at the Tracing Center, conducted some tests, and discussed these operations with ATF headquarters and Tracing Center officials. To determine whether ATF was implementing its requirement to purge certain firearms purchaser data from the Multiple Sales System, we conducted data entry and retrieval tests. To assess ATF's overall legal interpretation of the legislative data restrictions and their application to ATF operations, we reviewed relevant laws and their legislative histories, ATF regulations and legal opinions, and other documentation concerning the data restrictions. We also interviewed ATF's Associate Chief Counsel (Firearms and Explosives) and other ATF headquarters and Tracing Center officials.

Description of ATF's Firearms Tracing Process

The Gun Control Act of 1968, as amended, requires federal firearms licensees to record firearms transactions, maintain that information at their business premises, and make such records available to ATF for inspection and search under certain prescribed circumstances.
Through the use of these records, ATF provides firearms tracing services to federal, state, local, and foreign law enforcement agencies. ATF also uses the records for other law enforcement purposes. To carry out its firearms tracing responsibilities, ATF maintains a firearms tracing operation, located at the National Tracing Center in Falling Waters, West Virginia. With a staff of 45 as of July 1996, the Tracing Center tracks firearms suspected of being involved in crimes to assist law enforcement in identifying suspects. The Tracing Center receives trace requests by facsimile, telephone, and mail. To do a trace, the manufacturer and the serial number of the firearm must be known. The Tracing Center determines the ownership of firearms being traced by using documentation, such as out-of-business licensee records and multiple sale reports, which are maintained in ATF's national data systems, and/or by contacting manufacturers, importers, wholesalers, and retailers (i.e., firearms dealers). The objective of the trace is to identify the last known purchaser of the firearm. ATF is to document each trace request and its results and provide that information to the requester. ATF considers a request completed when it traces the firearm to a retail firearms licensee or a purchaser or when it cannot identify the purchaser for various reasons. For example, the description of the firearm as submitted by the requester may not have contained sufficient information to perform a trace. Figure II.1 provides a macro flowchart of ATF's firearms tracing process. [The flowchart, not reproduced here, steps through receipt of a trace request by fax, telephone, or mail; checks on whether the gun serial number is known, whether the licensee is active, and whether ATF or the licensee has the transaction records ("OOBR" denotes out-of-business records); and ends with identification of a purchaser or closure of the trace. The figure notes that, at various points in the process, an active and/or inactive manufacturer, importer, or licensee may be involved.] It should also be noted that not all traces are successfully completed.
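The chain-following at the heart of a trace can be sketched as a simple lookup walk from the manufacturer toward the last known purchaser. This is a hypothetical Python illustration; the transfer records, names, and data structure are invented for the example and do not reflect ATF's actual systems.

```python
# Hypothetical distribution-chain records keyed by (current holder, serial
# number); each entry names the party the firearm was transferred to next.
transfers = {
    ("Maker Co.", "SN100"): "Acme Wholesale",
    ("Acme Wholesale", "SN100"): "Main St. Guns",      # retail licensee
    ("Main St. Guns", "SN100"): "purchaser: J. Doe",   # last known purchaser
}

def trace_firearm(manufacturer, serial, transfers):
    """Follow transfer records from the manufacturer toward the last
    known purchaser; stop when no further record exists."""
    holder = manufacturer
    chain = [holder]
    while (holder, serial) in transfers:
        holder = transfers[(holder, serial)]
        chain.append(holder)
    return chain

print(trace_firearm("Maker Co.", "SN100", transfers))
```

When no further transfer record exists (for example, a licensee's records are unavailable), the walk stops short of a purchaser, mirroring traces that cannot be completed.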
Some are closed due to age of firearm, incomplete/inaccurate description of firearm, loss of licensee records, or inability to locate licensee. For fiscal years 1992 through 1995, ATF received a total of 262,984 trace requests. The number of trace requests received by ATF increased about 56 percent during this 4-year period, from 51,210 in fiscal year 1992 to 80,042 in fiscal year 1995. During this period, ATF completed a total of 243,584 traces, including those that did not result in the identification of a retail firearms licensee or purchaser. As shown in figure II.2, the number of traces completed more than doubled, from 42,980 in fiscal year 1992 to 86,215 in fiscal year 1995. During this 4-year period, ATF identified a retail firearms licensee or a purchaser of the traced firearm, on average, in about 41 percent of the completed trace requests. In fiscal year 1995, the proportion of completed trace requests resulting in the identification of retail licensees or purchasers increased to about 52 percent.

ATF's Firearms-Related Data Systems

- Tracks firearms production and exports data that are gathered annually from licensed manufacturers and exporters for regulatory enforcement. (Contains firearms data but does not identify retail firearms purchasers.)
- Tracks inspectors' assignments, certain performance measures, and workflow data within ATF's Office of Regulatory Enforcement. (Contains firearms data but does not identify retail firearms purchasers.)
- Automates the preparation of three ATF reports: Investigative Case Summary, Report of Investigation, and Property Taken Into ATF's Custody for criminal enforcement. (Contains firearms data collected for criminal investigative purposes that, in some cases, may identify retail firearms purchasers.)
- Tracks information on tax payments, including tax returns and return information, from more than 10,000 excise taxpayers for regulatory enforcement. (Contains firearms data but does not identify retail firearms purchasers.)
- Used to manage sales information about firearms and ammunition manufacturers, who are required to pay federal excise taxes, to determine whether the proper amounts of tax were paid when due for regulatory enforcement. (Contains firearms data but does not identify retail firearms purchasers.)
- Tracks applications and permits for federal firearms and explosives licenses for regulatory and criminal enforcement. (Contains firearms data but does not identify retail firearms purchasers.)
- Tracks information on the importation of firearms and explosives into the United States and their release into commerce by the U.S. Customs Service for regulatory enforcement. (Contains firearms data that identify the consignees of firearms, who, in some cases, may be retail firearms purchasers.)
- Collects and tracks data on traces of firearms suspected of being involved in a crime to assist law enforcement agencies in identifying suspects for regulatory and criminal enforcement. (Contains firearms data that identify retail firearms purchasers.)
- Collects and tracks data on firearms stolen, or missing in inventory, from federal firearms licensees' place of business for regulatory and criminal enforcement. (Contains firearms data but does not identify retail firearms purchasers.)
- Collects and tracks, for criminal enforcement purposes, information on thefts of firearms during interstate shipment between the manufacturer and the wholesaler, the wholesaler and the retailer, or retailers. (Contains firearms data that identify the consignees of firearms, who, in some cases, may be retail firearms purchasers.)
- Collects and tracks data on purchasers of two or more pistols and/or revolvers at one time or during any 5 consecutive business days for regulatory and criminal enforcement. (Contains firearms data that identify retail firearms purchasers.)
- Collects and tracks data (derived from the Firearms Tracing System) on firearms recovered in ATF's field divisions' geographic areas of responsibility for criminal enforcement. Allows ATF to analyze information concerning problem dealers, questionable purchasers, and other descriptive firearms data. (Contains firearms data that identify retail firearms purchasers.)
- Tracks all aspects of special agents' duty time by various categories, including court time, investigative time, and leave for criminal enforcement. Also maintains information on individual cases, including information on the type of case, defendants, and seizures. (Contains firearms data but does not identify retail firearms purchasers.)
- Collects and tracks data for regulatory enforcement by name and address of subject, alleged firearms excise tax violation, action taken, business type, potential leads, investigations, product detention, and reporting offices. (Contains firearms data but does not identify retail firearms purchasers.)
- Collects and tracks data from applications and forms submitted by manufacturers, dealers, and owners of machine guns, destructive devices, and certain other firearms to monitor and enforce these classes of firearms for regulatory and criminal enforcement. (Contains firearms data that identify retail firearms purchasers.)
- Collects, indexes, and retrieves microfilmed copies of firearms transaction records of federal firearms licensees who have permanently gone out of business for regulatory and criminal enforcement. (Contains firearms data that identify retail firearms purchasers.)
- Analyzes firearms data contained in the Firearms Tracing System by ATF's field divisions' geographic areas of responsibility for their use in identifying and investigating suspected firearms traffickers, "straw" purchasers, and licensees suspected of being involved in criminal activity for regulatory and criminal enforcement. (Contains firearms data that identify retail firearms purchasers.)
- Tracks the tax payment records of taxpayers in certain occupations, including manufacturers of firearms and persons dealing in commodities regulated by the National Firearms Act, who are required to pay special occupational taxes for regulatory enforcement. (Contains firearms data but does not identify retail firearms purchasers.)

The Criminal Enforcement Investigative Reports program is not a database. It is a word processing application that can be queried by case number, but not by name, to generate investigative reports. Although the consignees of firearms that are imported or shipped interstate could, in some cases, be retail purchasers, they cannot be specifically identified as such through the system alone, i.e., the system has no specific data field on retail firearms purchasers. See appendix IV for a detailed description.

ATF Data Systems That Contain Retail Firearms Purchaser Data

This appendix describes the five national data systems and one subsystem that ATF identified as containing sufficient data, or automated interfaces to related databases, to readily identify the retail purchaser or possessor of a specific firearm. The descriptions in tables IV.1 through IV.5 include data sources, data input, data location, authorized users, and security measures.

Firearms Tracing System

The Firearms Tracing System collects and tracks data on traces of firearms suspected of being involved in a crime to assist law enforcement in identifying suspects. Trace data are used by law enforcement agencies worldwide. In addition, the Firearms Tracing System contains three subsystems: (1) Interstate Theft, (2) Federal Firearms Licensees Theft, and (3) Multiple Sales. As shown in tables IV.1 and IV.1a, the overall System and the Multiple Sales Subsystem contain firearms purchaser data. According to ATF, the data collected in the Firearms Tracing System are firearms trace-specific duplicates of firearms transaction data kept by licensees pursuant to 18 U.S.C. 923(g).
Section 923(g) requires licensees to maintain firearms data at their place of business and to make the information in those records available to ATF for certain purposes. Specifically, section 923(g)(7) requires licensees to respond within 24 hours after the receipt of a request from the Secretary of the Treasury for information contained in their records as may be required for determining the disposition of one or more firearms in the course of a bona fide criminal investigation.

Table IV.1: Characteristics of the Firearms Tracing System

Data in the system include serial number, make, model, type, and caliber of firearm; trace requester's name; reasons for the trace; possessor of the weapon at the time of recovery; place of recovery; name and address of the licensee to whom the firearm was transferred; and name, address, date of birth, and place of birth of the individual purchaser. Data are maintained on a mainframe computer system at the National Data Center in Falling Waters, West Virginia. About 40 National Tracing Center personnel have complete access to the system. About 60 contractors and special agents in certain field divisions have restricted access. In addition, ATF agents and other law enforcement personnel nationwide have electronic access to the system, through the National Law Enforcement Telecommunications Network, only for purposes of submitting a trace request. The electronic data are protected by the Resource Access and Control Facility (RACF). User access privileges are defined by the RACF administrator with the permissions approved by the National Tracing Center and the user's first-line supervisor. The hard copy data are protected by provisions of ATF's Physical Security Program order.

Multiple Sales Subsystem

The Multiple Sales Subsystem collects and tracks information on purchasers of two or more pistols and/or revolvers at one time or during any 5 consecutive business days.
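The reporting trigger, two or more handguns at one time or during any 5 consecutive business days, can be sketched as a window check over a purchaser's handgun purchase dates. This Python sketch treats dates as business-day indices for simplicity; the function name and representation are illustrative assumptions, not ATF's or a licensee's actual logic.

```python
def multiple_sale_reportable(purchase_days, window=5):
    """True if any two handgun purchases by the same buyer fall within
    `window` consecutive business days (same-day purchases count too).
    Business-day indices are an illustrative simplification of the
    calendar rules a licensee would actually apply."""
    days = sorted(purchase_days)
    # Two purchases qualify when their business-day indices differ by
    # fewer than `window` days, e.g. day 10 and day 14 both fall within
    # the 5-day span covering days 10 through 14.
    return any(b - a < window for a, b in zip(days, days[1:]))

print(multiple_sale_reportable([10, 14]))  # days 10 and 14 share a 5-day window -> True
print(multiple_sale_reportable([10, 15]))  # no 5-day window holds both -> False
print(multiple_sale_reportable([10]))      # a single purchase is never reportable -> False
```

Because only one qualifying pair is needed, checking consecutive purchases in sorted order suffices; any wider pair within the window implies a closer consecutive pair.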
Data are used to conduct traces of firearms suspected of being used in crimes and to develop investigative leads as part of Project Lead, which is discussed later in this appendix. A provision of the Gun Control Act of 1968, as amended, 18 U.S.C. 923(g)(3)(A) requires federal firearms licensees to report these transactions. The implementing regulation is at 27 C.F.R. 178.126a. Table IV.1a: Characteristics of the Multiple Sales Subsystem Data in the system include the name, address, date of birth, place of birth, race, and sex of purchasers; the serial number, make, model, type, and caliber of firearms purchased; and the name, address and license number of the federal firearms licensee. Data are maintained on a mainframe computer at the National Data Center. About 40 National Tracing Center personnel have complete access to the system. About 60 contractors and special agents in certain field divisions have restricted access. The electronic data are protected by the RACF. User access privileges are defined by the RACF administrator with the permissions approved by the National Tracing Center and user’s first line supervisor. The hard copy data are protected by provisions of ATF’s Physical Security Program order. Firearms Tracking System The Firearms Tracking System was designed to enable ATF to study firearms recovered in ATF field divisions’ geographic areas of responsibility and analyze information concerning problem dealers, questionable purchasers, and other descriptive firearms data. It was developed as an investigative tool to be used by ATF field divisions. This system was designed to be an interim system and was to be replaced by Project Lead, which is discussed next. However, as of July 1996, ATF field offices could use the Firearms Tracking System, Project Lead, or both. According to ATF, the data in this system are the same as the trace data in the Firearms Tracing System. 
Therefore, the statutory authority for the data collected is the same as that for the Firearms Tracing System. Table IV.2: Characteristics of the Firearms Tracking System Data in the system include the type of firearms; name of the dealer, purchaser, and possessor of traced firearms; recovery location; type of crime; quantity of firearms in a multiple sale report; and agency and project identification. Data are maintained on either the local area network server or on a personal computer hard drive. ATF special agents in about 10 field divisions have or had access to their divisions’ systems. Only ATF personnel with authorized identifications and passwords can access data in the Firearms Tracking System. Project Lead Project Lead is currently a personal computer-based system designed to analyze firearms data contained in the Firearms Tracing System by ATF’s field divisions’ geographic areas of responsibility. The field divisions use the data to help identify and investigate suspected firearms traffickers, “straw” purchasers, and federal firearms licensees suspected of involvement in criminal activity. ATF plans to make Project Lead a mainframe system at the National Data Center. Since, according to ATF, the data in this system are obtained directly from the Firearms Tracing System, the statutory authority for the data collected is the same as that for the Firearms Tracing System. Data in the system are exact replicas of data in the Firearms Tracing System. These include data on firearms recovered, suspected of being involved in a crime, and traced by the National Tracing Center, including the purchasers’ or possessors’ names; multiple sales reported by licensees; names of individuals associated with the recovery of a firearm, e.g., names of people associated with a vehicle in which a firearm was recovered; and the recovery locations of firearms.
Data from the Firearms Tracing System are periodically downloaded to and maintained on stand-alone computers in certain ATF field divisions. Selected ATF special agents and inspectors in the field divisions have access to data that are appropriate to their geographic area of responsibility. The electronic data are protected by users’ authorized identifications and passwords built into the Project Lead application. The hard copy data are protected by provisions of ATF’s Physical Security Program order. Out-Of-Business Records System The Out-of-Business Records System was designed to collect, index, and retrieve microfilmed copies of firearms transactions records that federal firearms licensees have forwarded to ATF when the licensees permanently discontinued their business operations. The data are used to conduct traces of firearms suspected of being used in crimes. A provision of the Gun Control Act of 1968, as amended, 18 U.S.C. 923(g)(4), requires federal firearms licensees who permanently discontinue their business to forward their records to ATF within 30 days after the discontinuance. The implementing regulation is at 27 C.F.R. 178.127. Table IV.4: Characteristics of the Out-Of-Business Records System The microfilm system contains an exact photographic image of the firearms transaction record. The record contains, among other things, the name and address of the firearms purchaser. These records are indexed on a minicomputer. The index system contains, among other things, microfilm cartridge numbers, film frame numbers, and serial numbers of firearms recorded on each cartridge. The microfilm cartridges containing microfilmed records are maintained in file cabinets at the National Tracing Center. The computerized index data that are captured by a minicomputer are stored on a mainframe computer system at the National Data Center. About 100 National Tracing Center personnel and contractors have complete access to the system. 
The electronic data in the index system are protected by the RACF on the mainframe and by users’ personal identification and passwords on the minicomputer. The hard copy data are protected by provisions of ATF’s Physical Security Program order. National Firearms Act Database The National Firearms Act Database contains data on certain classes of firearms, such as machine guns and destructive devices, as defined by the National Firearms Act of 1934, 26 U.S.C. Chapter 53. (This act was recodified as Title II of the Gun Control Act of 1968.) The act requires the registration of these defined categories of weapons and requires that the Secretary of the Treasury collect transfer taxes and maintain a central registry of these firearms, which is known as the National Firearms Registry and Transfer Record. (Implementing regulations are set forth in 27 C.F.R. Part 179.) ATF uses this system to monitor these classes of firearms and to enforce the act’s requirements. Federal, state, and local law enforcement agencies use the data for criminal prosecutions. Table IV.5: Characteristics of the National Firearms Act Database Data in the system include the type, model, and serial number of firearm; the name and address of the retail purchaser or possessor; and the amount of tax paid and other accounting data, such as the date the tax was paid. Electronic data are maintained on a mainframe computer at the National Data Center. ATF captures about 40 percent of the data from the applications and forms into an electronic format. Hard copies of the applications, forms, and correspondence are maintained in ATF files or on microfilm or computer disks. They are filed at ATF headquarters or the National Archives and Records Administration warehouses in the Washington, D.C., area. Fifteen National Firearms Act Branch personnel and special agents assigned to the Enforcement Operations Center of the National Communications Center, Intelligence Division, Criminal Enforcement Program, have access to the system.
The electronic data are protected by RACF. User access privileges are defined by the RACF administrator with the permissions approved by the National Firearms Act Branch and user’s first line supervisor. The hard copy data are protected by provisions of ATF’s Physical Security Program order. Description of ATF’s Out-Of-Business Records System When firearms licensees discontinue their businesses, ATF needs access to their records for tracing purposes. To ensure that it had access to these records, shortly after the passage of the Gun Control Act, ATF issued a regulation requiring federal firearms licensees who permanently discontinued their businesses to forward their records to ATF within 30 days following the discontinuance (27 C.F.R. 178.127). The Firearms Owners’ Protection Act codified this reporting requirement (18 U.S.C. 923(g)(4)). Accordingly, since the enactment of the Gun Control Act, ATF has maintained the out-of-business records at a central location, currently the National Tracing Center. Before fiscal year 1991, ATF stored the out-of-business records in boxes with a Tracing Center file number assigned to each licensee. If during a trace ATF determined that the firearms licensee who sold the firearm was out of business and had sent in his or her records, ATF employees were to locate the boxes containing the records and manually search them for the appropriate serial number. According to ATF, this was a time-consuming and labor-intensive process, which also created storage problems for ATF. In 1991, ATF began a major project to microfilm the out-of-business records and destroy the originals. Instead of in boxes, the out-of-business records were stored on microfilm cartridges, with the firearms licensee numbers assigned to them.
Although this system occupied much less space than the hard copies of the records, ATF officials said it was still time-consuming to conduct firearms traces because employees had to examine up to 3,000 images on each microfilm cartridge to locate a record. In fiscal year 1992, ATF began using a minicomputer to create a computerized index of the out-of-business microfilm records containing the information necessary to identify whether ATF had a record relating to a firearm being traced. The index contains the following key information: (1) the cartridge number of the microfilm; (2) an index number; (3) the serial number of the firearm; (4) the federal firearms licensee number; and (5) the type of document on microfilm, i.e., a Firearms Transaction Record (ATF Form 4473) or acquisition and disposition logbook pages. The index information that is captured by the minicomputer is then stored on a database in ATF’s mainframe computer to allow searches of the index information by an employee. The other information, including the firearms purchaser’s name or other identifying information and firearms manufacturer, type, and model, remains on the microfilm cartridges and must be viewed with a microfilm reader. Since the establishment of the computerized out-of-business records index, ATF does not begin a trace by contacting firearms manufacturers and importers. Rather, it queries the Out-of-Business Records System to determine if the firearm being traced is contained in the records of an out-of-business licensee. To perform a query of the system, employees are to enter the serial number of the firearm in question into the mainframe computer’s database. If the serial number is matched with a particular out-of-business licensee record, the query will produce a list of one or more microfilm cartridges indicating the cartridge number and frame where the serial number in question may be found.
After locating the appropriate cartridges, the employee is to use the location information in the index to search the microfilm frames to locate the record containing the serial number. Since the index does not associate a firearm’s serial number with the manufacturer and type or model, the employee may need to examine several frames on one or more cartridges to locate a record. After locating the record, the employee is to examine the record to identify the purchaser of the firearm. If the identified purchaser is not another licensee, the trace is considered complete. If the purchaser is another licensee, the Tracing Center is to contact the licensee. If the serial number is not located in the out-of-business records, the Tracing Center is then to contact the manufacturer or importer to determine who purchased the firearm. According to ATF officials, the indexed Out-of-Business Records System has (1) greatly reduced the need to contact manufacturers, importers, and other licensees and (2) reduced the time and cost, including storage costs, necessary to conduct firearm traces. As shown in table V.1, during fiscal years 1992 through 1995, ATF received out-of-business records from 68,660 firearms licensees. ATF officials estimated that during this period, ATF spent about $9.6 million, including the cost of contract employees (65 as of July 1996), to process and maintain out-of-business records. According to ATF officials, ATF is receiving an increased number of records primarily because the number of licensees going out of business has increased, and more of these licensees have sent in their records. The number of licensees who have gone out of business more than doubled, from 34,663 in fiscal year 1992 to 75,569 in fiscal year 1995. About 43 percent of the licensees who went out of business in fiscal year 1995 sent in their records, compared to about 25 percent in fiscal year 1992—an increase of about 75 percent. 
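The trace workflow described above (query the computerized index by serial number, examine the indicated microfilm frames for the matching record, and fall back to the manufacturer or importer on a miss) can be sketched as follows. This is an illustrative sketch only, not ATF software; all data, field names, and the dictionary-backed microfilm stand-in are hypothetical.

```python
# Hypothetical sketch of the Out-of-Business Records System trace workflow.
# The index holds only cartridge/frame locations keyed by serial number;
# purchaser details live on microfilm, mocked here as a dictionary.

# Index: serial number -> list of (cartridge, frame, licensee number, doc type)
INDEX = {
    "SN12345": [("C-0001", 214, "1-23-456-01-2A-7890", "ATF Form 4473")],
}

# Microfilm contents, keyed by (cartridge, frame).
MICROFILM = {
    ("C-0001", 214): {"serial": "SN12345", "purchaser": "J. Doe",
                      "purchaser_is_licensee": False},
}

def trace(serial):
    """Return (outcome, purchaser) for a firearm serial number."""
    hits = INDEX.get(serial)
    if not hits:
        # Not in out-of-business records: contact the manufacturer/importer.
        return ("contact_manufacturer_or_importer", None)
    for cartridge, frame, ffl, doc_type in hits:
        record = MICROFILM.get((cartridge, frame))
        if record and record["serial"] == serial:
            if record["purchaser_is_licensee"]:
                # Purchaser is another licensee: continue the trace with them.
                return ("contact_next_licensee", record["purchaser"])
            return ("trace_complete", record["purchaser"])
    return ("contact_manufacturer_or_importer", None)

print(trace("SN12345"))  # -> ("trace_complete", "J. Doe")
print(trace("SN99999"))  # -> ("contact_manufacturer_or_importer", None)
```

The two-stage design mirrors the report's description: only location data is computerized, so the purchaser's identity is recovered from the (mocked) microfilm, not from the index itself.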
ATF officials estimated that during fiscal years 1992 through 1995, ATF microfilmed about 47 million documents contained in about 20,000 boxes. Although ATF does not systematically collect data on the number of traces involving out-of-business records, ATF officials estimated that ATF used the out-of-business records to help complete about 42 percent of all completed trace requests during this period. ATF had no information on the number of completed traces that identified retail firearms licensees or purchasers and involved the use of out-of-business records. Description of ATF’s Multiple Sales System Since 1975, federal firearms licensees have been required by regulation (27 C.F.R. 178.126a) and subsequently by law (18 U.S.C. 923(g)(3)(A)) to report all transactions in which an unlicensed person has acquired two or more pistols and/or revolvers at one time or during any 5 consecutive business days (referred to as a multiple sale). As ATF stated at the time the regulation was issued, the purpose for requiring multiple sale reports was to enable ATF to “monitor and deter illegal interstate commerce in pistols and revolvers by unlicensed persons.” The Firearms Owners’ Protection Act of 1986 codified the multiple sale regulatory reporting requirement. Also, in November 1993, under Title II of Public Law 103-159, federal firearms licensees were required to send a copy of the multiple sale report to the state or local law enforcement agency in whose jurisdiction the sale or other disposition took place. Under 18 U.S.C. 923(g)(3)(A), ATF can specify the multiple sale reporting form and designate the ATF office where the report is to be sent. Currently, federal firearms licensees are required to send multiple sale reports to the ATF criminal enforcement field division located in their respective area. Reports are to be sent no later than the close of business on the day that the multiple sale occurs. 
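The reporting trigger described above, two or more pistols and/or revolvers acquired by the same unlicensed person at one time or during any 5 consecutive business days, amounts to a sliding-window check. A minimal sketch, with business days simplified to integer day indices and all purchaser names hypothetical:

```python
# Illustrative sliding-window check for the multiple-sale reporting trigger.
from collections import defaultdict

def multiple_sale_reports(sales, window=5):
    """sales: list of (purchaser, business_day, handgun_count).
    Returns the purchasers whose pistol/revolver total reaches 2
    within any `window` consecutive business days."""
    by_purchaser = defaultdict(list)
    for purchaser, day, count in sales:
        by_purchaser[purchaser].append((day, count))
    reportable = set()
    for purchaser, purchases in by_purchaser.items():
        purchases.sort()
        for start_day, _ in purchases:
            # Total handguns in the window beginning at this purchase.
            total = sum(c for d, c in purchases
                        if start_day <= d < start_day + window)
            if total >= 2:
                reportable.add(purchaser)
                break
    return reportable

sales = [
    ("A", 1, 2),                # two revolvers at one time -> reportable
    ("B", 1, 1), ("B", 4, 1),   # one pistol on day 1, one on day 4 -> reportable
    ("C", 1, 1), ("C", 8, 1),   # purchases more than 5 business days apart
]
print(sorted(multiple_sale_reports(sales)))  # -> ['A', 'B']
```

Purchaser C's two handguns fall outside any common 5-business-day window, so no report is triggered, matching the statutory rule quoted above.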
In addition, licensees are required by regulation to retain a copy of the multiple sale reports. Before November 1995, ATF required that multiple sale reports be maintained by its field divisions. According to ATF, these divisions in the past maintained multiple sale reports in a variety of ways: some used local computer information tracking systems, others used alphabetical card files, and before 1987 some used the Department of the Treasury’s Enforcement Communications System (TECS), a law enforcement data system that includes centralized databases used by Treasury and other law enforcement agencies. According to ATF policy, field divisions are to use multiple sale reports to develop investigative leads for those persons who engage in business as unlicensed firearms dealers or who transport or sell firearms illegally in interstate commerce. These reports are to provide an investigative tool to identify traffickers and other violators of federal firearms laws. In an August 1993 memorandum on gun dealer licensing, the President listed a number of steps that ATF could take to ensure compliance with federal firearms licensing requirements, including increasing scrutiny of licensees’ multiple sale reports and providing automated access to those reports. Further, according to ATF, plans to automate multiple sale reports were included in the President’s firearms initiative, and Congress appropriated fiscal year 1995 funds to implement the initiative. In November 1995, ATF began implementing a new policy to process and computerize multiple sale reports at its National Tracing Center. Field divisions were instructed to forward their multiple sale reports to the Tracing Center for processing. ATF’s decision to computerize multiple sale reports at the Tracing Center was based on a test conducted from June through October 1995. 
In June 1995, to provide the capability for computerizing multiple sale reports, ATF upgraded its Firearms Tracing System by developing a Multiple Sales Subsystem. This system allows the entry of multiple sale information directly into the Firearms Tracing System. ATF then tested the system by entering multiple sale reports forwarded by three field divisions. Following successful completion of the test, ATF issued its new policy. ATF decided to computerize the handling of multiple sale reports at the Tracing Center for several reasons. First, by computerizing the reports as part of ATF’s Firearms Tracing System, multiple sale information became readily available for firearms traces. When maintained locally by field divisions, multiple sale reports were not readily available for firearms tracing. Second, and most important, according to the Chief of the Firearms Enforcement Division, by entering multiple sale information into the Firearms Tracing System, the information would be available to field divisions through Project Lead. ATF, through Project Lead, provides monthly firearms trace information along with multiple sale and other data on computer diskettes to its field divisions to develop investigative leads. Once the Tracing Center receives multiple sale reports from field divisions, the information is entered into the Multiple Sales System. The data entered include (1) purchaser information such as name, address, date of birth, place of birth, race, and sex; (2) firearms identification information, including serial numbers of pistols and/or revolvers purchased; and (3) federal firearms licensee identification information. Multiple sale data in the system are retrievable by purchaser name and firearm serial number. After the information is entered into the system, the multiple sale reports are to be microfilmed, and the original reports are to be destroyed.
The current ATF Form 3310.4—Report of Multiple Sale or Other Disposition of Pistols and Revolvers—showing the information requested of licensees is reproduced in appendix VII. The Tracing Center has been processing multiple sale reports received from field divisions since June 1995. Through June 1996, the Tracing Center had entered data on 91,599 multiple sale reports and had 521 firearms traces linked to multiple sale reports. The Special Agent in Charge of the National Tracing Center estimated that in the future the Center will receive 130,000 multiple sale reports annually. This official estimated that each multiple sale report includes an average of 2.3 firearms. Table VI.1 provides monthly data on the number of multiple sale reports processed at the Tracing Center along with the number of firearms linked to multiple sale reports. N/A = ATF did not maintain data. As part of its November 1995 policy to computerize multiple sale reports at the Tracing Center, ATF included a requirement for purging firearms purchaser data not identified in firearms traces. The requirement calls for purging the purchaser data not identified in a trace 2 years after the date of the sale. The remainder of the data entered for each multiple sale, such as firearms descriptive data, is not to be purged and is to remain in the system to be used as investigative intelligence. In contrast, all multiple sale data identified in firearms traces, including purchaser data, are not to be purged from the system. According to the Chief of the Firearms Enforcement Division, the primary reason for purging purchaser data over 2 years old is to delete data that may not be useful. The Special Agent in Charge of the National Tracing Center said that ATF is sensitive, for privacy reasons, to retaining firearms purchaser data that may have lost its utility.
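The purge requirement described above reduces to a simple retention rule: purchaser data in records not linked to a trace is deleted 2 years after the sale date, while descriptive firearms data and trace-linked records are retained. A minimal sketch, with hypothetical record fields and a 730-day approximation of the 2-year period:

```python
# Illustrative sketch of the multiple-sale purge policy; record fields and
# the 730-day approximation of "2 years" are assumptions for this example.
from datetime import date, timedelta

PURGE_AFTER = timedelta(days=730)  # roughly the 2-year retention period

def purge(records, today):
    for rec in records:
        expired = today - rec["sale_date"] > PURGE_AFTER
        if expired and not rec["linked_to_trace"]:
            rec["purchaser"] = None  # purchaser data purged
        # Descriptive firearms data always remains as investigative
        # intelligence; trace-linked records are kept in full.
    return records

records = [
    {"sale_date": date(1993, 1, 5), "linked_to_trace": False,
     "purchaser": "R. Roe", "firearm": "revolver SN777"},
    {"sale_date": date(1993, 1, 5), "linked_to_trace": True,
     "purchaser": "J. Doe", "firearm": "pistol SN888"},
]
purge(records, today=date(1996, 1, 5))
print([r["purchaser"] for r in records])  # -> [None, 'J. Doe']
```

Note that only the purchaser field is cleared; the firearm description survives in both records, as the policy specifies.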
Report of Multiple Sale or Other Disposition of Pistols and Revolvers Legal Analysis of Statutory Restrictions Concerning Federal Firearms Licensee Data We were asked to evaluate ATF’s interpretation of data restrictions contained in 18 U.S.C. 926(a) and in a rider to its annual appropriation act. We have reviewed the relevant laws, legislative history, and ATF legal opinions and met with ATF lawyers regarding their interpretation. Set forth below are the relevant statutory provisions, our analysis of ATF’s interpretation of these provisions, and our analysis of the application of these restrictions to the two ATF data systems that we reviewed in detail, the Out-of-Business Records System and the Multiple Sales System. Although we agree with ATF’s interpretation of 18 U.S.C. 926(a), we believe that ATF’s interpretation of the restriction in its annual appropriation was too narrow. However, we found that the two data systems that we reviewed did not violate the data restrictions. Background The Gun Control Act of 1968 established a system requiring federal firearms licensees to record firearms transactions, maintain that information at their business premises, and make such records available to ATF for inspection and search under certain prescribed circumstances. This system was intended to permit law enforcement officials to trace firearms involved in crimes while allowing the records themselves to be maintained by the licensees rather than by a governmental entity. As originally enacted, the Gun Control Act required licensees to submit such reports and information as the Secretary of the Treasury prescribed by regulation and authorized the Secretary to prescribe such rules and regulations as deemed reasonably necessary to carry out the provisions of the act. 
The Appropriation Rider The original rider, first included in ATF’s fiscal year 1979 appropriation, provided “[t]hat no funds appropriated herein shall be available for administrative expenses in connection with consolidating or centralizing within the Department of the Treasury the records of receipt and disposition of firearms maintained by Federal firearms licensees or for issuing or carrying out any provisions of the proposed rules of the Department of the Treasury, Bureau of Alcohol, Tobacco and Firearms, on Firearms Regulations, as published in the Federal Register, volume 43, number 55, of March 21, 1978.” The accompanying committee report explained the rider’s purpose: “The Bureau of Alcohol, Tobacco, and Firearms (BATF) has proposed implementation of several new regulations regarding firearms. The proposed regulations, as published in the Federal Register of March 21, 1978, would require: “(1) A unique serial number on each gun manufactured or imported into the United States. “(2) Reporting of all thefts and losses of guns by manufacturers, wholesalers and dealers. “(3) Reporting of all commercial transactions involving guns between manufacturers, wholesalers and dealers. “The Bureau would establish a centralized computer data bank to store the above information. It is important to note that the proposed regulations would create a central Federal computer record of commercial transactions involving all firearms—whether shotguns, rifles, or handguns. There are approximately 168,000 federally licensed firearms dealers, manufacturers, and importers. It is estimated that the proposed regulations would require submission of 700,000 reports annually involving 25 million to 45 million transactions. “It is the view of the Committee that the proposed regulations go beyond the intent of Congress when it passed the Gun Control Act of 1968.
It would appear that BATF and the Department of Treasury are attempting to exceed their statutory authority and accomplish by regulation that which Congress has declined to legislate.” While the reference to the 1978 proposed rules was later dropped, the general prohibition against the centralization or consolidation of records has been included in each of ATF’s annual appropriations since fiscal year 1979. The fiscal year 1996 appropriation rider prohibits the consolidation or centralization of “the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees” within the Department of the Treasury. 18 U.S.C. 926(a) “(a) The Secretary may prescribe only such rules and regulations as are necessary to carry out the provisions of this chapter, including—“(1) regulations providing that a person licensed under this chapter, when dealing with another person so licensed, shall provide such other licensed person a certified copy of this license; “(2) regulations providing for the issuance, at a reasonable cost, to a person licensed under this chapter, of certified copies of his license for use as provided under regulations issued under paragraph (1) of this subsection; and “(3) regulations providing for effective receipt and secure storage of firearms relinquished by or seized from persons described in subsection (d)(8) or (g)(8) of section 922. “No such rule or regulation prescribed after the date of the enactment of the Firearms Owners’ Protection Act may require that records required to be maintained under this chapter or any portion of the contents of such records, be recorded at or transferred to a facility owned, managed, or controlled by the United States or any State or any political subdivision thereof, nor that any system of registration of firearms, firearms owners, or firearms transactions or dispositions be established. 
Nothing in this section expands or restricts the Secretary’s authority to inquire into the disposition of any firearm in the course of a criminal investigation.” (Emphasis supplied.) This data restriction was one of several amendments to the Gun Control Act made by FOPA to limit ATF’s authority over licensees and their records. For example, FOPA amended section 923 of title 18 to provide that licensed “importers, manufacturers, and dealers shall not be required to submit to the Secretary reports and information with respect to such records [they are required to maintain] and the contents thereof, except as expressly required by this section.” The act went on to codify in section 923 several reporting requirements that ATF previously had imposed on licensees by regulation, including those related to out-of-business licensee records and reports of multiple handgun sales. ATF’s Legal Interpretation ATF’s interpretation of the data restrictions in the annual appropriation rider and section 926(a) was contained in a number of opinions and correspondence that ATF provided to us during the course of the audit. These opinions generally address whether the data restrictions prohibit the establishment of a specific data system or apply to information gathered during the course of ATF audits of a licensee’s compliance with recordkeeping requirements. On the basis of its interpretation of the two provisions, as set forth below, ATF concluded in each instance that the provisions did not apply to the systems or information collections at issue. 
Essentially, ATF maintained that the restrictions in section 926(a) and the appropriation act rider have the same effect, and that they only were “intended to preclude future [post-FOPA] regulations imposing additional reporting requirements upon licensees.” Thus, ATF viewed the data restrictions as having no application to the agency’s internal information practices—i.e., they did not restrict what ATF did with information it had acquired from licensees through pre-FOPA reporting requirements or other means. ATF’s interpretation relied on the language and context of section 926 and related provisions (primarily section 923), as well as the language and context of the 1979 appropriation rider, which was enacted to counter the broad reporting requirements that ATF sought to impose on licensees through the 1978 proposed rulemaking. ATF maintained that the basic effect of FOPA—codifying certain former regulatory reporting requirements in section 923 and restricting the agency’s authority to prescribe rules and regulations in section 926—was to preempt any additional reporting requirements that the agency might impose on licensees. ATF also cited the principle of deference to be accorded an agency’s interpretation of laws that it administers. Analysis of ATF’s Interpretation We agreed with ATF’s interpretation of the data restrictions as far as it went; clearly the data restrictions apply to rules or regulations that would impose additional reporting requirements upon licensees. The question was whether they have any effect beyond such reporting requirements, and, in particular, whether they restrict how ATF compiles or otherwise uses firearms transaction records once they have been acquired from licensees through current reporting requirements or other means. With regard to the restriction in section 926(a), we agree with ATF that it is limited to prescribing rules and regulations. 
The appropriation rider, however, contains no language that would limit its application either to prescribing rules and regulations or to imposing additional reporting requirements on licensees. The original version of the rider did refer to the 1978 proposed rulemaking, but it was not limited to that proposal. The reference to the 1978 proposal was dropped in fiscal year 1994. Moreover, given its structure and language—prohibiting the use of appropriations in connection with consolidating or centralizing certain records within the Department of the Treasury—the rider appears to encompass ATF’s internal operations. The ATF opinions we reviewed did not analyze the appropriation rider, other than to treat it as “similar to” the section 926 restriction. However, we believe that the appropriation rider clearly has legal effect independent of section 926. Congress enacted it for a number of years predating FOPA and has continued to enact it for each subsequent year. In fact, the language of the appropriation rider was expanded in fiscal year 1994 to include portions of licensee firearms records. There also are significant differences in the language of the two provisions—most notably, the absence from the rider of any limitation on its coverage to rules or regulations. Also, there is no indication in the legislative history of FOPA that section 926 was intended to subsume or otherwise affect the appropriation rider. Finally, the appropriation rider was originally intended to prevent ATF from obtaining and computerizing large volumes of information on firearms transactions in a manner that was viewed as an attempt to “accomplish by regulation that which Congress has declined to legislate.” It would be incongruous to conclude that the appropriation rider would not reach efforts to accomplish the same result through means other than regulatory requirements imposed on licensees.
Therefore, in our view, ATF’s interpretation that the appropriation rider applied only to the issuance of rules and regulations that impose additional reporting requirements on licensees, and did not reach ATF’s internal information practices, was not supported by the statutory language or legislative history of the rider. Determining the extent to which the appropriation rider restricts ATF’s internal information practices poses more difficult questions. The appropriation rider applies to “consolidating or centralizing, within the Department of the Treasury, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees.” However, we do not believe that the rider precludes all information practices and data systems that involve an element of “consolidating or centralizing” licensee records. As discussed above, the legislative history of the rider indicates that it was originally enacted in response to an ATF proposal that was viewed as a wholesale aggregation of licensee firearms transaction records that went “beyond the intent of Congress when it passed the Gun Control Act of 1968.” There is no evidence in the legislative history that the rider was intended to overturn existing ATF information practices or data systems. Indeed, FOPA, which amended the Gun Control Act and was enacted 8 years after the original rider was passed, reaffirmed several long-standing ATF information practices. The rider must be interpreted in light of its purpose and in the context of the other statutory provisions governing ATF’s acquisition and use of information contained in the Gun Control Act, as amended. Pursuant to the Gun Control Act, ATF is responsible for certain regulatory functions, such as licensing and monitoring firearms licensees, as well as certain law enforcement functions, such as the tracing of firearms. 
The act, as amended, contains specific statutory authorities that allow ATF to obtain certain firearms transaction information from licensees. Section 923 contains licensee recordkeeping and reporting authorities, as well as the authorities for ATF to conduct inspections and searches of licensee business premises for certain purposes. To implement these responsibilities and authorities, ATF necessarily gathers specific firearms transaction data and must centralize or consolidate the data to some degree. The committee report accompanying FOPA, however, cautioned:

“the Committee wishes to emphasize that, notwithstanding any other provision of law, the authority granted under 18 U.S.C. 923(g)(3), (4) and (5), as well as that contained in paragraph (1), as amended, are not to be construed to authorize the United States or any state or political subdivision thereof, to use the information obtained from any records or forms which are required to be maintained for inspection or submission by licensees under Chapter 44 to establish any system of registration of firearms, firearms owners, or firearms transactions or dispositions.”

Therefore, to the extent that the centralization or consolidation of records is incident to carrying out a specific ATF responsibility and does not entail the aggregation of data on firearms transactions in a manner that would go beyond the purposes of the Gun Control Act of 1968, as amended, we do not believe that the rider would be violated.

ATF’s Out-Of-Business Records System and Multiple Sales System Comply With Data Restrictions

We reviewed two ATF data systems, the Out-of-Business Records System and the Multiple Sales System, to determine if they comply with the data restrictions. We found that the systems do not violate the restrictions.

Out-Of-Business Records System

Shortly after the passage of the Gun Control Act of 1968, ATF issued regulations requiring firearms licensees who permanently discontinued their businesses to forward their records to ATF within 30 days following the discontinuance.
In 1986, FOPA codified this regulatory reporting requirement. According to ATF, prior to 1991 the out-of-business records were maintained at a central location in boxes, with a file number assigned to each firearms licensee. If ATF determined during a trace that a firearm had been sold by a firearms licensee who was out of business, an ATF employee manually searched the records for the appropriate serial number. According to ATF, this was a time-consuming and labor-intensive process, and the volume of records created storage problems. In 1991, ATF began a major project to microfilm these records. In fiscal year 1992, ATF established a computerized index of the microfilm records. The index contains the following information: (1) the cartridge number of the microfilm, (2) an index number, (3) the serial number of the firearm, (4) the federal firearms licensee number, and (5) the type of document on microfilm. The other information on the microfilm frames, including the firearms purchaser’s name or other identifying information, remains stored on the microfilm and is not computerized. The Out-of-Business Records System is described in detail in appendix V. We believe that ATF’s Out-of-Business Records System does not violate the data restrictions. As noted previously, 18 U.S.C. 926(a) prohibits ATF from prescribing certain rules or regulations after the date of enactment of FOPA. At the same time it added the section 926(a) restriction, Congress codified at 18 U.S.C. 923(g)(4) the then-existing regulatory requirement that licensees who permanently go out of business send these records to ATF. ATF’s current regulatory requirement concerning the out-of-business records predates FOPA and, thus, is not subject to section 926(a). With regard to the annual appropriation rider, in our view, the Out-of-Business Records System does not violate the general prohibition on “consolidation or centralization” of firearms acquisition and disposition records. 
The regulatory requirement that licensees send these records to ATF existed before the appropriation rider was first passed for fiscal year 1979, and there is no indication in the legislative history that the rider was intended to overturn ATF’s existing practices concerning the acquisition or use of licensee information. According to ATF, the out-of-business records historically have been maintained at a central location. Moreover, FOPA provided ATF with specific statutory authority to collect these records. In the legislative history of FOPA, there is evidence that Congress considered placing constraints on ATF’s maintenance of out-of-business records but did not do so. The Senate-passed version of FOPA prohibited the Secretary of the Treasury from maintaining out-of-business records at a centralized location and from entering them into a computer for storage or retrieval. This restrictive provision was dropped from the version of the bill enacted by Congress. Lastly, in fiscal year 1992, Congress appropriated $650,000 “for improvement of information retrieval systems at the National Firearms Tracing Center.” These funds were for the microfilming of the out-of-business records. For fiscal year 1995, Congress appropriated funds for the President’s firearms initiative, which included a request for funding of the Out-of-Business Records System. Congress provided these funds in the same legislation that contained the rider restricting consolidation and centralization of licensee data. According to ATF, the system solved storage and trace timing problems, thereby enhancing ATF’s tracing capabilities. At the same time, the system does not computerize certain key information, such as firearms purchaser information. In conclusion, we believe that the system for maintaining the out-of-business records does not violate either data restriction provision. 
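The structure of the computerized index described in this section can be illustrated with a short sketch. This is an assumption-laden illustration, not ATF's actual system: the class name, field names, and sample values (such as the cartridge and licensee numbers) are hypothetical. The key design point from the report is that the index holds only locator data for the microfilm frames; purchaser-identifying information stays on the microfilm and is not computerized.

```python
from dataclasses import dataclass

# Illustrative sketch of one entry in the computerized microfilm index.
# All names and values are hypothetical; note that purchaser information
# is deliberately absent -- it remains only on the microfilm itself.
@dataclass(frozen=True)
class MicrofilmIndexEntry:
    cartridge_number: str  # which microfilm cartridge holds the frame
    index_number: int      # frame position within that cartridge
    serial_number: str     # serial number of the firearm
    ffl_number: str        # federal firearms licensee number
    document_type: str     # type of document imaged on the frame

def lookup(index: list[MicrofilmIndexEntry], serial: str) -> list[MicrofilmIndexEntry]:
    """Return index entries matching a firearm serial number, pointing a
    researcher to the microfilm frames that must still be pulled manually."""
    return [entry for entry in index if entry.serial_number == serial]

entries = [
    MicrofilmIndexEntry("C-0001", 14, "SN12345", "9-99-999-01-2A-34567", "acquisition/disposition record"),
    MicrofilmIndexEntry("C-0002", 7, "SN99999", "9-99-999-01-2A-76543", "acquisition/disposition record"),
]
hits = lookup(entries, "SN12345")
assert [e.cartridge_number for e in hits] == ["C-0001"]
```

Under this design, a trace search replaces the earlier manual box-by-box hunt with a fast index lookup, while the sensitive content of each record stays off the computer.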
Multiple Sales System

Since 1975, federal firearms licensees have been required by regulation and subsequently by law to report all transactions in which an unlicensed person has acquired two or more pistols and/or revolvers at one time or during any 5 consecutive business days (referred to as a multiple sale). The purpose of the multiple sale reporting requirement was to enable ATF to “monitor and deter illegal interstate commerce in pistols and revolvers by unlicensed persons.” According to ATF, the multiple sale reports have historically been maintained at local ATF field divisions. In November 1995, ATF issued a new policy to centralize and computerize multiple sale reports at its National Tracing Center. The Multiple Sales Subsystem is described in detail in appendix VI. We believe that ATF’s Multiple Sales System complies with the data restrictions. As discussed earlier, the prohibitions in section 926(a) only apply to certain rules or regulations prescribed after the enactment of FOPA. In the same act, Congress codified the then-existing regulatory requirement that federal firearms licensees prepare these multiple sale reports and forward them to ATF. ATF’s current regulatory requirement concerning the multiple sale reports predates FOPA and, thus, is not subject to section 926(a). With regard to the annual appropriation rider, in our view, the Multiple Sales System does not violate the general prohibition on the “consolidation or centralization” of firearms acquisition and disposition records. The requirement that licensees prepare these reports and send them to ATF existed in regulation before the first appropriation rider was passed in fiscal year 1979, and there is no indication in the legislative history that the rider was intended to overturn ATF’s existing practices concerning the acquisition or use of licensee information.
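The reporting trigger described at the start of this section has a simple logical structure: two or more handgun acquisitions by one unlicensed person either at one time or within any window of 5 consecutive business days. The sketch below illustrates that rule only; it is a hypothetical simplification (dates reduced to business-day ordinals, one buyer at one licensee), not ATF's or any licensee's actual compliance logic.

```python
# Minimal sketch of the multiple-sale reporting trigger: a report is due
# when an unlicensed person acquires two or more pistols and/or revolvers
# at one time or during any 5 consecutive business days. Purchases are
# simplified here to business-day ordinals for a single buyer; this is an
# illustration of the rule's logic, not an actual implementation.
def multiple_sale_triggered(handgun_purchase_days: list[int]) -> bool:
    days = sorted(handgun_purchase_days)
    for i in range(len(days) - 1):
        # Two acquisitions fall inside a 5-business-day window exactly when
        # the later one is at most 4 business days after the earlier one
        # (a window covering days d through d+4). Same-day purchases give
        # a difference of 0, covering the "at one time" case.
        if days[i + 1] - days[i] <= 4:
            return True
    return False

assert multiple_sale_triggered([10, 10])      # two handguns at one time
assert multiple_sale_triggered([10, 14])      # days 10 and 14 share a 5-day window
assert not multiple_sale_triggered([10, 15])  # 5 business days apart: no shared window
assert not multiple_sale_triggered([10])      # a single purchase never triggers
```

Checking only adjacent purchases in sorted order suffices, because if any two purchases fall within the window, the closest-spaced adjacent pair does too.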
Although the multiple sale reports historically have been maintained at the field level, the provisions and legislative history of FOPA, which gave ATF specific statutory authority to collect these records, indicate that ATF would not be precluded from computerizing the multiple sale reports. FOPA requires that licensees send the reports “to the office specified” on the ATF form. Under this provision, ATF could specify that licensees forward the multiple sale reports to a central location. In addition, the legislative history of the act indicates that Congress considered placing constraints on ATF’s maintenance of multiple sale reports but did not do so. The Senate-passed version of FOPA prohibited the Secretary of the Treasury from maintaining multiple sale reports at a centralized location and from entering them into a computer for storage or retrieval. This restrictive provision was dropped from the version of the bill enacted by Congress. Lastly, for fiscal year 1995, Congress appropriated funds to implement the President’s firearms initiative, which included plans to automate multiple sale reports. Congress provided these funds in the same legislation that contained the rider restricting consolidation and centralization of licensee data. In conclusion, we believe that the current Multiple Sales System does not violate either data restriction provision.

Comments From the Bureau of Alcohol, Tobacco and Firearms

The following is GAO’s comment on ATF’s August 23, 1996, letter.

GAO Comment

1. Concurrent with ATF’s Chief Counsel’s review of the unaudited systems in appendix IV, ATF’s Office of Science and Information Technology recategorized several of the ATF data systems discussed in appendix III. One of these systems, the Firearms Explosives and Import System, was determined not to identify retail firearms purchasers and thus we deleted it from appendix IV.

Major Contributors to This Report

General Government Division, Washington, D.C.
Accounting and Information Management Division, Washington, D.C.
Office of the General Counsel, Washington, D.C.
Jan B. Montgomery, Assistant General Counsel
Rosemary Healy, Senior Attorney
Pursuant to a congressional request, GAO reviewed various aspects of the Bureau of Alcohol, Tobacco, and Firearms' (ATF) operations, focusing on ATF compliance in meeting specific legislative restrictions in 18 U.S.C. 926(a) regarding federal firearms licensee data. GAO found that: (1) ATF identified and described 14 national data systems and 4 subsystems that relate to firearms; (2) according to ATF, five systems and one subsystem contain data that readily identify retail purchasers or possessors of specific firearms; (3) the Out-of-Business Records System contains records that federal firearms licensees are required by statute to forward to ATF within 30 days following a permanent discontinuance of their business; (4) the Multiple Sales System contains data from reports that federal firearms licensees are required by statute to send to ATF showing sales or other dispositions of two or more pistols and/or revolvers to an unlicensed person at one time or during any 5 consecutive business days; (5) the two systems, as designed, comply with the data restrictions and do not violate the appropriation rider prohibition against consolidating or centralizing licensee records; (6) on the basis of GAO's review, observations, and discussions with ATF officials, it believes that ATF operated the two systems consistently with their design, with one exception relating to the Multiple Sales System; (7) GAO agrees with ATF's view of section 926(a), but GAO believes that ATF's interpretation of the annual appropriation rider was too narrow; (8) ATF contended that both section 926(a) and the appropriation rider restricted it from issuing rules and regulations imposing additional reporting requirements on licensees but did not restrict what it did internally with information it otherwise acquired; (9) GAO agrees that the restriction in section 926(a) limits ATF only from prescribing certain rules or regulations, but the appropriation rider contains no language that would limit its 
application either to prescribing rules and regulations or to imposing additional reporting requirements on licensees; (10) GAO believes that the rider has legal effect independent of section 926; (11) GAO does not believe that the rider precludes all information practices and data systems that involve an element of "consolidating or centralizing" licensee records; (12) to the extent that the centralization or consolidation of firearms transaction records is incident to carrying out a specific ATF responsibility and does not entail the aggregation of data on firearms transactions in a manner that would go beyond the purposes of the Gun Control Act of 1968, as amended, GAO does not believe that the rider would be violated; and (13) given its legal position on the limited scope of the rider, ATF had not systematically analyzed its data systems and information practices to give appropriate effect to the appropriation rider.
Background

USPTO administers U.S. patent and trademark law to encourage innovation and advance science and technology in two ways. First, USPTO grants to inventors exclusive rights to their inventions for a limited period of time, usually 20 years. During this time, the inventor can exclude others from making, using, selling or importing the invention. Second, the agency preserves and disseminates patent information, for example on issued patents and most patent applications. Such information allows other inventors to improve upon the invention in the original application and apply for their own patent. To obtain a patent, inventors—or more usually their attorneys or agents—submit to USPTO an application that fully discloses and clearly describes one or more distinct innovative features of the proposed invention (called claims) and pay a filing fee to begin the examination process. USPTO evaluates the application for completeness, classifies it by the type of patent and the technology involved, and assigns it for review to one of its operational units, called technology centers, that specialize in specific areas of science and engineering. Supervisors in each technology center then assign the application to a patent examiner for further review. For each claim in the application, the examiner searches and analyzes relevant United States and international patents, journals, and other literature to determine whether the proposed invention merits a patent—that is, whether the invention is a new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement to one that already exists. The examiner may contact the applicant on one or more occasions to resolve questions and obtain additional information to determine the proposed invention’s potential patentability. If the examiner determines that the proposed invention merits a patent, the applicant is informed, and, upon payment of a fee, USPTO issues a patent.
The applicant may abandon the application at any time during the examination process. If the application is denied a patent, the applicant may appeal the decision within an established time. Each examiner typically reviews applications in the order in which they are received by USPTO. The time from the date an application is filed until a patent is granted, denied, or the application is abandoned is called “overall pendency.” Over the past decade, overall pendency has increased on average from 20 to almost 28 months. However, pendency varies by technology center, ranging from 24 months for applications in such fields as transportation, agriculture, electronic commerce, mechanical engineering, and manufacturing to 41 months for applications in the fields of computer architecture, software and information security (see table 1). In addition to overall pendency, USPTO monitors the time from when an application is filed until the examiner makes an initial assessment of the proposed invention’s patentability and informs the applicant, called first action pendency. First action pendency also has generally increased in the past decade from 8 to over 20 months. In 2004, first action pendency ranged from an average of 14 months for applications in such fields as semiconductors and optical systems to 33 months for computer architecture and software applications. Such measures of pendency help USPTO assess its effectiveness in reviewing patent applications.

USPTO Has Made Greater Progress on Strategic Plan Initiatives That Enhance the Agency’s Capability Rather Than Productivity and Agility

USPTO has made greater progress in implementing the Strategic Plan initiatives aimed at making the patent organization more capable than in implementing its productivity and agility initiatives.
Specifically, of the activities planned for completion by December 2004, the agency has fully or partially implemented all 23 of the initiatives related to its capability theme to improve the skills of employees, enhance quality assurance, and alter the patent process through legislative and rule changes. In contrast, USPTO has partially implemented only 1 of the 4 initiatives related to the productivity theme to restructure fees and expand examination options for patent applicants and has fully or partially implemented 7 of the 11 initiatives related to the agility theme to increase electronic processing of patent applications and reduce examiners’ responsibilities for literature searches. In explaining why some initiatives have not been implemented, agency officials primarily cited the need for additional funding. With passage of the legislation in December 2004 to restructure and increase the fees available to USPTO, the agency is re-evaluating the feasibility of many initiatives that it had deferred or suspended. For more details on USPTO’s progress in implementing the 38 initiatives in the Strategic Plan, see appendix III.

USPTO Has Made Substantial Progress on Its Capability Initiatives

To improve the quality of its reviews of patent applications through workforce and process improvements, USPTO developed 23 capability initiatives: 9 to improve the skills of its workforce, 5 to enhance its quality assurance program, and 9 to improve processes through legislative and rule changes.

Workforce Skills Improvements

As shown in table 2, USPTO has implemented 5 and partially implemented 4 of the 9 workforce skills initiatives. Although the agency has not estimated how much funding would be needed to implement the final 4 initiatives, their full implementation was hindered in part by funding constraints, agency officials said.
The current status of these partially completed initiatives is as follows: To improve the selection and training of managers, USPTO has added proficiency in supervisory skills to the requirements for a supervisory examiner and in 2004 required applicants for such positions to pass an examination, but the agency has not fully developed the supervisory curriculum or trained supervisors. To help ensure that new examiners have the requisite skills prior to promotion, USPTO has identified the knowledge, skills, and abilities needed for patent examiners and established training units in work groups for new examiners, but has not developed a structured process for subsequent promotions. To implement a pre-employment test to assess English language communication skills of new patent examiners, USPTO has, among other things, revised its vacancy announcements to include English language proficiency as a required skill but has not developed an automated pre-employment test of such skills. USPTO has developed an action plan to establish an Enterprise Training Division, which was to have been in place in 2003, to consolidate responsibility for conducting legally required and other agencywide training, developing training policy, and monitoring funds spent on training.

Quality Assurance Enhancements

As shown in table 3, USPTO has implemented 3 and partially implemented 2 of the 5 capability initiatives to enhance its quality assurance program. The status of the initiatives USPTO has partially implemented is as follows: The agency has begun to develop a plan and criteria to review the quality of searches and anticipates incorporating such reviews in the quality assurance program during fiscal year 2006.
To enhance the reviewable record for patent applications, USPTO has developed guidance and amended forms to allow both examiners and applicants to provide additional information on the content of interviews and reasons for decisions and strongly recommends, rather than requires, applicants and examiners to do so.

Process Improvements Related to Legislative and Rule Changes

As shown in table 4, of the 9 capability initiatives to streamline patent processing through legislative and rule changes, USPTO has implemented 1 and partially implemented 8. Although full implementation of these initiatives is largely dependent on actions by Congress, the status of the 8 partially implemented initiatives is as follows: To certify the legal knowledge of newly registering and practicing patent attorneys and agents and to monitor their practice, the agency offers registration examinations electronically year-round and issued proposed rules to harmonize ethics and disciplinary actions with the requirements in place in most states, but has not yet developed a formal program of continuing legal education requirements to periodically recertify the skills of practicing attorneys and agents. To evaluate whether to adopt a unity standard to harmonize U.S. examination practices with international standards and allow U.S. applicants to obtain a single patent on related claims that must currently be pursued in separate patent applications in the United States, USPTO began a study of the changes needed to adopt a unity standard and sought public comment but has not completed its analysis, reached a decision, or drafted and introduced implementing legislation. For the other 6 partially implemented initiatives, USPTO is drafting proposed legislation or obtaining administrative clearance to introduce it.
USPTO Has Made Less Progress Implementing Its Productivity and Agility Initiatives

As shown in table 5, USPTO has not implemented 3 of the 4 initiatives that focus on accelerating the time to process patent applications and expanding public input, and has partially implemented only 1 of the productivity initiatives that allow the agency to increase fees and retain the funds. Following passage of legislation in 2004, USPTO has issued rules to increase fees generally and restructure fees to include separate components for different stages of processing both domestic and international patent applications, and for filing the application, searching the literature, and examining the claims. The separate components could, under certain circumstances, be refunded to the applicant. USPTO has not issued rules governing the refund of domestic fees. The revised fees are effective for 2005 and 2006. Similarly, as shown in table 6, of the 11 initiatives related to agility, USPTO has not implemented 4, has fully implemented only 1, and has partially implemented 6. These 11 initiatives are designed to further the agency’s goal to create a more flexible organization and include efforts to increase electronic processing of patent applications, reduce examiners’ responsibilities for literature searches, and participate in worldwide efforts to streamline processes and strengthen intellectual property protection. The status of the 6 partially implemented agility initiatives to increase electronic processing and harmonize U.S. and international practices is as follows: Although USPTO has largely accomplished the actions related to implementing image-based electronic processing of patent applications, it has not achieved the full extent of electronic sharing of patent documents with the European Patent Office that the initiative had anticipated, and the two offices continue to finalize security and protocols between their servers.
USPTO has amended rules to generally allow electronic filing of postgrant review documents and trained additional judges in streamlined procedures, but it has not defined records management schedules for electronic documents or implemented full electronic processing capabilities to support these reviews, such as text searching and the ability to receive, file, store, and view multimedia files. To ensure the availability of critical data in the event of a catastrophic failure, USPTO has certified and accredited its classified system and its mission-critical and business-essential systems, uses scanning tools to identify security weaknesses, and uses intrusion detection systems, but has not acquired the hardware, software, staff, and facilities for a backup data center. To promote harmonization of patent processing among international intellectual property offices and pursue goals to strengthen international intellectual property rights of U.S. inventors, USPTO participated in substantive patent treaty discussions that addressed such topics as the first-to-file (European) versus the first-to-invent (U.S.) standards, access to genetic resources, and definitions for such terms as prior art and novelty. To pursue multi- and bilateral agreements with other intellectual property offices, USPTO completed pilot programs to compare search results with the Japan and European Patent Offices and with patent offices in Australia and the United Kingdom. Regarding the acceleration of Patent Cooperation Treaty reforms, USPTO indicated that many significant reform procedures have been adopted in the last several years. Although USPTO has not determined how much funding would be needed, officials said that the lack of adequate funding largely limited its ability to complete planned actions on productivity and agility initiatives that had not been fully implemented. 
With passage of the fee-restructuring legislation in December 2004, USPTO plans to commence work on these suspended initiatives. For example, it has assigned new teams to evaluate the feasibility of using contractors and international intellectual property offices to conduct literature searches. For greater detail on USPTO’s progress in implementing the 38 initiatives in the Strategic Plan, see appendix III.

USPTO Has Taken Steps to Help Attract and Retain a Qualified Patent Examiner Workforce, but Long-Term Success Is Uncertain

Since 2000, USPTO has taken steps intended to help attract and retain a qualified patent examination workforce. The agency has enhanced its recruiting efforts and has used many human capital flexibilities to attract and retain qualified patent examiners. However, during the past 5 years, the agency’s recruiting efforts and use of benefits have not been consistently sustained, and officials and examiners at all levels in the agency told us that the economy has more of an impact on USPTO’s ability to attract and retain examiners than any actions taken by the agency. Consequently, how the agency’s actions will affect its long-term ability to maintain a highly qualified workforce is unclear. While USPTO has been able to meet its hiring goals, attrition has recently increased.

USPTO Has Enhanced Recruiting Efforts to Attract Qualified Examiners

USPTO’s recent recruiting efforts have incorporated several measures identified by GAO and others as necessary to attract a qualified workforce. First, in 2003, to help select qualified applicants, USPTO identified the knowledge, skills, and abilities that examiners need to effectively fulfill their responsibilities. As part of this study, USPTO conducted focus group meetings with, and surveys of, experienced examiners to identify and validate key skills. In doing so, the agency was responding to a recommendation from the Department of Commerce’s OIG to better target candidates likely to stay at USPTO.
Second, in 2004, the agency’s permanent recruiting team, composed of senior and line managers, participated in various recruiting events, including visits to the 10 schools that the agency targeted based on the diversity of their student population and the strength of their engineering and science programs. The team also visited 22 additional schools, participated in two job fairs, and attended three conferences sponsored by professional societies. To assist the recruiting team, USPTO hired a consultant to develop a new brand image for the agency, shown in figure 1 below. As part of this effort, USPTO and the consultant surveyed USPTO managers and supervisors and conducted focus groups with a range of ethnically diverse audiences, from college seniors to experienced professionals, to identify the characteristics of examiners and how the target market perceives the agency, as well as to get a sense of their work habits, values, and perceptions of work at USPTO. According to USPTO, the agency’s new brand focuses on the vital role intellectual property plays in the U.S. economy and the career momentum of patent examiners. Agency officials said that USPTO uses its employment brand image at every opportunity, from Internet banner ads to print advertisements. They believe that this has enhanced public awareness of the agency and has helped distinguish USPTO from other employers. Finally, for 2005, USPTO developed a formal recruiting plan that, among other things, identified hiring goals for each technology center and described USPTO’s efforts to establish ongoing partnerships with the 10 target schools. In addition, USPTO trained its recruiters in effective interviewing techniques to help them better describe the production system and incorporated references to the production-oriented work environment in its recruitment literature. 
During a USPTO career fair in February 2005, we observed that potential candidates were provided with a range of information about the work environment at the agency, received handouts, and heard a formal presentation about the agency and the role and responsibilities of a patent examiner. The presentation also included overviews of the basics of intellectual property, the patent examination process, USPTO’s production model, the skill set needed for a successful patent examiner, and the benefits the agency offers.

USPTO Has Used Many Federal Human Capital Benefits to Attract and Retain Examiners

USPTO has used many of the human capital benefits available under federal personnel regulations to attract and retain qualified patent examiners. Among other benefits, USPTO has offered recruitment bonuses ranging from $600 to over $10,000; a special pay rate for patent examiners that is 10 percent above federal salaries for comparable jobs; noncompetitive promotion to the full performance level; flexible spending accounts that allow examiners to set aside funds for expenses related to health care and care for dependents; reimbursement for law school tuition; a transit subsidy program that was recognized in 2003 and 2004 as one of the best in the greater Washington, D.C., area; flexible working schedules, including the ability to schedule hours off during the workday; work-at-home opportunities for certain supervisory and senior examiners; no-cost health screenings at an on-site health unit staffed with a registered nurse and part-time physician; a casual dress policy; and on-site child care and fitness centers at USPTO’s new facility. According to many of the supervisors and examiners in our focus groups, these benefits were a key reason they were attracted to USPTO and are a reason they continue to stay. The benefits most frequently cited as important by examiners were the flexible working schedules and competitive salaries.
Many supervisors and examiners said that the ability to set their own hours allowed them to better coordinate their work schedules with their personal commitments, such as a child’s school or day care schedule. Concerning salaries, examiners also cited the special pay rate offered by USPTO as increasing the agency’s competitiveness with the private sector. Although entry-level pay for examiners may not be as high as in the private sector, examiners who have been with the agency for about 5 to 7 years can earn up to $100,000 annually, and new examiners can increase their pay relatively rapidly, in part because of the noncompetitive promotion potential available at the agency. However, some examiners commented that the benefit of the special pay rate is eroding over time because examiners do not receive annual locality pay adjustments to compensate for the high cost of living in the Washington, D.C., area. According to USPTO management, in 2002 the agency sought such an adjustment, but OPM denied the request because of a lack of justification. In addition to basic salary, examiners may also earn various cash awards based on production or other types of meritorious performance. Lack of Consistent Recruiting Efforts and Benefits, along with Changes in the Economy, Could Affect USPTO’s Efforts The long-term effect of USPTO’s recruiting efforts and use of benefits is difficult to predict for a variety of reasons. First, many of USPTO’s efforts have been in place for a relatively short duration and have not been consistently maintained. For example, as shown in table 7, USPTO suspended recruitment and hiring in fiscal year 2000, which agency officials said resulted in its inability to meet its hiring goals for the year. With the exception of 2002, in the years when USPTO used its recruiting strategy consistently, such as 2001, 2003, and 2004, the agency not only met its hiring goals but exceeded them.
The second source of uncertainty about USPTO’s success in retaining examiners is that the agency has occasionally suspended some important employee benefits. For example, funding constraints led USPTO to discontinue reimbursing examiners for their law school tuition in 2002 and 2003, although the agency resumed reimbursement in 2004, when funding became available. Examiners who participated in our focus groups expressed dissatisfaction with the inconsistent availability of the benefits. Regarding law school tuition reimbursement, one examiner said, “I started when they started the [program] and then they cut it off and I had to pay myself, which creates a large incentive to leave the office now that I have . . . student loans to pay off.” Other examiners expressed similar views. More recently, in March 2005, USPTO proposed to eliminate or modify other benefits, such as examiners’ ability to earn credit hours, and to alter examiners’ ability to set their own work schedules. For example, unlike current practice, examiners would no longer be able to schedule hours off during midday without a written request approved in advance. These benefits were cited by examiners in our focus groups as key reasons for working at USPTO, and eliminating such benefits may affect future retention. The third and possibly the most important factor that adds to the uncertainty surrounding the success of USPTO’s recruitment efforts is the unknown potential impact of the economy. According to USPTO officials and examiners, because USPTO competes directly with the private sector for qualified individuals, changes in the economy have a greater impact on USPTO’s ability to attract and retain examiners than any actions taken by the agency. They told us that when the economy picks up, more examiners tend to leave USPTO and fewer qualified candidates accept employment offers. Conversely, they said that when there is a downturn in the economy, employment opportunities at USPTO become more attractive.
When discussing reasons for joining USPTO, many examiners in our focus groups cited job security and lack of other employment opportunities, making comments such as “I had been laid off from my prior job, and this was the only job offer I got at the time”; “I looked towards the government because I wanted job security”; and “. . . part of the reason I came to the office is that when I first came out of college, the job market was not great.” The relationship between the economy and USPTO’s ability to attract and retain examiners is reflected in its attrition rates over time. As shown in figure 3, attrition among patent examiners declined from a high of almost 14 percent in 2000 to just over 6 percent in 2003. This decline coincided with a recession in 2001, a general slowdown of the economy, and subsequent collapse of the “high tech bubble”—which caused many Internet-based businesses to close, leaving computer scientists and engineers out of work. The decline in attrition was preceded by a more robust economy during a time when the high-tech industry was building up. At that time, attrition at USPTO was steadily rising. Since 2004, attrition has risen again to almost 9 percent, fueled in part by an increase in the number of examiners who retired. By the end of fiscal year 2010, about 12 percent of examiners will be eligible to retire. Another trend that could affect USPTO’s efforts to maintain a highly qualified patent examination workforce is the high level of attrition among younger, less experienced examiners. While attrition among examiners who have been at USPTO for 3 or fewer years has declined each year since 2000, attrition among these examiners continues to account for over half of all examiners who leave the agency. Attrition of examiners with 3 or fewer years of experience is a particularly significant loss for USPTO because the agency invests considerable time and money helping new examiners become proficient during the first few years. 
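Attrition figures like those cited above are conventionally computed as separations during a fiscal year divided by the average number of examiners on board. The sketch below illustrates the calculation with hypothetical headcounts, not actual USPTO data:

```python
# Illustrative annual attrition-rate calculation. The headcounts below
# are hypothetical examples, not figures from the report.

def attrition_rate(separations: int, start_headcount: int, end_headcount: int) -> float:
    """Return the annual attrition rate as a percentage of average headcount."""
    avg_headcount = (start_headcount + end_headcount) / 2
    return 100 * separations / avg_headcount

# Hypothetical example: 250 separations from an examiner corps that
# grew from 3,000 to 3,200 over the year.
rate = attrition_rate(250, 3000, 3200)
print(f"{rate:.1f}%")  # 250 / 3,100 average headcount -> 8.1%
```

Under this definition, a rate near 9 percent, as reported for 2004, means roughly one examiner in eleven left during the year.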
Managers and examiners told us that examiners usually become fully proficient in conducting patent application reviews in about 4 to 6 years. Managers we spoke with said the agency needs continuous recruiting efforts to offset these trends and continue to attract the best candidates. They said they hope to have constant recruitment efforts and year-round hiring in the upcoming years. USPTO Faces Long-standing Human Capital Challenges That Could Undermine Its Recruiting and Retention Efforts Although USPTO has taken a number of steps to attract and retain a qualified patent examiner workforce, the agency continues to face three long-standing human capital challenges that could also undermine its efforts in the future if not addressed. Current workforce models developed by GAO and others to help federal agencies attract and retain a qualified workforce suggest, among other things, that agencies establish an agencywide communication strategy that includes opportunities for feedback from employees; involve management, employees, and other stakeholders in making key decisions; have appropriately designed compensation and awards systems; and develop strategies to address current and future competencies and skills needed by staff. However, USPTO lacks a collaborative culture, has an awards system that is based on outdated information, and requires little ongoing technical training for patent examiners. USPTO management and examiners do not agree on the need to address these issues. USPTO Has Not Established Effective Mechanisms for Managers to Communicate and Collaborate with Examiners Organizations with effective human capital models have strategies to communicate with employees at all levels of the organization and to involve them in key decision-making processes. However, lack of good communication and collaboration has been a long-standing problem at USPTO.
For example, focus groups with examiners conducted by USPTO in 2000 identified a need for improved communication across all levels of the agency to assist in its efforts to retain examiners. Accordingly, one of the goals listed in the Commissioner for Patents’ 2003 performance appraisal plan was to establish an effective communication strategy. However, when we asked for the agency’s communication strategy, USPTO management officials acknowledged that the agency does not have a formal strategy. Instead, USPTO officials provided us with a list of activities undertaken by the agency to improve communication. However, most of these activities focused on improving communication among managers but not between managers and other levels of the organization, such as between managers and patent examiners. The efforts to communicate with examiners were largely confined to presenting information to examiners and generally were not interactive, according to examiners. Patent examiners and supervisory patent examiners who participated in our focus groups frequently said that communication with USPTO management was poor and that managers provided them with inadequate or no information. They also said management is out of touch with examiners and their concerns and that communication with managers tends to be one-way and hierarchical, with little opportunity for feedback. Management officials told us that informal feedback can always be provided by anyone in the organization—for example, through an e-mail to anyone in management. However, some patent examiners believe they will be penalized for offering any type of criticism of management actions or decisions and therefore do not provide this kind of feedback.
The lack of communication between management and examiners is exacerbated by the contentious working relationship between USPTO management and union officials and the complexity of the rules about what level of communication can occur between managers and examiners without involving the union. Union officials stated that a more collaborative spirit existed between USPTO and the examiners’ union from the late 1990s to about 2001. During this period, both parties actively worked to improve their relationship. For example, in 2001, USPTO management and the union quickly reached an agreement that led to increased pay for examiners and paved the way for electronic processing of patent applications by having examiners rely more heavily on electronic searches of relevant patent literature. According to union officials, this agreement was negotiated in about 1-1/2 weeks, improved the morale of patent examiners, and made them feel valued and appreciated. Since that time, however, both USPTO management and union officials agree that their working relationship has not been as productive. Both say that despite several attempts, neither USPTO managers nor union officials have improved this relationship and that issues raised by either side are routinely presented for arbitration before the Federal Labor Relations Authority because the two sides cannot agree. USPTO and union officials are currently disputing the validity of their 1986 collective bargaining agreement, which USPTO deems defunct. In February 2004, this issue was presented for arbitration to determine the validity of the agreement. According to union officials, the arbitrator agreed with their position that the agreement was still valid and ordered a 1-year hiatus on negotiations on a new agreement. USPTO contends that the arbitrator said the two had “tacit agreements” but did not define the term.
In March 2005, without continuing any debate regarding the validity of the 1986 agreement, USPTO issued a proposed new collective bargaining agreement with the union. The union denounced this proposal, reporting in its newsletter to examiners that “USPTO declares war on employee professionalism and patent system integrity.” Some USPTO managers alluded to this contentious relationship as one of the reasons why they have limited communication with patent examiners, who are represented by the union even if they decide not to join. Specifically, they believe they cannot solicit the input of employees directly without engaging the union. Another official, however, told us that nothing prevents the agency from having “town hall” type meetings to discuss potential changes, as long as the agency does not promise examiners a benefit that affects their working conditions. Union officials agreed that USPTO can invite comments from examiners on a plan or proposal; however, if the proposal concerns a negotiating issue, the agency must consult the examiners’ union, which is their exclusive representative with regard to working conditions. For example, union officials said that agency management can involve examiners in discussions of substantive issues related to patent law and practice, such as how to implement electronic filing, but must consult the union to obtain examiners’ views on issues such as the development of the Strategic Plan, which contains initiatives that would entail, for example, additional reviews of examiners’ work and other changes to working conditions. Given the lack of effective communication mechanisms between management and patent examiners and the poor relationship between management and the union, patent examiners report little involvement in providing input to key decision-making processes.
For example, some of the examiners in our focus groups stated that although they had heard of the agency’s Strategic Plan, they were not involved in developing it and had no idea what it entailed or how it was to be implemented. USPTO management officials we spoke to acknowledged that employees had no role in developing the Strategic Plan even though USPTO identifies its employees as a key stakeholder in the plan. This lack of employee involvement is not a new problem for the agency. For example, a study about the agency’s performance measurement and rewards system conducted in 1995 by a private consultant stated that the agency must strive to include employees at all levels of the organization in the decision-making process, both to introduce a variety of perspectives and experiences and to generate the critical support of employees for any new system developed. Additionally, responses to employee surveys conducted in 1998 and 2001 by USPTO and others indicate that employees believed that they did not play a meaningful role in decision making. Specifically, only a quarter of the examiners surveyed in 1998 expressed satisfaction with their level of involvement in decisions that affect their work. In 2001, fewer than half of the examiners who responded to the survey said they believe USPTO management trusts and respects them or values their opinions. Agency-specific data from the 2004 federal human capital survey conducted by the Office of Personnel Management have not been released. Managers told us that examiners do not need to be involved in decision making because all of the agency’s senior managers—from the Commissioner down—“came up through the ranks.” Moreover, they said the basic role of the agency has not changed in 200 years. As a result, senior managers believe they bring the staff perspective to all planning and decision-making activities.
However, examiners in our focus groups believe that senior managers are out of touch with the role of examiners, making comments such as “I think it would help if upper management who haven’t examined in decades could try to do some of it now—it’s so drastically different than when they were doing it—and realize how difficult it is, and then maybe they might get a clue. I really don’t think that they realize how much work it takes to examine an application. It is so different than when they were examining.” Examiners in our focus groups said that the lack of communication and involvement has created an atmosphere of distrust in management officials by examiners and has lowered examiners’ morale. Examiners’ Monetary Awards Are Based on Outdated Assumptions about the Time It Takes to Process a Patent Application According to human capital models, an agency’s compensation and rewards system should help it attract, motivate, retain, and reward the people it needs to achieve its goals. To ensure that their systems meet these criteria, agencies should periodically assess how they compensate staff and consider changes, as appropriate. Patent examiners’ monetary awards are based largely on the number of patent applications they process, but the assumptions underlying their annual application-processing quotas (called production quotas) have not been updated since 1976. Depending on the type of patent and the skill level of the examiner, each examiner is expected to process an average of 87 applications per year at a rate of 19 hours per application. Examiners who consistently do not meet their quotas may be dismissed. Patent examiners may earn cash awards based on the extent to which they exceed their production quotas. Although examiners in our focus groups generally support production quotas as a way to guide their work and provide an objective basis for cash awards, they said that the time estimates involved are no longer accurate. 
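As a rough check on what the quota described above implies, multiplying the average of 87 applications per year by 19 hours per application gives about 1,653 examining hours, close to 80 percent of a standard 2,080-hour work year. The 2,080-hour figure (52 weeks at 40 hours) is a general benchmark, not taken from the report:

```python
# Back-of-the-envelope arithmetic for the average production quota:
# 87 applications per year at 19 hours per application (from the report).
# The 2,080-hour work year is a standard benchmark, not a USPTO figure.

APPLICATIONS_PER_YEAR = 87
HOURS_PER_APPLICATION = 19
WORK_YEAR_HOURS = 52 * 40  # 2,080 hours

examining_hours = APPLICATIONS_PER_YEAR * HOURS_PER_APPLICATION
share_of_year = examining_hours / WORK_YEAR_HOURS

print(examining_hours)         # 1653 hours of examination per year
print(f"{share_of_year:.0%}")  # roughly 79% of a 2,080-hour year
```

This arithmetic helps explain why examiners report routinely working voluntary overtime: the quota consumes most of a standard work year before any non-production duties are counted.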
Examiners in our focus groups told us that, in the last several decades, the tasks for processing applications have greatly increased while the time allowed has not. For example, examiners said the number of claims per application has increased, which in turn increases the amount of relevant literature they must review and analyze for each application. Also, while the greater use of electronic search tools has improved their access to relevant patent literature, the use of such tools has also increased the amount of literature they must review. In addition, the complexity of applications in some fields has increased significantly, requiring more time for a quality review. Neither USPTO nor the examiners’ union has collected information on the effects that changes such as improved electronic search capabilities have had on the time required to review patent applications. Moreover, many examiners in our focus groups said that the time limitations of the current production quotas are inconsistent with producing high-quality work and do not adequately reflect the actual tasks and time required to examine applications. For example, examiners have responsibilities included in their job expectations, such as responding to calls from applicants and the public and providing more documentation for their decisions, which are not accounted for in the production model. Examiners expressed concern that although the agency’s emphasis on quality has increased under the Strategic Plan, examiners have not been allowed more time to fulfill these increased responsibilities for quality, and there are no negative consequences for examiners who produce low-quality work. Examiners told us that voluntarily working overtime to meet quotas is common at USPTO, and they find it demoralizing not to have enough time to do a good-quality job.
In commenting on a draft of this report, USPTO stated that quality is a critical element of an examiner’s performance standards and that if an examiner does not maintain quality, his or her rating will reflect this deficiency. Consequences would depend on the level of deficiency. Employee surveys conducted since 1998 suggest that these concerns are not new to the agency. Specifically, only a quarter of the examiners who responded to the agency’s employee surveys during the period 1998 to 2001 said that the amount of time available for their work was sufficient to produce high-quality products and services. The 1995 study conducted by a private consultant also noted that USPTO is production driven and that the agency’s emphasis on production placed considerable stress on examiners. Although less than 25 percent of patent examiners who left USPTO in 2002 and 2004 actually completed an exit survey, about half who did cited dissatisfaction with the nature of the job, the production system, and the workload as factors that had the most impact on their decision to leave the agency. In contrast, USPTO managers had a different perspective on the production model and its impact on examiners. They stated that the time estimates used in establishing production quotas do not need to be adjusted because the efficiencies gained through actions such as the greater use of technology have offset the demands resulting from changes such as greater complexity of the applications and increases in the number of claims. Moreover, they said that for an individual examiner, reviews of applications that take more time than the estimated average are generally offset by other reviews that take less time. USPTO Does Not Require Ongoing Technical Education for Patent Examiners Current workforce models suggest that professional organizations such as USPTO make appropriate investments in education, training, and other developmental opportunities to help build the competencies of their employees.
Reviewing patent applications involves knowledge and understanding of highly technical subjects, but USPTO does not require ongoing training on these subjects. Instead, USPTO only requires newly hired examiners to take extensive training on how to be a patent examiner during the first year, and all other required training is focused on legal training. For example, newly hired examiners are required, within their first 10 months at the agency, to take about 200 hours of training on such topics as procedures for examining patent applications, electronic tools used in the examination process, and patent law and evidence. In addition, almost all patent examiners are required to take a range of ongoing training on legal matters, including patent law. As a result of the implementation of some Strategic Plan initiatives, additional mandatory training to help examiners prepare for tests to certify their legal competency and ensure their eligibility for promotion from a GS-12 level to a GS-13 is also required. In addition, patent examiners who have the authority to issue patents (generally GS-14s or above) must pass tests on the content of legal training every 3 years. In contrast, patent examiners are not required to undertake any ongoing training to maintain expertise in their area of technology, even though the agency acknowledges that such training is important, especially for electrical and electronic engineers. Specifically, in its 2001 justification for examiners’ special pay rates, the agency stated, “Engineers who fail to keep up with the rapid changes in technology, regardless of degree, risk technological obsolescence.” USPTO does offer some voluntary in-house training, such as technology fairs and industry days at which scientists and others are invited to lecture to help keep patent examiners current on the technical aspects of their work. 
Because this training is not required by USPTO, patent examiners told us they are reluctant to attend such training given the time demands involved. USPTO also offers a voluntary external training program for examiners to update their technical skills. Under this program, examiners may take technical courses related to their area of expertise at an accredited college or university. USPTO will pay up to $5,000 per fiscal year for each participant and up to $150 per course for required materials, such as books and lab fees. In addition, agency managers told us the agency will pay registration fees for a small number of examiners to attend conferences, although sometimes it will not pay travel expenses. While USPTO officials told us they knew of examiners who had taken advantage of these opportunities, the agency could provide no data on the extent to which examiners had taken advantage of these voluntary training opportunities. Some examiners in our focus groups said that they did participate in these training opportunities, but others said they did not because of the monetary costs or personal time involved. USPTO believes that a requirement for ongoing technical training is not necessary for patent examiners because the nature of the job keeps them up-to-date with the latest technology. According to agency officials, the primary method for examiners to keep current in their technical fields is by processing patent applications. However, patent examiners and supervisors in our focus groups said that often the literature cited in the application they review for patents, particularly in rapidly developing technologies, is outdated, can be too narrowly focused, and does not provide them the big picture of the field. For example, in certain fields, such as computer software and biotechnology, some examiners told us that the information cited in the application may be several years old even though it may have been current at the time the application was submitted. 
Conclusions To improve its ability to attract and retain the highly educated and qualified patent examiners it needs, USPTO has taken a number of steps recognized by experts as characteristic of highly effective organizations. However, the lack of an effective communication strategy and a collaborative environment that is inclusive of all layers within the organization could undermine some of USPTO’s efforts. Specifically, the lack of communication and collaborative culture has resulted in a general distrust of management by examiners and has caused a significant divide between management and examiners on important issues such as the appropriateness of the current production model and the need for technical training. We believe that unless USPTO begins the process of developing an open, transparent, and collaborative work environment, its efforts to hire and retain examiners may be negatively impacted in the long run. Recommendations for Executive Action We recommend that the Secretary of Commerce direct the Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office to take the following two actions: develop formal strategies to (1) improve communication between management and patent examiners and between management and union officials, and (2) foster greater collaboration among all levels of the organization to resolve key issues discussed in this report, such as the assumptions underlying the quota system and the need for required technical training. Agency Comments and Our Evaluation In written comments on a draft of our report, the Under Secretary of Commerce for Intellectual Property and Director of USPTO agreed with our findings, conclusions, and recommendations. The agency’s comments suggest that USPTO will develop a communication plan and labor management strategy and educate and inform employees about progress on initiatives, successes, and lessons learned. 
In addition, USPTO indicated that it would develop a more formalized technical training program for patent examiners to ensure that their skills are fresh and ready to address state-of-the-art technology. USPTO also provided technical comments that we have incorporated, as appropriate. USPTO’s comments are included in appendix II. We are sending copies of this report to interested congressional committees; the Secretary of Commerce; the Under Secretary for Intellectual Property and Commissioner of the U.S. Patent and Trademark Office; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. Scope and Methodology We were asked to report on the U.S. Patent and Trademark Office’s (USPTO) (1) overall progress in implementing the initiatives in the 21st Century Strategic Plan related to the patent organization; (2) efforts to attract and retain a qualified patent workforce; and (3) remaining challenges, if any, in attracting and retaining a qualified patent workforce. To determine USPTO’s progress toward implementing the Strategic Plan initiatives for the patent organization, we reviewed the initiatives contained in the plan, as well as agency documents regarding USPTO’s progress in implementing each initiative. We also interviewed key USPTO officials and union officials about the plan’s implementation.
To determine what actions USPTO has taken to attract and retain a qualified patent workforce and what challenges, if any, the agency faces in this area, we reviewed USPTO’s Workforce Plan and other policies and practices related to human capital. We interviewed USPTO management, union officials, and relevant interest groups, as well as officials from the Department of Commerce, its Office of Inspector General (OIG), and the Office of Personnel Management (OPM) about human capital initiatives undertaken by USPTO. We reviewed evaluations of USPTO human capital management efforts by OIG and by a private consultant. We reviewed USPTO employee surveys, USPTO documents on hiring and retention, and OPM reports on USPTO. We also reviewed results from USPTO and OPM employee surveys and compared human capital policies and practices with best practices recommended by GAO and OPM. In addition, we attended a USPTO career fair for patent examiners. To obtain the perspective of patent examiners and supervisory patent examiners on issues related to USPTO’s ability to attract and retain a qualified patent examination workforce, we conducted 11 focus groups. Participants were randomly selected from all patent examiners and supervisory patent examiners who had been at USPTO at least 9 months. A total of 91 examiners and supervisory examiners attended the focus groups. The number of participants in the groups ranged from 6 to 11; participants in 8 of the groups were patent examiners while the other 3 groups encompassed supervisory patent examiners. Participants were selected from both USPTO locations (Alexandria and Crystal City, Virginia). We developed questions for the focus groups based on literature reviews and by speaking with USPTO management, union officials, and interest groups. In addition, we developed a short questionnaire that asked for individual views of issues similar to those being discussed in the groups. 
Following each discussion question, participants filled out the corresponding questions in their questionnaires. Trained facilitators conducted the focus groups and transcripts were professionally prepared. Prior to using the transcripts, we checked each for accuracy and found that they were sufficiently accurate for the purposes of this study. We conducted a content analysis in order to produce a summary of the respondents’ comments made during the focus groups. The classification plan was developed by two GAO analysts who independently reviewed the transcripts and proposed classification categories for each question. The classification categories were finalized through discussion with a third analyst. One analyst then coded all comments made during each discussion question into the categories. The accuracy of the coding was checked by another analyst, who independently coded a random sample of transcript pages for each question. The accuracy of the content coding was sufficiently high for the purposes of this report. Finally, the number of comments in each category and subcategory was tallied, and the resulting summary of the comments was verified by a second analyst. A quantitative analysis was conducted on the data from the questionnaires. Our review focused exclusively on the activities of the patent organization and not those of the trademark organization. We conducted our review from June 2004 through May 2005 in accordance with generally accepted government auditing standards. Comments from the U.S. Patent and Trademark Office Progress on Strategic Plan Initiatives USPTO issued its 21st Century Strategic Plan in June 2002, then updated and rereleased it in February 2003. The Strategic Plan responds to the Government Performance and Results Act and direction from Congress. The plan is centered on three themes—capability, productivity, and agility. 
Strategic Theme: Capability

To become a more capable organization that enhances quality through workforce and process improvements, USPTO developed initiatives to improve the skills of its workforce (transformation), enhance its quality assurance program (quality), and improve processes through rule changes or proposed legislative changes (legislative/rules changes).

Strategic Theme: Productivity

The agency’s productivity initiatives are designed to reduce the time needed to process patent applications by offering a range of examination options to applicants, reducing the responsibilities examiners have for searches of literature related to applications (pendency and accelerated examination), and creating financial incentives for applicants as well as an improved postgrant review process (shared responsibility).

Strategic Theme: Agility

To become an organization that responds quickly and efficiently to changes in the economy, the marketplace, and the nature and size of workloads, USPTO developed initiatives to implement electronic beginning-to-end processing of patents (e-government), increase reliance on the private sector or other intellectual property offices (flexibility), and streamline international patent systems and strengthen protection of patent rights as well as share search results with other international patent offices (global development).

GAO Contact and Staff Acknowledgments

In addition to the contact named above, Cheryl Williams, Vondalee R. Hunt, Lynn Musser, Cynthia Norris, and Ilga Semeiks made significant contributions to this report. Allen Chen, Amy Dingler, Omari Norman, Don Pless, and Greg Wilmoth also contributed to this report.
The U.S. Patent and Trademark Office (USPTO) is responsible for issuing U.S. patents that protect new ideas and investments in innovation and creativity. Recent increases in both the complexity and volume of patent applications have increased the time it takes to process patents and have raised concerns about the validity of the patents USPTO issues. Adding to these challenges is the difficulty that USPTO has had attracting and retaining qualified staff. In this context, GAO was asked to obtain information about USPTO's patent organization. Specifically, GAO reviewed (1) overall progress in implementing the initiatives in its strategic plan; (2) efforts to attract and retain a qualified patent workforce; and (3) remaining challenges, if any, in attracting and retaining a qualified patent workforce. USPTO has made more progress in implementing its strategic plan initiatives to increase the agency's capability than in implementing initiatives aimed at decreasing patent pendency. USPTO has fully or partially implemented all 23 capability initiatives that focus on improving the skills of employees, enhancing quality assurance, and altering the patent system through changes in existing laws or regulations. In contrast, the agency has partially or fully implemented only 8 of the 15 initiatives aimed at reducing pendency. Lack of funding was cited as the primary reason for not implementing these initiatives. With passage of legislation in December 2004 to increase fees available to USPTO for the next two years, the agency is re-evaluating the feasibility of implementing some of these initiatives. Since 2000, USPTO has taken steps intended to help attract and retain a qualified patent examination workforce, such as enhancing its recruiting efforts and using many of the human capital benefits available under federal personnel regulations.
However, it is too soon to determine the long-term success of the agency's recruiting efforts because they have been in place only a short time and have not been consistently sustained due to budgetary constraints. Long-term uncertainty about USPTO's hiring and retention success is also due to the unknown impact of the economy. In the past, when the economy was doing well, the agency had more difficulty in recruiting and retaining the staff it needed. USPTO faces three long-standing challenges that could also undermine its efforts to retain a qualified workforce: the lack of an effective strategy to communicate and collaborate with examiners; outdated assumptions in the production quotas it uses to reward examiners; and the lack of required ongoing technical training for examiners. According to patent examiners, the lack of communication and a collaborative work environment has resulted in low morale and an atmosphere of distrust that is exacerbated by the contentious relationship between management and union officials. Also, managers and examiners have differing opinions on the need to update the monetary award system that is based on assumptions that were established in 1976. As a result, examiners told us they have to contend with a highly stressful work environment and work voluntary overtime to meet their assigned quotas. Similarly, managers and examiners disagree on the need for required ongoing technical training. Examiners said they need this training to keep current in their technical fields, while managers believe that reviewing patent applications is the best way for examiners to remain current.
Background The Uruguay Round, the seventh in a series of multilateral negotiations known as “rounds,” established the World Trade Organization (WTO) on January 1, 1995, as the successor to the General Agreement on Tariffs and Trade (GATT). This round resulted in over a dozen separate agreements that, among others, covered intellectual property rights and trade in services and strengthened existing disciplines on agriculture. It also established a stronger dispute settlement process than had been available under the GATT. Moreover, unlike previous trade rounds, the Uruguay Round agreements were part of a “single undertaking,” meaning that all GATT members had to agree to all their provisions, with no discretion as to which accords they wished to accept. The WTO administers rules for international trade, provides a mechanism for settling disputes, and provides a forum for conducting trade negotiations. WTO membership has increased since its creation to 144 members, up from 90 GATT members when the Uruguay Round was launched in 1986. WTO membership is also diverse in terms of economic development, consisting of all developed countries and a large percentage of developing countries, from the more advanced to the very poor. Specifically, while the WTO has no formal definition of a “developing country,” the World Bank classifies 105 current WTO members, or approximately 73 percent, as developing countries. In addition, 30 members, or 21 percent of the total, are designated as “least developed countries.” The ministerial conference is the highest decisionmaking authority in the WTO, convenes at least once every 2 years, and consists of trade ministers from all WTO member countries. The WTO General Council, made up of representatives from all WTO member governments, implements decisions adopted by the members in between ministerial conferences. 
Decisionmaking in the WTO is largely based on consensus among its members rather than on a majority of member votes as it is in many other international organizations. Four ministerial conferences have taken place since the WTO’s creation. Prior to the third ministerial conference, held in Seattle in December 1999, WTO members announced their intention to launch a new round of multilateral trade negotiations. However, the Seattle conference ended without launching negotiations. Following four days of intensive talks, the conference was suspended without issuing a ministerial declaration. The failure to launch a new round in Seattle resulted from a combination of circumstances, including a lack of agreement among members on the issues to be discussed in a new round, the sensitivity and complexity of trade issues under consideration, and inherent difficulties in the negotiation process. Ultimately, at the fourth ministerial conference in Doha, Qatar, in November 2001, WTO members were able to reach consensus on a new negotiating effort, officially called the Doha Development Agenda. The Doha Declaration sets forth a work program for the negotiations to be concluded by January 1, 2005. Figure 1 illustrates the organizational structure that the WTO has established to conduct the negotiations mandated by the Doha Declaration. The new negotiating effort will encompass agriculture and trade in services, two critical areas where negotiations have been ongoing since 2000, under an existing Uruguay Round mandate. Special sessions of standing WTO bodies will also address the relationship between trade and the environment; attempt to clarify and improve provisions of the WTO Dispute Settlement Understanding; and negotiate the establishment of a multilateral notification and registration system for geographical indications for wines and spirits. 
The WTO also called for a special session of the Trade and Development Committee to identify and attempt to strengthen special and differential treatment provisions for developing countries. The status of this body has been the subject of debate and is unclear. While some countries consider it to be a legitimate negotiating group, the United States and several other countries contend that it belongs under the general Doha work program. In addition to the special sessions, two new negotiating groups have been created to review and propose revisions to WTO disciplines dealing with trade rules and to recommend cuts in tariffs and other steps to facilitate market access for nonagricultural products. Chairpersons of special sessions and new negotiating groups have been appointed by the WTO membership to serve up to the fifth ministerial conference in 2003, at which time all appointments will be reviewed. The Doha Declaration also mandated negotiations on numerous issues related to difficulties that developing countries face in implementing their Uruguay Round commitments. However, the declaration did not create a separate negotiating group for these implementation issues. Several Factors Led to Successful Doha Launch Several factors contributed to the WTO’s successful launch of new trade negotiations, which was a difficult feat, considering the lingering uncertainty about launching such a round among WTO members since their failure to do so 2 years earlier in Seattle. First, a strong relationship between the United States and the European Union, particularly the U.S. Trade Representative (USTR) and the European Union (EU) Commissioner for Trade, helped forge consensus among other WTO members. Second, WTO members used an effective strategy to prepare for the Doha ministerial conference. Third, two key developments occurring during the Doha ministerial greatly contributed to the developing countries’, particularly African countries’, willingness to launch negotiations. 
Finally, the tragic events of September 11th helped galvanize WTO members to show their support for a strong and healthy worldwide trading system. U.S.-EU Relationship Was Critical The strong support on the part of the United States and the European Union for the negotiations, bolstered by the positive relationship between the U.S. Trade Representative and the EU Commissioner for Trade, helped bring together other WTO members on specific issues and on the overall goal of launching a new set of global trade negotiations. WTO members did not agree to launch a new set of trade negotiations at the 1999 ministerial conference in Seattle due, in part, to a lack of consensus among major trading countries, especially the United States and the European Union. In Doha, by contrast, the United States and the European Union, while not agreeing on all of the key issues, were united behind a common goal of launching a new round. According to some member country representatives, the long-standing friendship between the U.S. Trade Representative and the EU Trade Commissioner, dating back to the 1980s, was a positive force in launching the negotiations. They noted that the two officials used their personal rapport to garner support for an agenda for these negotiations. Examples cited included their efforts to build bridges with developing countries and to work together to devise compromise language on trade and the environment, which allowed the European Union to make crucial concessions in agriculture. Several WTO member country representatives told us that agreement between the United States and the European Union was essential to forging a consensus to launch negotiations. Preparation Strategy Was Effective According to U.S. 
officials and foreign government representatives, a key strategy for achieving consensus to launch negotiations was holding two informal “mini-ministerials,” or informal meetings, among a cross section of developed and developing country members, in the months before the Doha ministerial conference. According to U.S. and foreign officials, these meetings helped to rebuild personal relationships among ministers that were crucial to overcoming the negative atmosphere evidenced by the WTO’s failure to launch negotiations in Seattle in 1999. Further, throughout 2001, trade ministers from major trading countries met individually with other ministers, especially from developing countries. In addition, developing countries, including many in Africa, participated in a network of informal meetings to prepare for the ministerial. Some developing country representatives said that in contrast to preparations before the Seattle ministerial, these efforts showed willingness on the part of developed countries to better understand and address their concerns. For example, several foreign representatives cited the fact that the U.S. Trade Representative spent a great deal of time before the Doha ministerial listening to developing countries’ views, particularly those of African nations. They added that this was a positive step in relations between developed and developing WTO member countries. According to U.S. and WTO officials and foreign representatives, this effort was significant, as negotiations could not have been launched without the developing countries’ support. Another preparatory strategy that helped WTO members reach consensus on an agenda for new negotiations was the nature of the text used as the basis for discussion at the ministerial meeting. Before the Doha ministerial, the General Council chair produced his own text based on input he received during exhaustive meetings with small groups of WTO members. 
This text reflected the consensus views of members on the issues, based on the chair’s own best judgment. In contrast, before the Seattle ministerial conference, specific proposals from various member countries on each issue drove the discussions on what should be included in the draft declaration. This led to a lengthy (32-page) draft declaration text that was a compendium of member country positions, including nearly 400 bracketed items, indicating disagreement among members. Key Developments in Doha Won Developing Country Support WTO officials and foreign representatives said that two developments at the Doha ministerial conference were crucial to gaining developing countries’ support to launch a new set of negotiations. First, the WTO granted the European Union a waiver from WTO’s most-favored-nation clause (MFN) to continue providing preferential market access to African, Caribbean, and Pacific (ACP) countries through its Cotonou Agreement, which was signed in 2000. The Cotonou Agreement will be valid for 20 years, during which time the European Union and ACP countries can enter into additional economic integration agreements, progressively removing barriers to trade. In addition, the agreement includes a pledge of 13.5 billion euros in development assistance to ACP countries for the initial 5-year period. The Secretary General of the ACP group has concluded that the Cotonou Agreement will help integrate ACP countries into the world economy by reinforcing regional integration, thus helping them to benefit from globalization. Second, WTO officials and U.S. and foreign country representatives said the adoption at Doha of a declaration clarifying the relationship between the WTO’s Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and public health was critical to gaining the support of many African nations.
The Declaration on the TRIPS Agreement and Public Health states that the TRIPS agreement “does not and should not prevent Members from taking measures to protect public health.” Prior to the ministerial, African nations, nongovernmental organizations, and others had argued that TRIPS could prevent developing countries from gaining access to medicines needed to fight HIV/AIDS, tuberculosis, malaria, and other epidemics (see app. I for a further discussion of this declaration). September 11th Tragedy Galvanized Support for Negotiations The final major factor that U.S. and WTO officials and member country representatives cited as contributing to the ministerial’s success was the tragic events of September 11th. They said that after September 11th, many WTO members felt that it was essential for there to be a successful major international meeting to demonstrate the strength of the international community. Further, given the potential impact of the attacks on the world economy, which was already in recession, the ministerial would be an important barometer of the strength of the multilateral trading system. The combination of these factors led one official to remark that after September 11th, the ministerial had to succeed because “the price of failure was too high.” Early Decisions on Key Sensitive Issues Vital to Progress in Round The most important interim deadlines in the negotiations involve early key decisions on agricultural trade and the Singapore issues, including competition and investment in particular. Meeting the interim deadlines on these issues will be crucial to achieving overall progress in the negotiations and will provide a good indication of the ultimate prospects for the negotiations’ successful conclusion. The following section discusses the nature of these key decisions and analyzes the importance of these particular issues and what makes them sensitive and difficult to negotiate. 
Figure 2 shows the main interim deadlines and key events in the negotiations through the fifth ministerial conference in 2003. Meeting Interim Deadline on Agricultural Trade Crucial but Difficult According to several WTO member country representatives and senior WTO officials, whether WTO members meet the March 31, 2003, interim deadline for establishing the agricultural modalities (that is, numerical targets, timetables, and formulas for countries’ commitments) specified in the Doha Declaration will be a crucial indicator of the likelihood of success in the overall negotiations. This is attributable to the extreme importance of agricultural trade reform for a large number of WTO member countries, many of which want to see progress in the agriculture talks before coming to agreement on other issues. However, meeting this deadline will be difficult, because it necessitates reaching agreement on areas of long-standing dispute, particularly with regard to agricultural export subsidies and domestic support payments, which have generated strong domestic constituency concerns. Specifically, it is the negotiators’ intention to agree on a target and timetable for phasing out agricultural export subsidies. They will also need to agree on a definition of the types of domestic agricultural support payments that should be considered trade-distorting. Agricultural modalities also include devising a formula for WTO members to reduce tariffs, which, while important, is less controversial. Achieving agricultural trade reform in the negotiations is critical because improving access to countries’ agricultural markets is a major priority for a wide range of WTO member countries, including major agricultural exporters such as Canada, Australia, and Brazil, who want to expand their overseas markets.
Proponents also include many developing countries, among them Colombia, the Philippines, South Africa, and Thailand, who wish to take advantage of their natural competitive advantage in the agricultural sector. Agricultural reform is also critical to the United States, whose farmers have faced a finite domestic market over the past several years, falling international commodity prices, and a strong dollar that had effectively inflated the cost of their exports to foreign markets. Because agricultural trade is such a top priority to so many WTO members, the chair of the special session of the Committee on Agriculture has called it a key to completion of the negotiations, stating that without progress in agriculture, there will be no progress in the overall round. Meeting the March 2003 interim deadline on modalities for export subsidies and domestic support payments will be difficult, as these areas have been particularly contentious, generating intense domestic constituency concerns in the European Union, Japan, and the United States. For the European Union, the goals set out in the Doha Declaration for phasing out export subsidies present a serious challenge. Over the years, European farmers have come to rely on the generous support and subsidies provided under the Common Agricultural Policy (CAP). In fact, the CAP has become the single biggest expenditure in the EU budget, representing approximately 42 percent of its budget. In recent years, the European Union has begun to reform the CAP and lower costs. Nevertheless, EU farmers in certain countries, such as France, wield considerable political power and could challenge the positions taken by governments that would agree to eliminate export subsidies. Similarly, in Japan, accepting substantial cuts in domestic support for agriculture and reduced tariffs on agricultural imports could jeopardize farmers’ support for the current government. 
Meanwhile, in May 2002, the United States enacted the Farm Security and Rural Investment Act of 2002 (P.L. 107-171), which will raise U.S. spending on domestic support for agriculture by about $73.5 billion over the next 10 years. Many WTO member countries are aggressively pursuing the goal of reducing and eventually eliminating export subsidies, including the 18-country coalition known as the Cairns Group, which accounts for one-third of the world’s agricultural exports, as well as India, Mexico, Nigeria, and the United States. The European Union is the main target of the Doha mandate regarding export subsidies, as its subsidies far exceed those of other countries. As shown in figure 3, in 1998 the European Union was responsible for about 90 percent ($6.6 billion) of all agricultural export subsidies used worldwide. In 2000, EU expenditures on export subsidies were about 170 times the amount paid by the United States. The European Union has stated that it cannot agree to eliminate export subsidies completely. The European Union and other WTO member countries maintain that disciplining export subsidies without addressing other programs affecting export competition, such as U.S. export credit guarantees, would be discriminatory. Like export subsidies, the Doha mandate of reducing domestic support payments to farmers is highly controversial. The Cairns Group maintains that only non-trade-distorting support payments, such as for pest and disease control measures, should be allowed. Non-trade-distorting domestic support programs are government funded and typically are not directed at particular products or related to production levels or prices. Although Japan and the European Union are willing to reduce support payments to their farmers, they want to maintain some types of payments linked to production.
EU officials, for example, argue that certain payments to farmers, based on measurements such as acreage or number of animals, should continue to be allowed, provided they are tied to limits on production and serve worthwhile goals. These goals include stewardship of the rural environment and more humane treatment of farm animals. The United States has called for simplifying the rules for domestic support and establishing a ceiling on trade-distorting support that applies proportionately to all countries. As shown in figure 4, the European Union has provided much higher levels of support to its farmers than have other WTO members, and it also enjoys a much higher allowable ceiling for such support under its Uruguay Round commitments. A third aspect of the decision on modalities is to devise a formula for reducing tariffs on agricultural products. While less contentious than attempts to reach agreement on modalities for export subsidies and domestic support payments, agreeing on how to reduce agricultural tariffs is important, particularly to the United States. According to the U.S. Undersecretary of Agriculture, the average tariff for U.S. agricultural products is 12 percent, while the average global tariffs for food and agricultural products worldwide are 62 percent. Japan’s tariffs average 59 percent, while the Cairns Group’s and the EU’s are 30 percent. Tariff reductions are a sensitive issue for many developing countries. Some developing countries are reluctant to bring tariffs down to a level that might compromise the livelihood of significant segments of their populations who depend on agricultural production. Once WTO members agree on agricultural tariff modalities, they must submit tariff schedules detailing their proposed new tariff levels by the fifth ministerial conference in September 2003. Accomplishing this task 6 months after the March modalities deadline could be difficult for some WTO members.
For example, some developing countries have limited staff, experience, and resources. In addition, the European Union must create tariff schedules not only for its current members but also for the 10 countries that are candidates to become EU members. Meeting Interim Deadline on Singapore Issues Is Highly Contentious A second critical decision point in the Doha Development Agenda involves whether negotiations should proceed on what are generally referred to as the Singapore issues, which include issues related to investment, competition, trade facilitation, and transparency in government procurement. Because of the extreme sensitivity of these issues, particularly regarding competition and investment, WTO members decided in Doha to delay the start of formal negotiations on these topics until they could make certain decisions at the fifth ministerial conference in 2003. These decisions are of key importance, because they, along with progress in agriculture, are likely to drive the rest of the negotiations. EU officials insist that moving forward with negotiations on competition and investment is essential to the successful conclusion of the overall negotiations, while developing countries have consistently opposed including them in the talks. Basic disagreement among WTO members on the meaning of the language in the Doha Declaration is likely to make the upcoming decision on the Singapore issues difficult, particularly in the areas of trade and investment and trade and competition. The language in the declaration is ambiguous on this issue. It states that for each of the four areas, negotiations should take place after the fifth ministerial conference “on the basis of a decision to be taken, by explicit consensus, at that Session on modalities of negotiations.” In the view of developed countries, including the United States and the European Union, the declaration calls for negotiations on these issues to be launched after the fifth ministerial in September 2003. 
However, India, with the support of some other developing countries, maintains that no such consensus was reached at Doha. In Doha, India held up consensus until it obtained a statement from the ministerial conference chair that each WTO member would have “the right to take a position on modalities that would prevent negotiations from proceeding …until that member is prepared to join in an explicit consensus.” The general view among WTO member representatives in Geneva is that the Doha Declaration does not mandate that negotiations on the Singapore issues be launched after the fifth ministerial conference. Instead, a decision on whether to proceed with these negotiations will have to be made at that 2003 ministerial meeting. The main controversy surrounding the Singapore issues deals with the EU’s strong advocacy for negotiations on trade and investment and trade and competition. Japan and South Korea also support negotiating these issues. The European Union argues that investment rules based on the principles of national treatment, MFN treatment, transparency, and the right to establish businesses overseas are necessary to contribute to a stable and predictable global business climate for foreign direct investment. Regarding trade and competition policy, the European Union advocates incorporating basic principles into domestic law, including nondiscrimination, transparency, due process, judicial review, a ban on certain cartels, and sufficient enforcement powers. Many developing countries, on the other hand, led by a group of countries known as the Like-Minded Group, have consistently expressed their strong opposition to the inclusion of the Singapore issues in the negotiating agenda. For example, India has acted to prevent any discussions in the applicable WTO working groups on these issues. India is concerned that any discussion might be construed as a negotiation, rather than as an effort to clarify the issues. 
India argued that undertaking new obligations in these areas would present too great a burden on developing countries. In fact, many developing countries maintain that they are still having difficulty implementing their Uruguay Round obligations (see app. I for a discussion of implementation issues). Some developing countries want to see progress on other issues, particularly improving the adequacy of the WTO’s technical assistance and capacity building efforts, before agreeing to launch negotiations on the Singapore issues. WTO Negotiations Face Several Overarching Challenges The overarching challenges facing the WTO in the negotiations launched in Doha will be (1) building consensus within its large and diverse membership, (2) overcoming various organizational difficulties, and (3) avoiding tensions generated by controversial events occurring outside the negotiations. Diverse Membership Makes Consensus More Difficult The WTO’s large and diverse membership makes reaching agreement by consensus more difficult, particularly between developed and developing country members. Developing countries are taking on a more active role in these negotiations as compared with those under the Uruguay Round, and in some cases they express different views about trade liberalization. For instance, some developing country members are concerned that commitments they make in the current negotiations could put them at a disadvantage as compared with their developed country counterparts. They argue that they should not be held to the same standard of trade liberalization as the developed country members. Balancing these different views will be a challenge in the negotiations. Further, China’s recent membership in the WTO could affect the dynamics of the organization, partly because of the size of its economy. 
WTO members include high-income countries like the United States, which alone accounts for about 13 percent of world trade; large, low-income countries like China and India, each having populations of over 1 billion; and members like Dominica, with fewer than 71,000 inhabitants, and Mongolia, which accounts for less than 0.01 percent of world trade. As mentioned earlier, the World Bank classifies 105 current WTO members, or approximately 73 percent, as developing countries, and 30 members, or about 21 percent of the total, as least developed countries. One WTO official pointed out that 80 percent of the WTO membership represents only 1.7 percent of world trade. Developing countries are taking a more active role in the current negotiations as compared with the Uruguay Round, and they will scrutinize more closely the commitments they agree to make. Notably, WTO members are calling the current negotiations the Doha Development Agenda, symbolizing the special emphasis on meeting the needs of developing countries. According to a WTO official and a developing country representative in Geneva, many developing country members have maintained that they had not fully realized that, under the Uruguay Round, all WTO members were obligated to implement the complete package of agreements and were to be held accountable by the WTO’s dispute settlement system. (Prior to the completion of the Uruguay Round, parties to GATT could opt out of agreements if they so chose.) One WTO official believed that, because WTO developing country members now better understand the WTO dispute settlement process, they are less likely to accept vague language in order to reach a consensus, as compared with the previous round. He said that this could make it harder to reach consensus, as negotiators demand more clarity. Finally, some developing country members have had difficulty implementing their Uruguay Round obligations.
Some developing country members have views on trade liberalization that are different from those of their developed country counterparts. Although many developing countries are willing to liberalize their markets to gain concessions in areas in which they are most competitive, particularly in agricultural trade, others maintain that they need special exceptions to trade liberalization to help them develop. For example, some of these developing country members want to opt out of lowering tariffs for certain products, reevaluate existing tariff bindings for food security reasons, or continue to use export subsidies to promote development. The WTO’s challenge lies in the fact that the Doha Declaration calls for strengthening the ability of developing countries to argue for exceptions to trade liberalization through the WTO’s “special and differential treatment” provisions. By the end of July 2002, the Committee on Trade and Development was to identify those special and differential treatment provisions that are mandatory and those that are nonbinding in character, and to examine ways in which these provisions could be made more precise and effective. On July 24, 2002, the committee recommended that the General Council agree to set up a monitoring mechanism, whose details would be worked out later. It further asked the General Council to approve extending until December 31, 2002, the committee’s deadline for making clear recommendations for decisions on special and differential treatment. According to a WTO official, this exercise will be difficult, given the potential for such exceptions to undermine the overall negotiations’ goal to increase trade liberalization. China’s entry as a new member of the WTO in 2001 is another sign of the diversity of the WTO’s membership. 
Because China is a large economy and a significant trader, as well as a developing country under World Bank standards, the role it chooses to take in the WTO could affect the dynamics of the organization and therefore the negotiations. In fact, one high-level WTO official noted that China’s new mission to the WTO in Geneva is the fourth largest among the membership. WTO officials and member country representatives with whom we spoke generally believed that China would act in its own national interest and not necessarily side in all cases with any one bloc of WTO member countries. One WTO representative from a developing country predicted that China would form alliances where it made sense for China. Many WTO country representatives predicted that China would be active in the negotiations and would be likely to support further trade liberalization by other WTO members. However, it has been reported that Chinese officials have recently taken the position that China should not be expected to make significant concessions in the negotiations given the substantial commitments it has already made in joining the WTO, particularly in the area of market access for nonagricultural products. In addition, one WTO country representative predicted that China’s membership might have an impact on the outcome of the WTO negotiations on trade remedies. Specifically, WTO member countries may be less inclined to weaken WTO antidumping provisions because of the possibility of China’s dumping products on their markets.

WTO Faces Organizational Challenges

The WTO will face several organizational challenges during the negotiations. First, the WTO will face pressure to meet many developing country members’ high expectations for receiving technical assistance to help them fully participate in and benefit from the negotiations. Second, some aspects of the WTO’s guiding principles for conducting the negotiations have the potential for slowing progress in the negotiations.
Finally, uncertainties associated with the change in WTO leadership could affect the negotiations, as a new WTO Director General took office in September 2002.

High Expectations for Technical Assistance Must Be Met

The WTO’s most difficult organizational task will be to meet some developing country members’ high expectations for receiving technical assistance mandated in the Doha Declaration. While the WTO has recently been allocated additional funds to meet these needs, a WTO official suggested that the WTO might lack the staff and resources to effectively utilize the funds. In addition, according to foreign country and WTO officials, some developing countries are expecting the WTO to expand its activities not only to play its traditional role of explaining WTO agreements but also to provide broader assistance, such as helping countries to increase their capacity to export. The latter involves providing development assistance, a role for which the WTO lacks the resources and expertise. Many developing countries have indicated that the adequacy of these efforts will affect their willingness to accept many of the developed countries’ priorities in the negotiations, according to WTO and foreign government officials. The WTO’s delivery of technical assistance to developing countries has become critical to the successful outcome of the Doha Development Agenda. The Doha Declaration calls for firm commitments to provide technical assistance and capacity building, which it identifies as “core elements of the development dimension of the multilateral trading system.” The WTO Director General has repeatedly highlighted the importance of these commitments, stating that further progress in trade liberalization is conditional on capacity building. The WTO is also highlighting the importance of coordination among other international organizations in providing trade-related development assistance to developing countries, particularly least developed countries.
The focus of these efforts is through an enhanced Integrated Framework for Trade-Related Technical Assistance to the Least Developed Countries. The six core international agencies of the integrated framework, which include the WTO, issued a joint communiqué in February 2002 committing them to helping least developed countries and low-income economies to “stimulate supply-side responses to improve market access opportunities, diversify their production and export base, and enhance their trade-supporting institutions.” The Doha Declaration directs the WTO Director General to provide an interim report to the General Council in December 2002 and a full report to the fifth ministerial conference in 2003 on the implementation and adequacy of commitments made to provide technical assistance and capacity building to developing countries, including WTO efforts to enhance the integrated framework on behalf of least developed countries. However, despite a significant increase in funding from the WTO and commitments for a coordinated effort from other international organizations to expand technical assistance, several WTO and foreign government officials were concerned that developing countries’ expectations for these efforts may be unrealistic. These officials worried that developing countries may view such assistance as a condition to their agreeing to move forward in the negotiations. According to both a WTO official and a foreign government representative in Geneva, some developing countries expect to obtain assistance from the WTO with infrastructure projects to facilitate their capacity to export. However, the Deputy U.S. Trade Representative stated in April 2002 that the WTO’s mandate for providing technical assistance and capacity building relates “strictly to assisting these countries in negotiations and does not require broader development aid.” He further stated that the extent of these activities and what they should accomplish must soon be clarified.
Some WTO member country representatives whom we interviewed agreed with the U.S. view that this clarification was essential so that developing countries cannot claim later that technical assistance was inadequate to obtain their willingness to participate further in the negotiations. This will be one of the critical issues to be addressed at the fifth ministerial conference in Mexico in September 2003.

Principles Guiding the Negotiations Could Slow Progress

In January 2002 the General Council established “principles and practices,” or guidelines, for conducting the negotiations. Certain aspects of these guidelines were written specifically to accommodate many of the concerns of developing countries. However, some of these guidelines could create delays in the negotiations. For example, a WTO official highlighted a requirement that chairs of negotiating bodies include the different views of members in draft texts if no consensus exists. This is partly because several developing countries claimed that the General Council chair’s draft Doha Declaration did not reflect their consistent opposition to negotiations on the Singapore issues. Consequently, these countries insisted on this requirement in the principles and practices to prevent drafts from being produced that do not reflect their positions. However, this limitation could make it harder for chairpersons to broker a final negotiating agreement, if it produces the type of unwieldy text that prevented consensus at the Seattle ministerial. Another element of the principles and practices that could delay the progress of the negotiations is guidance that only one negotiating body should meet at a given time, according to a WTO official. The objective of this guideline is to structure the talks so that small delegations would be better able to participate. Developing countries have stressed the importance of enabling greater inclusion of all members in the negotiating process.
Although this is an important goal, it limits the total number of meetings that could possibly take place over the course of the negotiations. A chair of one of the negotiating groups believes chairs still have the flexibility to schedule overlapping meetings if absolutely necessary to move the talks along. However, he suggested that some WTO members who do not support further trade liberalization could use the principles and practices as a means of restricting the flexibility of the negotiating bodies.

Transition to New WTO Director General Presents Some Uncertainties

A new WTO Director General from Thailand, Dr. Supachai Panitchpakdi, began a 3-year term in September 2002, replacing the former Director General, Mike Moore, from New Zealand. Before the Seattle ministerial in 1999, after failing to reach consensus on one candidate, WTO members selected both men to serve consecutive 3-year terms. While this latest transition in leadership could very well be a smooth one, any change in leadership could potentially affect the negotiations. The new Director General has come on board in the middle of difficult negotiations. In addition, the terms of the four current Deputy Directors General expire at the end of September, and four new deputies have been named to replace them as of October 1st. While the WTO is largely a member-driven organization, the Director General and his or her deputies can play an important role in facilitating consensus and organizing work so as to ensure maximum progress in the negotiations. Significantly, as the first Director General from a developing country, the new leader may face additional pressure to address the concerns of developing country members. For example, the new Director General will have to try to deliver on promises made regarding technical assistance.

Outside Events Could Affect the Negotiations

Events not directly part of the negotiations could also affect their progress.
First, any ongoing contentious WTO dispute settlement cases concerning issues being negotiated could negatively affect negotiators’ willingness to reach agreement. For example, several WTO members have filed dispute settlement cases against the United States in reaction to its decision to impose higher tariffs on imported steel. Another event that may undermine the negotiations is concern on the part of many WTO members about the increase in the U.S. agricultural domestic support payments called for in the U.S. Farm Security and Rural Investment Act of 2002, mentioned earlier. This concern could affect U.S. credibility in persuading other countries to reduce such payments in the negotiations. Finally, more optimistically, the enactment of the Trade Act of 2002 (P.L. 107-210) this August, granting the U.S. President trade promotion authority, is likely to provide positive momentum in the negotiations.

Dispute over U.S. Imposition of Tariffs on Steel Imports

Some experts believe that tensions caused by a recent WTO dispute settlement case regarding increased U.S. tariffs on steel imports could diminish the level of trust and cooperation among negotiators that had existed at the time of the Doha ministerial conference. On March 5, 2002, President Bush agreed to impose tariffs of up to 30 percent on certain steel imports. This was in response to a U.S. International Trade Commission finding that the U.S. steel industry had been harmed by substantially increased imports of steel products. As a result of the tariff increases, the European Union, China, Japan, Korea, Brazil, and other WTO members have entered into consultations with U.S. officials under WTO dispute settlement procedures. On June 3, 2002, the WTO established a dispute settlement panel to hear concerns that these countries have about U.S. tariffs on steel imports.
Some countries have indicated that they may also introduce tariffs to guard against what they perceive as a surplus of steel imports flooding their markets, as a result of the U.S. action. China has already enacted a tariff-rate quota on 9 categories of steel products to prevent a possible surge of steel imports resulting from U.S. actions. Similarly, the European Union has imposed a provisional tariff-rate quota on 15 categories of steel to prevent what the European Union describes as a potential flood of diverted steel coming into the EU market. Moreover, the European Union is considering imposing tariffs on imports from the United States amounting to about $335 million to offset potential losses as a result of increased U.S. steel tariffs. The U.S. Trade Representative has responded to WTO members’ criticism of U.S. tariffs on steel imports by emphasizing that “the WTO expressly permits safeguard measures to allow an industry injured by imports temporary relief and time to restructure.” He also pointed out that Japan, Korea, Brazil, and others have used similar safeguards in the past or are using them today. Further, he noted that in the 1980s the European Union and its member states provided more than $50 billion in government subsidies to restructure the European steel industry.

WTO Member Concerns about U.S. Farm Legislation

Several WTO members have expressed concern that the U.S. farm legislation passed earlier this year calls for raising domestic support payments to U.S. farmers over the next 10 years. They maintain that it undermines one of the main objectives set out in the Doha Declaration, that of reducing trade-distorting domestic support. The EU Commissioner for Agriculture has severely criticized the farm legislation, claiming that it will undermine ongoing multilateral efforts to reform global farm trade. Specifically, the Commissioner criticized not only the increase in domestic payments but also their potential to distort trade.
For example, he maintained that one type of support payment in the legislation, termed “counter-cyclical payments,” would shield U.S. farmers from low agricultural prices and would result in overproduction. Similarly, Australian government officials have declared that because the act raises domestic support payments and would increase payments to U.S. farmers should commodity prices fall, the United States has effectively relinquished its leadership in the WTO agricultural talks, even though this leadership has historically been crucial to obtaining agricultural concessions from the European Union. Further, the Canadian Minister of Agriculture and Agri-Food called the farm legislation, particularly its price-based support payments, a serious blow to U.S. credibility in the WTO negotiations. The United States has countered that it is fully committed to the negotiations and will be a strong advocate for liberalizing trade in food and agricultural products. USTR officials stated that the new farm legislation supports U.S. farmers while maintaining U.S. obligations under the WTO. The U.S. Secretary of Agriculture said that the legislation does nothing to change the resolve of the United States to negotiate a very aggressive result in the Doha Development Agenda negotiations. Moreover, the U.S. Undersecretary of Agriculture emphasized that the U.S. domestic support ceiling allowable under the WTO is low relative to those of other WTO members, and pledged that the United States would not exceed its allowable ceiling. For example (as shown earlier in fig. 4), in 1998, the EU ceiling was $80.4 billion, versus the U.S. ceiling of $20.7 billion. Stressing this point, the Undersecretary cited a fail-safe mechanism in the legislation directing the Secretary of Agriculture to use, to the maximum extent practicable, a so-called circuit breaker to ensure that the United States does not exceed its WTO limit on domestic support payments to agricultural producers.

Outcome of U.S. Trade Promotion Authority Legislation

On a positive note, for the first time since 1994, in August of this year, Congress granted the President trade promotion authority. Under this authority, Congress agrees to consider legislation to implement trade agreements negotiated by the President under a streamlined procedure with mandatory deadlines, no amendments, and limited debate. Prior to the act’s passage, WTO officials and member country representatives cited the lack of trade promotion authority for the U.S. President as a significant obstacle to progress in the WTO negotiations. Following congressional approval of this authority, the WTO Director General and trade ministers from the European Union, Japan, Australia, and other countries cited it as providing a boost to the WTO negotiations. Specifically, the WTO Director General noted that it has “renewed confidence to wrap up these talks” by the 2005 deadline.

Concluding Observations

On the whole, two main factors made it possible for WTO members to reach consensus on the Doha Development Agenda: (1) the strong U.S. and EU support for the negotiations, bolstered by the positive relationship between the U.S. Trade Representative and the EU Commissioner for Trade; and (2) the strategy of deferring some important decisions to the actual negotiations. Nevertheless, the first factor may not continue to be present throughout the negotiations, and the second may actually impair WTO members’ ability to reach a viable agreement. First, if the negotiations go beyond the WTO’s target date of January 2005, there will be a new set of players who may not have the same kind of positive relationship that existed when the negotiations were launched. Specifically, in January 2005, a new EU Commission is likely to take office and thus a new EU Commissioner for Trade may be appointed. In addition, a new U.S. Trade Representative could be named depending upon the outcome of the 2004 U.S. presidential election.
Further, the current WTO Director General’s term of office expires on August 31, 2005, and the terms of his deputies are up a month later. Second, the Doha strategy of deferring several contentious decisions to the negotiations means that these decisions now need to be made, and the timetable is ambitious. For example, the outcome of the ministerial conference in September 2003 in Cancun, Mexico, will be a critical indication of whether key decisions on agricultural trade and the Singapore issues can be made. This is especially true because trade ministers have only 15 months thereafter to conclude the negotiations by January 1, 2005. During the Uruguay Round, the negotiations took 7 years to conclude, which was significantly longer than the original 4-year deadline. Further, these negotiations will take place in the context of significant organizational challenges. In particular, the larger and more diverse group of WTO developing country members is taking a more active role in the negotiations and placing greater demands on the organization than during the Uruguay Round. The WTO’s ability to successfully address these development dynamics could have a direct bearing on the progress of the negotiations. Therefore, the Doha Development Agenda is a serious test of WTO members’ ability to preserve positive relations while balancing their considerable organizational challenges and their strongly held disparate views on several politically sensitive trade issues.

Agency Comments and Our Evaluation

We requested comments on a draft of this report from the U.S. Trade Representative and from the Secretary of Agriculture, or their designees. The Assistant U.S. Trade Representative for WTO Multilateral Affairs and the Director of Multilateral Trade Negotiations, Foreign Agricultural Service, provided us on July 24th and August 8th, respectively, with technical oral comments on the draft, which we incorporated into the report.
In addition, on July 26th we obtained and incorporated into the report oral comments from the Director of the Office of Policy, Import Administration, the Department of Commerce, on sections in the draft covering trade rules regarding countries’ unfair trade practices. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 12 days after its date. At that time, we will send copies of this report to the U.S. Trade Representative, the Secretary of Agriculture, the Secretary of Commerce, and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix IV.

Additional Issues on the Doha Negotiating Agenda

This appendix provides a brief synopsis of the issues to be negotiated in the Doha Development Agenda (other than agriculture, the Singapore issues, and special and differential treatment). As seen in figure 5, these issues include World Trade Organization (WTO) rules, nonagricultural market access, services, environment, dispute settlement, and a registry for geographical indications for wines and spirits. Also included in this appendix is a discussion of intellectual property and public health as well as the implementation of existing Uruguay Round agreements, for which the WTO did not create new negotiating groups or special sessions of existing WTO bodies. For a more detailed account of all the issues in the Doha Declaration and updates on their progress, refer to the WTO Web site at http://www.wto.org.

WTO Rules

The Doha Declaration launched negotiations in several areas pertaining to WTO rules.
Specifically, WTO members agreed to negotiate on WTO rules dealing with antidumping, subsidies and countervailing measures, fisheries subsidies, and regional trade agreements.

Antidumping and Countervailing Duties

The negotiations on the trade remedies of antidumping and countervailing duties are among the most important and contentious on the Doha agenda. An increasing number of WTO members are applying antidumping and countervailing duty measures. At the same time, several members are voicing serious concerns about how their fellow members, particularly the United States, are implementing those measures, and questioning whether in some cases the measures are being applied fairly. While the United States accounted for about 20 percent of antidumping measures reported to the WTO in 2001, Canada and the European Union (EU) also made extensive use of these measures. In addition, some developing countries have become major users of antidumping measures. In 2001, for example, India actually reported more antidumping measures to the WTO than did the United States. Accordingly, before the 2001 Doha ministerial conference, countries including Brazil, Korea, and Japan called for clarifying rules on antidumping measures to prevent unjustified investigations and to remove ambiguity and excessive discretion in their implementation. Urging caution, the United States has strongly supported preserving current trade remedy laws while allowing for clarification of existing provisions. In addition, U.S. officials emphasize the need for enhanced disciplines on the way that WTO members apply trade remedy measures. According to U.S. officials, the major difficulty in these negotiations will be to find common ground between, on the one hand, WTO members who believe that clarifying and improving trade remedies requires a major overhaul of the Uruguay Round agreements on antidumping and countervailing duties and, on the other hand, U.S.
insistence that these agreements remain intact and that no changes in U.S. trade remedy laws are necessary. In the first phase of negotiations, member countries will identify and agree on the specific issues to be clarified and improved. A group of 14 countries has submitted a list of several trade remedy topics that they want to clarify and improve in these discussions, including a number on antidumping measures. Examples include the practice of excluding certain transactions from the calculation of a “dumping margin,” or establishing a clearer link between dumped imports and the resultant injury. The WTO Appellate Body has issued rulings regarding the use of these practices over the past few years, in some cases citing problems with EU and U.S. methodologies. While the United States is advocating caution and discretion in this initial phase of identifying WTO trade remedy disciplines to be clarified and improved, it is not discounting the need for improvement in some areas. In particular, the U.S. Trade Representative (USTR) has noted that any consideration of WTO rules must focus on improving the transparency of the processes of the rapidly increasing number of countries using trade remedies. Moreover, the United States is concerned about the way that other countries determine damages and trade-distorting practices. For example, one USTR official explained that he would like to see improved disciplines on what constitutes valid trade remedy investigations. The Doha Declaration does not specify any interim deadlines regarding the trade remedy negotiations. A U.S. negotiator told us that the really difficult decisions on trade remedies will likely be left to the end of the 3-year negotiating period, because so many trade-offs will be necessary to achieve any progress in this controversial area. Some labor and industry groups and some members of Congress have expressed strong opposition to weakening U.S. trade remedy laws.
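The “dumping margin” arithmetic behind these disputes can be sketched in a few lines. The example below is purely illustrative: the prices, quantities, and function name are hypothetical, and the simplified weighted-average method is not the actual methodology of any WTO member. It shows why the practice of excluding transactions sold above normal value (often called “zeroing”) can turn a zero margin into a positive one.

```python
def dumping_margin(transactions, zeroing=False):
    """Weighted-average dumping margin, as a share of total export value.

    Each transaction is (normal_value, export_price, quantity).
    With zeroing, per-transaction amounts where the export price exceeds
    normal value are set to zero instead of offsetting dumped sales.
    """
    total_export_value = sum(price * qty for _, price, qty in transactions)
    margin_total = 0.0
    for normal_value, export_price, qty in transactions:
        amount = (normal_value - export_price) * qty
        if zeroing and amount < 0:
            amount = 0.0  # exclude "negative dumping" from the sum
        margin_total += amount
    return margin_total / total_export_value

# Hypothetical sales: one dumped transaction, one sold above normal value.
sales = [(10.0, 8.0, 100),   # dumped: export price below normal value
         (10.0, 12.0, 100)]  # not dumped: export price above normal value
print(dumping_margin(sales))                # 0.0  (offsetting allowed)
print(dumping_margin(sales, zeroing=True))  # 0.1  (10% margin after zeroing)
```

In this sketch the two transactions cancel exactly when offsetting is allowed, but zeroing leaves a 10 percent margin, which is the kind of methodological difference the Appellate Body rulings mentioned above have examined.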
For example, the American Federation of Labor/Congress of Industrial Organizations (AFL-CIO) has warned that including antidumping and countervailing duties in the WTO negotiations will weaken U.S. trade remedy laws and leave American workers vulnerable to other countries’ unfair trade practices. Some U.S. businesses, such as those in the steel industry, argue that effective rules against dumping and trade-distorting subsidies are an essential element of the multilateral trading system. The importance of antidumping and countervailing duty measures was emphasized in the trade promotion authority sections of the recently passed Trade Act of 2002. Thus, the law states that one of the principal U.S. trade negotiating objectives is to preserve the ability of the United States to rigorously enforce its trade laws, including antidumping and countervailing duty law, and avoid agreements that lessen the effectiveness of domestic and international rules on unfair trade, especially dumping or subsidies. Another provision in this legislation requires that the President report to Congress, at least 180 days before entering into a trade agreement, on the range of proposals advanced in the negotiations and how those proposals relate to the negotiating objectives on trade remedy laws.

Fisheries Subsidies

As part of the mandate to negotiate WTO rules, including subsidies in general, the Doha Declaration specifically calls for negotiations to “clarify and improve” WTO disciplines on fisheries subsidies. The United States was one of the major proponents for these negotiations, which it views as a win-win opportunity to reduce trade-distorting subsidies while supporting environmental and developmental goals. Fisheries subsidies will be covered under the general heading of “subsidies,” which are part of the agenda of the Negotiating Group on Rules. The United States is one of a group of countries known as Friends of Fish.
The group believes that current trade disciplines under the WTO Agreement on Subsidies and Countervailing Measures are inadequate to address the negative effects of fisheries subsidies. They cite a range of studies that conclude that annual subsidies in the fisheries sector are between $14 and $20.5 billion. In an April 2002 paper submitted to the Negotiating Group on Rules, these countries argued that fisheries subsidies distort trade and contribute to “excessive fishing capacity,” leading to the depletion of fish stocks. They also argued that trade distortions and overcapacity in the fisheries sector “impede the sustainable development of many countries with significant fisheries resources.” The Friends of Fish paper also claims that developing countries cannot compete with subsidized, distant-water fishing fleets from wealthier countries. Additionally, nonsubsidizing countries seeking to safeguard a shared fish stock lose the extra catch gained by fishers from subsidizing countries, according to this paper. Japan and Korea, both major providers of subsidies in the fisheries sector, opposed specific reference to fisheries subsidies in the Doha Declaration. They argued that subsidies are not to blame for the depletion of fish stocks. Instead, they claimed that inadequate management regimes and uncontrolled illegal fishing were the main causes of the depletion of fish stock, and that subsidies designed to reduce capacity would actually be beneficial. The European Union, which includes a group of countries with subsidized fishing sectors, mostly in Southern Europe, is also unlikely to support efforts during the negotiations to reduce or eliminate fisheries subsidies, according to a USTR official. Efforts within the European Union to reform its fisheries policies have faced resistance from France, Spain, Italy, Portugal, Greece, and Ireland. 
Regional Trade Agreements

Regional trade agreements are arrangements through which countries may grant more favorable terms of trade to countries within a regional group than to countries outside that arrangement. These arrangements may vary in form but generally include customs unions and free trade areas. They also differ in the extent to which their preferential treatment provisions cover trade in various economic sectors and products. Regional trade agreements have proliferated during the past decade. In addition, there has been interest in clarifying WTO rules on such arrangements. Nearly all WTO members have notified the WTO of their participation in one or more such agreements. WTO members are permitted to enter into preferential trade arrangements. Nevertheless, a fundamental debate has taken place concerning the compatibility of regional trade agreements with the multilateral trading system. At Doha, WTO members agreed to negotiations to clarify and improve the current WTO provisions that apply to regional trade agreements. However, countries are divided over whether such a mandate would require revising existing WTO rules or developing additional rules. Countries are also split over whether to apply new disciplines to existing regional trade agreements. Toward the end of the Uruguay Round, regional trade agreements emerged as an issue for certain countries as they became aware of how these agreements might work to their disadvantage. Several countries began calling for a review of the impact of these arrangements on multilateral commitments under WTO agreements. The concerns these countries express vary. For example, Australia and New Zealand are concerned that regional trade agreements may lead to an uneven process of trade liberalization, because existing rules allow countries negotiating these agreements to select those sectors that they wish to liberalize. In contrast, a U.S.
official stated that Japan and Korea have opposed regional trade agreements in the past because such arrangements permit different terms of trade for certain products that originate in specified countries. India has sought reforms to WTO rules on regional trade agreements because it has felt excluded from those arrangements. And Japan, Korea, and Hong Kong have also argued that newly clarified and improved WTO rules should apply to existing agreements; otherwise, new disciplines may be irrelevant, because so many countries have already entered into regional agreements. Those WTO members that have entered into regional trade agreements generally have less of an interest in seeking further disciplines covering such agreements, which could then be applied retroactively. The United States, already a member of the North American Free Trade Agreement and currently considering other bilateral and regional free trade arrangements, including a free trade agreement encompassing the entire Western Hemisphere, has not taken a strong position in favor of reforming existing WTO provisions governing regional trade agreements. The United States has advocated more transparency in the implementation of existing obligations. Similarly, the European Union (which has notified the WTO of more than 30 regional trade agreements) has been hesitant to clarify disciplines, because it seeks flexible procedures for interpreting its existing regional agreements and future agreements with countries seeking to join the European Union. Other WTO members, such as Argentina, Brazil, Hungary, and Mexico, oppose the application of new disciplines to existing agreements. These countries argue that the application of new rules to already negotiated agreements may allow members to undertake dispute settlement cases on trade agreements that have been in existence for years. 
Nonagricultural Market Access The reduction of nonagricultural tariffs is one of the key goals of the new multilateral negotiations and has been the traditional focus of past multilateral negotiations. For example, previous multilateral negotiations have reduced trade-weighted most favored nation (MFN) tariff rates on industrial goods from an average high of 40 percent at the end of World War II to about 4 percent at the conclusion of the Uruguay Round in 1994. Still, there is considerable potential for further cuts, as tariff reductions have not been evenly distributed across countries or applied equally among all products and sectors. According to the World Bank, even though developing countries agreed to cut their tariffs in the Uruguay Round, these tariffs are still on average considerably higher than those of the developed countries. For example, the post–Uruguay Round average ad valorem “bound” rate for developed economies was 3.5 percent, as compared with 25.2 percent for developing economies, according to the World Bank. The Doha Declaration mandates negotiations aimed at reducing or, as appropriate, eliminating tariffs for nonagricultural products, including reducing or eliminating tariff peaks and tariff escalation, as well as nontariff barriers. Negotiations are to be comprehensive in that no products are to be excluded, and they must take fully into account the principle of special and differential treatment for developing countries embodied in the General Agreement on Tariffs and Trade. This includes allowing for “less than full reciprocity” in meeting tariff reduction commitments. The negotiations on market access for nonagricultural goods face several difficulties. U.S. tariffs, as well as those of its industrialized trading partners, are already very low. For example, the average U.S. trade-weighted industrial tariff rate is about 3 percent, and more than 5,000 of the 10,000 U.S. tariff lines are now duty free. 
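The "trade-weighted" averages cited throughout this discussion weight each tariff line's ad valorem rate by that line's share of total import value, so a few high tariff peaks can coexist with a low overall average when most trade enters at low or zero rates. A minimal sketch of the arithmetic follows; the product lines, values, and rates are hypothetical illustrations, not data from this report.

```python
# Illustrative sketch of a trade-weighted average tariff.
# All figures below are hypothetical examples, not data from the report.

def trade_weighted_tariff(lines):
    """lines: iterable of (import_value, ad_valorem_tariff_rate) pairs.

    Returns the average tariff rate weighted by import value.
    """
    total_value = sum(value for value, _ in lines)
    return sum(value * rate for value, rate in lines) / total_value

# Hypothetical tariff schedule: most import value is duty free,
# with one 25% "tariff peak" (e.g., an apparel line).
schedule = [
    (900.0, 0.00),   # duty-free lines dominate import value
    (80.0, 0.04),    # low industrial tariff
    (20.0, 0.25),    # tariff peak
]
avg = trade_weighted_tariff(schedule)
print(round(avg, 4))  # prints 0.0082
```

The example shows why a trade-weighted average near 1 percent can understate the peaks that exporters of particular products actually face: the 25 percent line barely moves the average because little import value enters under it.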
This leaves the United States with limited leverage to convince other countries to reduce their higher tariffs, according to USTR officials. In addition, the countries with the highest tariffs, primarily developing countries, are resistant to reducing their tariffs for several reasons. First, many already enjoy duty-free access through U.S. and EU trade preference programs, so further reductions in MFN rates will only dilute the competitive advantage they receive from these programs. Second, many developing countries are resistant to making significant reductions in their nonagricultural tariffs, or are opposed to their elimination, because they rely on tariffs as a significant source of revenue. The key points of controversy will likely surround the issues of tariff reciprocity among countries, tariff peaks, and tariff escalation. The United States views “less than full reciprocity” as, among other things, allowing longer transition periods for implementing tariff concessions, and it will consider it on a case-by-case basis depending upon the situation and the country involved. Some developing countries may argue that less than full reciprocity entitles them, under some circumstances, to avoid eliminating or reducing their tariffs. A priority of many developing countries is to reduce the tariff peaks and tariff escalation practices that industrialized countries often employ, especially in sectors in which they have the greatest competitive advantage, such as textiles and apparel. For example, the relatively high U.S. textile and apparel tariffs (out-of-quota rates for various apparel items range from 20 to 33 percent) will be a certain target for developing countries because of the size of the U.S. market, according to a USTR official. However, it will be difficult for the United States to offer concessions in this area. Indeed, the U.S. textiles industry has proposed that the level of U.S. 
textile and apparel tariffs be frozen, while Asian and other countries’ tariffs are brought down to U.S. levels. On July 19, 2002, the Negotiating Group on Market Access established a program of meetings for the negotiations on market access for nonagricultural products. As a part of this program, the participants in the negotiations will aim at achieving “a common understanding on a possible outline of modalities by the end of March 2003 with a view to reaching an agreement on those modalities by May 31, 2003.” Trade in Services The large and growing volume of international trade in services makes services liberalization an important part of the Doha negotiating agenda. Over the past 10 years, international trade in services has grown dramatically, increasing from $783 billion in 1990 to $1.4 trillion (or about 19 percent of total world trade) in 2000. At the national level, trade in services accounts for nearly 80 percent of U.S. employment and private-sector gross domestic product. U.S. exports of commercial services were $279 billion in 2000, supporting more than 4 million services and manufacturing jobs. Other major services exporters in 2000 included the United Kingdom ($100 billion), France ($81 billion), Germany ($80 billion), and Japan ($68 billion). A U.S. Trade Representative official indicated that the U.S. objectives in the services negotiations include broad participation by many countries, reduction of restrictions, building upon previous services agreements, and expansion of regulatory transparency. A U.S. services industry representative noted that issues of importance for the negotiations also include providing regulatory transparency and personnel mobility and preventing a safeguard provision for trade in services. The Doha Declaration intends that WTO members complete the work they initiated in January 2000 under the General Agreement on Trade in Services (GATS). 
The declaration calls for pursuing the GATS’ intention of increasing developing country participation in world trade and achieving a progressively higher level of liberalization in the services trade. Guidelines and procedures for the negotiations include two key principles: no sectors should be excluded from the negotiations; and negotiations can occur in bilateral, plurilateral, or multilateral (including all members) groups, mainly using a request-offer method. However, according to a WTO official, the negotiations will be conducted predominantly on a bilateral basis using the request-offer approach, with results applied to all WTO members on an MFN basis. Although it is probably one of the least controversial issues on the Doha agenda, the services negotiations face several difficulties. One challenge is to convince developing countries to open up their services sectors. According to a U.S. services industry representative, this difficulty is attributable, in part, to the fact that developed countries can offer few concessions in the services area because their barriers are already so low in many sectors. Another difficulty is that some areas of trade in services that developing countries are interested in pursuing may be difficult to negotiate. For example, the tourism sector is one of the developing countries’ best economic growth opportunities. But liberalization in this sector will be difficult to negotiate because of its linkage to other sectors such as air and road transport and financial services. Finally, services negotiations are by nature especially complex, time consuming, and resource intensive. For example, they can involve a separate set of bilateral negotiations among all 144 WTO member countries, for each and every sector; and they would involve agreeing to change domestic laws and regulations and developing and implementing new administrative procedures. 
Trade and the Environment For the first time, the WTO will begin negotiations on trade and environment issues. These negotiations are intended to clarify the relationship between WTO rules and explicit trade measures included in multilateral environmental agreements (MEA). An MEA is any agreement between three or more signatory countries concerning some aspect of environmental protection. There are approximately 200 multilateral environmental agreements in place today. In addition, the negotiations are to address procedures for exchanging information between WTO committees and MEA secretariats, and the criteria for granting observer status. They are also to liberalize trade in environmental goods and services. These negotiations will be conducted in special sessions of the existing WTO Committee on Trade and Environment. During the preparation period before the Doha ministerial, the European Union demanded that the negotiations include environmental issues. Developing countries have generally resisted any efforts to negotiate these issues in the WTO, arguing that industrialized countries might use environmental standards as a form of “green” protectionism. The United States also opposed the EU’s objectives for addressing environment in the WTO. Of the three environment-related issues that the European Union specifically sought at the Doha ministerial, only the relationship between MEAs and WTO rules became part of the negotiating agenda. The European Union also sought negotiations to clarify countries’ use of the “precautionary principle” in taking trade measures to protect environmental and human health, and of “eco-labeling.” However, the issue of precaution was left out of the Doha Declaration entirely, while eco-labeling could be added to the negotiations if members decide at the next ministerial that there is consensus to do so. According to a USTR official, additional environmental issues are unlikely to be included in the negotiations. 
Clarifying the relationship between the WTO and the MEAs is the most prominent item on the negotiating agenda related to the issue of trade and the environment. To be acceptable to WTO members, including the United States, the Doha Declaration limited the scope of these negotiations. In particular, the results of the negotiations are limited to the applicability of WTO rules to parties to an MEA. Further, negotiations shall not affect WTO rights of any WTO member that is not a party to an MEA in question. However, there has never been a challenge under the WTO dispute settlement system to trade measures taken between parties to an MEA, and, according to both a USTR official and a WTO Secretariat official, such a challenge is unlikely. Consequently, the negotiations on this issue may have a limited impact. The Doha Declaration also limits the negotiations to clarifying the relationship between WTO rules and “specific trade obligations set out in MEAs.” Members are debating whether this language excludes trade measures that are not specifically mentioned by an MEA but that are taken to pursue an MEA objective. The European Union is likely to support a broader scope than are most other members, including the United States. Only about 20 MEAs contain trade provisions. For example, the Montreal Protocol on Substances that Deplete the Ozone Layer controls the production and consumption of ozone-depleting substances such as chlorofluorocarbons. The Basel Convention, which controls trade or transportation of hazardous waste across international borders, and the Convention on International Trade in Endangered Species are other multilateral environmental agreements containing trade provisions. The Committee on Trade and Environment has been charged with reviewing the effect of environmental measures on market access and reporting to the fifth ministerial conference on the desirability of future action. 
In addition, although the mandate in the Doha Declaration for negotiations on environmental goods and services appears in the section on trade and the environment, the negotiating body handling trade in services will cover environmental services. Further, the negotiating group addressing nonagricultural market access will cover environmental goods. However, the special session of the Committee on Trade and Environment will also play a role in these negotiations, including monitoring developments in the aforementioned negotiating groups and clarifying the concept of environmental goods. The United States believes that these negotiations allow “win-win” opportunities to provide trade liberalization and promote sustainable development. Dispute Settlement The Uruguay Round agreements, which created the WTO, also established a new dispute settlement system, replacing the procedures that had gradually emerged under the GATT. Unlike the GATT, the WTO Dispute Settlement Understanding (DSU) discourages stalemate by not allowing parties to block decisions. It also establishes a standing Appellate Body, making the dispute settlement process more stable and predictable. Nevertheless, many WTO member governments have argued there is still room for improvement in the existing WTO dispute settlement system. Beginning in 1997, the WTO Dispute Settlement Body, which administers the dispute settlement process, held a series of informal discussions on the basis of proposals and issues that members had identified to improve the DSU. However, this effort did not result in a consensus for change. Subsequently, at the Doha ministerial conference, WTO members agreed to initiate formal negotiations to improve and clarify DSU provisions. Many countries want these negotiations to address the issue of conflicting time lines in WTO rules regarding when a member can retaliate against another for failing to implement a dispute settlement ruling. 
Other priorities for the United States specifically include (1) streamlining the dispute settlement process to achieve faster results by preventing countries from delaying compliance with dispute settlement rulings and (2) increasing the transparency of the proceedings of dispute settlement and appellate panels. A potential area of disagreement in the DSU negotiations involves the way in which members can impose sanctions on other countries when they fail to implement adverse WTO decisions. The European Union wants to limit the ability of members to shift sanctions among various imports from the offending country. To the contrary, the United States supports shifting sanctions among imports. The Doha Declaration calls for concluding DSU negotiations by May 2003, and for taking steps to ensure that the results enter into force as soon as possible. Unlike other aspects of the negotiating agenda that the Doha Declaration mandated, the DSU negotiations will not be part of the single undertaking. In other words, the DSU negotiations will not be tied to the overall success or failure of the other negotiations, which are scheduled to conclude by January 2005. Registry for Geographical Indications for Wines and Spirits The Council for Trade-Related Aspects of Intellectual Property Rights (TRIPS) must resolve the issue of developing a registry and notification system for geographical indications. While TRIPS mandated that the Council negotiate the establishment of a multilateral system for notification and registration of geographical indications for wines, it did not establish a deadline for the negotiations. At the Doha ministerial, WTO members decided that these negotiations, which began in 1997, should be concluded by the fifth ministerial conference in 2003. The negotiations are being undertaken in special sessions of the Council for TRIPS. Proposals previously submitted in meetings of the Council for TRIPS adopted two different approaches. 
One proposal made by the European Union and supported by a number of other countries maintains that geographical indications on the registry for wines and spirits would be considered as protected by all WTO members. The proposal allows WTO members to challenge any geographical indication on the registry that they consider to be generic. The other proposal, made by the United States, Canada, Chile, and Japan and supported by a number of other countries, treats the registry as a database to assist WTO members, but it contains no requirement that all members protect all items on the registry. The debate among WTO members over the meaning and purpose of this registry has been contentious. According to the U.S. Department of Agriculture, under the EU’s proposal, many WTO members would incur significant costs for the examination and enforcement of geographical indications, which would not be paid for through fees or trade concessions and would be offset by few benefits. In contrast, the European Union believes that its proposal would not impose any new substantive obligations on WTO members. In addition, the European Union contends that the joint U.S., Canadian, Chilean, and Japanese proposal to publish a list of geographical indications exclusively for informational purposes would not necessarily facilitate the protection of those indications, as called for in the TRIPS agreement. Issues Negotiated Outside New Negotiating Groups or Special Sessions of WTO Bodies WTO members have also agreed to negotiate two sets of issues outside of any new negotiating group or special session of existing WTO bodies. They include intellectual property rights and public health, and issues surrounding the implementation of existing Uruguay Round agreements. 
Intellectual Property Rights and Public Health Prior to the Doha ministerial, African nations, nongovernmental organizations, and others argued that the patent protection provisions of TRIPS were preventing developing countries from gaining access to medicines needed to fight HIV/AIDS, tuberculosis, malaria, and other epidemic diseases. For example, they argued that such provisions made some medicines unaffordable in some developing countries. In response, some developed countries, including the United States, maintained that TRIPS should not prevent access to such medicines, because the agreement is flexible. For example, it contains provisions allowing WTO members to grant licenses to domestically produce pharmaceuticals without the consent of the patent holder in situations of “national emergency or other circumstances of extreme urgency.” In response to this ongoing debate, ministers from WTO members adopted the Declaration on the TRIPS Agreement and Public Health in Doha that explicitly stated members’ opinions that TRIPS does not and should not prevent any WTO member from taking measures to protect public health. According to U.S. government officials, the declaration demonstrates the flexibility of TRIPS while keeping the provisions of the agreement intact. In addition, the declaration mandates that the WTO Council for TRIPS develop alternatives for members to take advantage of the flexibilities in TRIPS to allow them access to medicines even if they lack the ability to manufacture pharmaceuticals domestically. The Council for TRIPS is mandated to complete this work by the end of 2002. Implementation Issues Issues surrounding the implementation of agreements deal with long-standing concerns on the part of developing country members about the Uruguay Round agreements. These issues played a primary role in preparations for the Doha ministerial. 
Their discussion was facilitated when a group of seven countries, chaired by Uruguay, proposed that they be dealt with in three stages—before, during, and after the ministerial conference. Implementation issues involve two major concerns. First, many developing countries maintained that they lacked the capacity in terms of expertise, financial resources, and institutions to fully meet their Uruguay Round obligations such as complying with subsidies obligations and initiating trade-related investment measures. Given these difficulties, many developing countries demanded that deadlines for these obligations be extended. Second, many developing country members claimed that they had not reaped the economic benefits promised by the Uruguay Round agreements. They argued for changing the agreements to make them more balanced in favor of developing country interests. Examples include accelerating the schedule for increasing textile and apparel quota growth rates in the Uruguay Round Agreement on Textiles and Clothing. While developed countries have been agreeable in some cases to extending developing countries’ deadlines for implementing their Uruguay Round obligations and making other changes, they have maintained that any issues involving changes to the Uruguay Round agreements would have to be pursued in new trade negotiations. Ultimately, the Doha Declaration commits WTO members to addressing the implementation issues under two categories. First, issues with a specific negotiating mandate will be addressed in the relevant negotiating group. For example, since trade remedies are mandated for the negotiation group on WTO rules, concerns about antidumping practices will be folded into the negotiations on trade remedies. Second, other outstanding implementation issues not mandated for negotiations, such as initiatives surrounding textile and apparel trade, are to be addressed as a matter of priority by the existing WTO bodies. 
The WTO bodies are to report on those issues to the Trade Negotiations Committee for “appropriate action” by the end of 2002. Many implementation issues are contained in the Decision on Implementation-Related Issues and Concerns adopted at the Doha ministerial. Several WTO developing country representatives emphasized that it was very important that existing WTO bodies address outstanding implementation issues by the end of 2002, as mentioned in the Doha Declaration. One WTO official said that he expected developing countries to push hard for progress on those implementation issues not mandated for negotiation at the fifth ministerial conference in September 2003. Another WTO official was concerned that some developing countries might try to keep some implementation issues that were mandated for negotiations, particularly trade remedies, on a separate track. Geographical Indications Other than Wines and Spirits Whether or not to extend a higher level of protection for geographical indications for products other than wines and spirits is one of the more controversial implementation issues listed in the Doha Declaration but not mandated for negotiations. The issue has been assigned to the WTO’s Council for TRIPS, which is to report on appropriate action by the end of 2002. WTO officials have indicated that this is an important issue to watch, because it involves many WTO members with diametrically opposed positions and thus could affect the progress in the overall negotiations. Using the geographical indication when the product was made elsewhere, or when the product does not meet the standards originally associated with the geographical indication, can mislead consumers, and TRIPS requires countries to prevent the misuse of geographical indications. For wines and spirits, TRIPS provides an even higher level of protection, protecting geographical indications even when there is little risk of misleading the consumer. 
The European Union and some other WTO members believe that the Council for TRIPS should agree on rules for negotiating the extension of a higher-level protection for products beyond wines and spirits. These countries believe that extending heightened protection would benefit countries’ development, because geographical indications can be a means for countries—particularly developing countries—to market their products and secure higher prices, since product quality is associated with those geographical indications. Further, one WTO member has warned that failure to reach consensus on this issue would have implications for other subjects under negotiation: in particular, agriculture. The United States and some other WTO members believe that the Council for TRIPS should simply report to the Trade Negotiations Committee on its discussions, without proposing any rules for further negotiations. These members believe that existing protection for geographical indications for all products is sufficient, and that extending the higher level of protection to products other than wines and spirits would restrict trade and necessitate serious costs to governments, manufacturers, and consumers. According to the U.S. Department of Agriculture, examples of such costs include administrative mechanisms to implement the broadened standards, relabeling, and repackaging. Work Mandated by the Doha Declaration, but Not Part of Negotiations In addition to the issues under negotiation discussed in the report and in appendix I, the Doha Declaration mandates other areas of work that are not part of the negotiations. Figure 6 shows the organization of the WTO negotiations from appendix I and also identifies these additional areas in the general Doha work program. 
Objectives, Scope, and Methodology The Ranking Minority Member of the Senate Committee on Finance, the Chairman of the House Committee on Ways and Means, and the Chairman of the House Ways and Means Subcommittee on Trade asked us to (1) analyze the factors that contributed to the Doha ministerial conference’s successful launch of new WTO negotiations, (2) analyze the key interim deadlines for the most sensitive issues, from the present time through the next ministerial conference in 2003, and (3) evaluate the most significant challenges facing the WTO in the overall negotiations. We followed the same overall methodology to complete all three of our objectives. We obtained, reviewed, and analyzed documents from a variety of sources. From the WTO, we analyzed the Doha Ministerial Declaration, the Decision on Implementation-Related Issues and Concerns, and the Declaration on the TRIPS Agreement and Public Health, as well as numerous negotiating proposals from WTO member countries and other documents. From U.S. government agencies, we obtained background information and documentation regarding negotiating proposals and positions. We met with and obtained documents from a wide variety of U.S. government and private-sector officials, foreign government and private- sector officials, WTO officials, and officials from international nongovernmental and intergovernmental organizations. Prior to the Doha ministerial conference, we met with officials from the Department of Agriculture, the Department of Labor, the Environmental Protection Agency, the Department of Commerce, the Office of the U.S. Trade Representative, the Department of Justice, and the State Department. We also met with representatives from developed and developing countries in Washington, D.C., including Australia, Brazil, Canada, the European Union, France, Jamaica, Malaysia, Mexico, South Korea, Thailand, and Zambia. 
Further, we met with private-sector representatives from the AgTrade Coalition, the American Forest and Paper Association, the National Association of Manufacturers, the National Farmers Union, and the National Foreign Trade Council. After the Doha ministerial conference, we met with additional U.S., WTO, and foreign government officials, private-sector representatives, and nongovernmental and intergovernmental organizations to obtain their views about the negotiations launched in Doha. We also traveled to the WTO’s headquarters in Geneva, where we met with WTO member country representatives from developed and developing countries, including Brazil, Canada, Chile, China, Hong Kong, India, Jamaica, Japan, Malaysia, Mexico, and Uganda. We also met with WTO officials, including all the Deputy Directors-General and eight division directors. In addition, while in Geneva, we met with representatives from the South Centre and the International Centre for Trade and Sustainable Development. In Brussels, we met with officials from the European Commission, including the Directorates-General for trade and agriculture. Also in Brussels, we met with representatives of business and environmental groups from the European Union. In Washington, D.C., we met with private-sector representatives including those from the American Forest and Paper Association, the Coalition for Service Industries, the Center for International Environmental Law, and Kodak. We performed our work from August 2001 through July 2002 in accordance with generally accepted government auditing standards. GAO Contacts and Staff Acknowledgments In addition to those listed above, Juan Gobel, Howard Cott, Jason Bair, Bradley Hunt, Lori Kmetz, Rona Mendelsohn, and Richard Seldin made key contributions to this report. 
In November 2001, the World Trade Organization launched a new set of multilateral negotiations at its ministerial conference in Doha, Qatar. The ministerial conference was important because it laid out an ambitious agenda for a broad set of new multilateral trade negotiations, set forth in the Doha Ministerial Declaration. The declaration calls for a continuation of discussions on liberalizing trade in agriculture and services which began in 2000. In addition, it provides for new talks on market access for nonagricultural products, trade and the environment, trade-related aspects of intellectual property rights, and other issues. Four main factors led to the World Trade Organization's successful launch of new multilateral trade negotiations in Doha. First, the United States' and the European Union's clear support of the launch, bolstered by the strong personal relationship between the U.S. Trade Representative and the European Union Commissioner for Trade, facilitated agreement on the agenda for new negotiations. Second, World Trade Organization members applied an effective preparation strategy before the Doha ministerial conference. Third, some key developments at the Doha conference helped gain support from the developing countries for launching negotiations. Last, World Trade Organization officials and member country representatives said that the tragic events of September 11th galvanized Organization members to show their support for a strong and healthy worldwide trading system. The Doha Declaration requires negotiators to make early, crucial decisions because it mandates several important interim deadlines. One of these deadlines involves decisions on agricultural trade, where World Trade Organization members must agree on modalities, or methodologies, timetables, and desired targets, for reducing agricultural export subsidies, domestic support, and agricultural tariffs by March 31, 2003. 
The second interim deadline concerns the "Singapore issues," which Organization members must decide, by the next ministerial conference in September 2003, whether to include in the negotiations. The overriding challenge for the World Trade Organization in the negotiations will be to forge consensus within its large and diverse membership and to deal with several difficult organizational issues. In addition, the Organization will need to overcome the negative effects of outside events, such as disputes among its key members.
USCIS and SSA Have Reduced TNCs, but the Accuracy of E-Verify Continues to Be Limited by Both Inconsistent Recording of Employees’ Names and Fraud USCIS has reduced TNCs from 8 percent for the period June 2004 through March 2007 to about 2.6 percent in fiscal year 2009. As shown in figure 1, in fiscal year 2009, about 2.6 percent, or over 211,000, of newly hired employees received either an SSA or USCIS TNC, including about 0.3 percent who were determined to be work eligible after they contested a TNC and resolved errors or inaccuracies in their records, and about 2.3 percent, or about 189,000, who received a final nonconfirmation because their employment eligibility status remained unresolved. For the approximately 2.3 percent who received a final nonconfirmation, USCIS was unable to determine how many of these employees (1) were authorized employees who did not take action to resolve a TNC because they were not informed by their employers of their right to contest the TNC, (2) independently decided not to contest the TNC, or (3) were not eligible to work. USCIS has reduced TNCs and increased E-Verify accuracy by, among other things, expanding the number of databases that E-Verify can query and instituting quality control procedures to screen for data entry errors. However, erroneous TNCs continue to occur, in part, because of inaccuracies and inconsistencies in how personal information is recorded on employee documents, in government databases, or both. While some actions have been taken to address name-related TNCs, more could be done. Specifically, USCIS could better position employees to avoid an erroneous TNC by disseminating information to employees on the importance of providing consistent name information and how to record their names consistently. In our December 2010 report, we recommended that USCIS disseminate information to employees on the potential for name mismatches to result in erroneous TNCs and how to record their names consistently. 
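As a rough consistency check (an illustrative calculation of ours, not a figure from the report), the fiscal year 2009 percentages and counts cited above can be reconciled in a few lines:

```python
# Illustrative arithmetic on the fiscal year 2009 E-Verify figures cited
# above; the percentages and counts are as reported, the math is ours.
resolved_after_contest_pct = 0.3    # found work eligible after contesting a TNC
final_nonconfirmation_pct = 2.3     # eligibility remained unresolved
total_tnc_pct = resolved_after_contest_pct + final_nonconfirmation_pct
assert round(total_tnc_pct, 1) == 2.6   # matches the ~2.6 percent total TNC rate

# Over 211,000 TNCs at about 2.6 percent implies a base of roughly
# 8 million queries on newly hired employees that year.
implied_query_base = 211_000 / (total_tnc_pct / 100)
print(f"implied query base: {implied_query_base:,.0f}")
```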
USCIS concurred with our recommendation and outlined actions to address it. For example, USCIS commented that in November 2010 it began to distribute the U.S. Citizenship Welcome Packet at all naturalization ceremonies to advise new citizens to update their records with SSA. USCIS also commented that it has commissioned a study, to be completed in the third quarter of fiscal year 2011, to determine how to enhance its name-matching algorithms. USCIS’s actions for reducing the likelihood of name-related erroneous TNCs are useful steps, but they do not fully address the intent of the recommendation because they do not provide specific information to employees on how to prevent a name-related TNC. See our December 2010 report for more details. In addition, identity fraud remains a challenge because employers may not be able to determine if employees are presenting genuine identity and employment eligibility documents that are borrowed or stolen. E-Verify also cannot detect cases in which an unscrupulous employer assists unauthorized employees. USCIS has taken actions to address fraud, most notably with the fiscal year 2007 implementation of the photo matching tool for permanent residency cards and employment authorization documents and the September 2010 addition to the matching tool of passport photographs. Although the photo tool has some limitations, it can help reduce some fraud associated with the use of genuine documents in which the original photograph is substituted for another. To help combat identity fraud, USCIS is also seeking to obtain driver’s license data from states and planning to develop a program that would allow victims of identity theft to “lock” their Social Security numbers within E-Verify until they need them to obtain employment authorization. 
Combating identity fraud through the use of biometrics, such as through fingerprint or facial recognition, has been included in proposed legislation before Congress as an element of comprehensive immigration reform, but implementing a biometric system has its own set of challenges, including those associated with cost and civil liberties. Resolving these issues will be important if this technology is to be effectively implemented in combating identity fraud in the employment verification process. An effective employment authorization system requires a credible worksite enforcement program to ensure employer compliance with applicable immigration laws; however, USCIS is challenged in ensuring employer compliance with E-Verify requirements for several reasons. For example, USCIS cannot monitor the extent to which employers follow program rules because USCIS does not have a presence in employers’ workplaces. USCIS is further limited by its existing technology infrastructure, which provides limited ability to analyze patterns and trends in the data that could be indicative of employer misuse of E-Verify. USCIS has few avenues for recourse if employers do not respond to or remedy noncompliant behavior after a contact from USCIS compliance staff because it has limited authority to investigate employer misuse and no authority to impose penalties against such employers, other than terminating those who knowingly use the system for an unauthorized purpose. For enforcement actions against violations of immigration laws, USCIS relies on Immigration and Customs Enforcement (ICE) to investigate, sanction, and prosecute employers. However, ICE has reported that it has limited resources to investigate and sanction employers that knowingly hire unauthorized workers or those that knowingly violate E-Verify program rules. 
Instead, according to senior ICE officials, ICE agents seek to maximize limited resources by applying risk assessment principles to worksite enforcement cases and focusing on detecting and removing unauthorized workers from critical infrastructure sites. DHS Has Instituted Employee Privacy Protections for E-Verify, but Resolving Erroneous TNCs Can Be Challenging USCIS has taken actions to institute safeguards for the privacy of personal information for employees who are processed through E-Verify, but has not established mechanisms for employees to identify and access personal information maintained by DHS that may lead to an erroneous TNC, or for E-Verify staff to correct such information. To safeguard the privacy of personal information for employees who are processed through E-Verify, USCIS has addressed the Fair Information Practice Principles, which are the basis for DHS’s privacy policy. For example, USCIS published privacy notices in 2009 and 2010 that defined parameters, including setting limits on DHS’s collection and use of personal information for the E-Verify program. Notwithstanding the efforts made by USCIS to address privacy concerns, employees are limited in their ability to identify and access personal information maintained by DHS that may lead to an erroneous TNC. In our December 2010 report, we recommended that USCIS develop procedures to enable employees to access personal information and correct inaccuracies or inconsistencies in such information within DHS databases. 
USCIS concurred and identified steps that it is taking to address this issue, such as developing a pilot program to assist employees receiving TNCs to request a records update, referring individuals who receive a TNC to local USCIS or CBP offices and ports of entry to correct records when inconsistent or inaccurate information is identified, and developing a Self-Check program to allow individuals to check their own work authorization status against SSA and DHS databases prior to applying for a job. However, we do not believe that the steps underway fully address the intent of our recommendation because, among other things, USCIS does not have operating procedures in place for USCIS staff to explain to employees what personal information produced the TNC or what specific steps they should take to correct the information. We encourage USCIS to continue its efforts to develop procedures enabling employees to access and correct inaccurate and inconsistent personal information in DHS databases. USCIS and SSA Have Taken Actions to Prepare for Mandatory Implementation of E-Verify, but Face Challenges in Estimating Costs USCIS and SSA have taken actions to prepare for possible mandatory implementation of E-Verify for all employers nationwide by addressing key practices for effectively managing E-Verify system capacity and availability and coordinating with each other in operating E-Verify. However, USCIS and SSA face challenges in accurately estimating E-Verify costs. Our analysis showed that USCIS’s E-Verify estimates partially met three of four characteristics of a reliable cost estimate and minimally met one characteristic. As a result, we found that USCIS is at increased risk of not making informed investment decisions, understanding system affordability, and developing justifiable budget requests for future E-Verify use and potential mandatory implementation. 
To ensure that USCIS has a sound basis for making decisions about resource investments for E-Verify and securing sufficient resources, in our December 2010 report, we recommended that the Director of USCIS ensure that a life-cycle cost estimate for E-Verify is developed in a manner that reflects the four characteristics of a reliable estimate consistent with best practices. USCIS concurred and senior program officials told us that USCIS, among other things, has contracted with a federally funded research and development center to develop an independent cost estimate of the life-cycle costs of E-Verify to better comply with our cost-estimating guidance. Our analysis showed that SSA’s E-Verify estimates substantially met three of four characteristics of a reliable cost estimate. However, we found that SSA’s cost estimates are only partially credible because SSA may not be able to provide assurance to USCIS that it can provide the required level of support for E-Verify operations if it experiences cost overruns within any one fiscal year. In our December 2010 report, we recommended that the Commissioner of SSA assess the risk around SSA’s E-Verify workload estimate, in accordance with best practices, to ensure that SSA can accurately project costs associated with its E-Verify workload and provide the required level of support to USCIS and E-Verify operations. SSA did not concur, and stated that it assesses the risk around its workload cost estimates and, if E-Verify were to become mandatory, SSA would adapt its budget models and recalculate estimated costs based on the new projected E-Verify workload volume. As discussed in our December 2010 report, SSA does not conduct a risk and uncertainty analysis that uses statistical models to quantitatively determine the extent of variability around its cost estimate or identify the limitations associated with the assumptions used to create the estimate. 
Thus, we continue to believe that SSA should adopt this best practice for estimating risks to help it reduce the potential for experiencing cost overruns for E-Verify. Mr. Chairman, this concludes my statement. I will be pleased to respond to any questions you or other members of the subcommittee may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or stanar@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Evi Rezmovic, Assistant Director; Christine Hanson; Sara Margraf; and Linda Miller. Additionally, key contributors to our December 2010 report include Blake Ainsworth, David Alexander, Tonia Brown, Frances Cook, Marisol Cruz, John de Ferrari, Julian King, Danielle Pakdaman, David Plocher, Karen Richey, Robert Robinson, Douglas Sloane, Stacey Steele, Desiree Cunningham, Vanessa Taylor, Teresa Tucker, and Ashley Vaughan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the E-Verify program, which provides employers a tool for verifying an employee's authorization to work in the United States. The opportunity for employment is one of the most powerful magnets attracting immigrants to the United States. According to the Pew Hispanic Center, in early 2009 approximately 11 million unauthorized immigrants were living in the country, and an estimated 7.8 million of them, or about 70 percent, were in the labor force. Congress, the administration, and some states have taken various actions to better ensure that those who work here have appropriate work authorization and to safeguard jobs for authorized employees. Nonetheless, opportunities remain for unscrupulous employers to hire unauthorized workers and for unauthorized workers to fraudulently obtain employment by using borrowed or stolen documents. Immigration experts have noted that deterring illegal immigration requires, among other things, a more reliable employment eligibility verification process and a more robust worksite enforcement capacity. E-Verify is a free, largely voluntary, Internet-based system operated by the Verification Division of the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS) and the Social Security Administration (SSA). The goals of E-Verify are to (1) reduce the employment of individuals unauthorized to work, (2) reduce discrimination, (3) protect employee civil liberties and privacy, and (4) prevent undue burden on employers. Pursuant to a 2007 Office of Management and Budget directive, all federal agencies are required to use E-Verify on their new hires and, as of September 2009, certain federal contractors and subcontractors are required to use E-Verify for newly hired employees working in the United States as well as existing employees working directly under the contract. A number of states have also mandated that some or all employers within the state use E-Verify on new hires. 
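The Pew Hispanic Center figures above are internally consistent; a quick check (our arithmetic, not the report's):

```python
# Pew Hispanic Center estimates for early 2009, as cited above.
unauthorized_immigrants = 11_000_000
in_labor_force = 7_800_000

labor_force_share = in_labor_force / unauthorized_immigrants
print(round(labor_force_share, 2))  # about 0.71, i.e., roughly 70 percent as reported
```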
From October 2009 through August 2010, E-Verify processed approximately 14.9 million queries from nearly 222,000 employers. This testimony is based primarily on a report we issued in December 2010 and provides updates on the challenges we noted in our 2005 report and 2008 testimony. The statement, as requested, highlights findings from that report and discusses the extent to which (1) USCIS has reduced the incidence of TNCs and E-Verify's vulnerability to fraud, (2) USCIS has provided safeguards for employees' personal information, and (3) USCIS and SSA have taken steps to prepare for mandatory E-Verify implementation. Our December 2010 report also includes a discussion of the extent to which USCIS has improved its ability to monitor and ensure employer compliance with E-Verify program policies and procedures. (1) USCIS has reduced tentative nonconfirmations (TNCs) and increased E-Verify accuracy by, among other things, expanding the number of databases that E-Verify can query and instituting quality control procedures to screen for data entry errors. However, erroneous TNCs continue to occur, in part, because of inaccuracies and inconsistencies in how personal information is recorded on employee documents, in government databases, or both. While some actions have been taken to address name-related TNCs, more could be done. Specifically, USCIS could better position employees to avoid an erroneous TNC by disseminating information to employees on the importance of providing consistent name information and how to record their names consistently. In our December 2010 report, we recommended that USCIS disseminate information to employees on the potential for name mismatches to result in erroneous TNCs and how to record their names consistently. USCIS concurred with our recommendation and outlined actions to address it. For example, USCIS commented that in November 2010 it began to distribute the U.S. 
Citizenship Welcome Packet at all naturalization ceremonies to advise new citizens to update their records with SSA. USCIS also commented that it has commissioned a study, to be completed in the third quarter of fiscal year 2011, to determine how to enhance its name-matching algorithms. USCIS's actions for reducing the likelihood of name-related erroneous TNCs are useful steps, but they do not fully address the intent of the recommendation because they do not provide specific information to employees on how to prevent a name-related TNC. (2) USCIS has taken actions to institute safeguards for the privacy of personal information for employees who are processed through E-Verify, but has not established mechanisms for employees to identify and access personal information maintained by DHS that may lead to an erroneous TNC, or for E-Verify staff to correct such information. To safeguard the privacy of personal information for employees who are processed through E-Verify, USCIS has addressed the Fair Information Practice Principles, which are the basis for DHS's privacy policy. For example, USCIS published privacy notices in 2009 and 2010 that defined parameters, including setting limits on DHS's collection and use of personal information for the E-Verify program. (3) USCIS and SSA have taken actions to prepare for possible mandatory implementation of E-Verify for all employers nationwide by addressing key practices for effectively managing E-Verify system capacity and availability and coordinating with each other in operating E-Verify. However, USCIS and SSA face challenges in accurately estimating E-Verify costs. Our analysis showed that USCIS's E-Verify estimates partially met three of four characteristics of a reliable cost estimate and minimally met one characteristic. 
As a result, we found that USCIS is at increased risk of not making informed investment decisions, understanding system affordability, and developing justifiable budget requests for future E-Verify use and potential mandatory implementation. To ensure that USCIS has a sound basis for making decisions about resource investments for E-Verify and securing sufficient resources, in our December 2010 report, we recommended that the Director of USCIS ensure that a life-cycle cost estimate for E-Verify is developed in a manner that reflects the four characteristics of a reliable estimate consistent with best practices. USCIS concurred and senior program officials told us that USCIS, among other things, has contracted with a federally funded research and development center to develop an independent cost estimate of the life-cycle costs of E-Verify to better comply with our cost-estimating guidance.
Background The four major federal land management agencies—BLM, the Forest Service, FWS, and NPS—manage their land and resources in accordance with their respective missions and authorities. BLM and the Forest Service are responsible for managing about 69 percent of federal land for a variety of uses, including recreation, timber harvesting, livestock grazing, oil and gas production, and mining. FWS is responsible for managing about 14 percent of federal land, primarily to conserve and protect fish and wildlife and their habitat, although other uses, such as hunting and fishing, are allowed when they are compatible with the primary purposes for which the lands are managed. NPS manages approximately 12 percent of federal land to conserve, preserve, protect, and interpret the nation’s natural, cultural, and historic resources. In comparison, BoR, which manages about 1 percent of federal land, has a much narrower primary mission—to manage, develop, and protect water and related resources in an environmentally and economically sound manner. Accordingly, BoR maintains 348 reservoirs, 476 dams, and 58 hydroelectric plants on federal land and is the largest wholesale supplier of water in the United States and the second-largest hydroelectric power producer in the nation. BoR land is largely managed to meet its primary mission, but this land also provides other benefits, such as recreation. These agencies may collect a variety of data to manage and oversee their activities. For our 2011 report, we examined over 100 data elements that fall into three broad categories: (1) information on federal land and the resources the agencies manage, (2) revenues generated from selected activities on federal land, and (3) information on federal land subject to selected land use designations. We developed this list of data elements by reviewing, among other things, the request letter for the work and past GAO and Congressional Research Service reports, and by interviewing agency officials. 
The five agencies may collect other data related to land management that were not included in this review. The three data element categories are described below. Federal land and resources. We identified 57 data elements in this category that relate to (1) information on the total surface and subsurface acres of federal land managed by each of the five land management agencies and the total acres managed for specific purposes, such as hardrock mining or grazing, and (2) the volume of various resources, such as oil and gas and timber extracted or harvested from federal land. Revenues generated from activities on federal land. We identified 35 data elements in this category that relate to information on revenues generated from activities on federal land, which are derived from the use or sale of land and resources. Sources of revenue include revenues generated from oil and gas activities, hardrock mining, and special use or right-of-way permits issued for transmission lines, filming activities, and concession activities. We also included cost recovery fees—which are intended to recover agency costs for processing certain plans, applications, or permits associated with various activities on federal land—in this category of data elements. Federal land use designations. Data elements in this category relate to information on the number of acres each agency manages that are associated with various special designations of federal land, such as wilderness areas, wild and scenic rivers, paleontological sites, and critical habitat set aside for endangered species. Some of these land use designations apply to all five federal land management agencies, but some are unique to a specific agency, and the number of land use designations applicable to each agency varies. 
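One way to picture the scope of the review described above is a small sketch of the queried data elements as simple Python structures; the counts come from this statement, but the representation itself is our own illustration:

```python
# Counts of data elements GAO queried the five agencies about, by category
# (taken from the statement text); the dictionary structure is hypothetical.
element_counts = {
    "federal_land_and_resources": 57,  # acreage managed and resource volumes
    "revenues": 35,                    # revenues from activities on federal land
}

# Land use designation elements vary by agency because some designations
# apply to all five agencies while others are unique to one agency.
designations_applicable = {
    "BLM": 26, "Forest Service": 21, "FWS": 21, "NPS": 30, "BoR": 17,
}

# The first two categories alone account for 92 of the over 100 elements.
assert sum(element_counts.values()) == 92
print(sum(element_counts.values()))
```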
Extent to Which Data Elements Are Collected by the Five Agencies Varied The five agencies varied in the extent to which they collected the over 100 land and resources, revenue, and federal land use designation data elements that we queried them about. Specifically, of the over 100 data elements we asked about, only 4 were collected by all five agencies. These 4 elements related to total surface acres managed, total acres managed within each state, the number of special use permits generated for filming activities on federal land, and the number of cultural and historic sites listed on the National Register of Historic Places. In contrast, none of the agencies collected information for 33 data elements that we asked them about, such as the percent of total acres under oil, gas, or coal leases that have surface disturbance or where the surface disturbance has been reclaimed, or information on the potential quantities of oil, gas, and coal resources on federal land. Of the 57 federal land and resource data elements we asked each of the five agencies about, BLM and the Forest Service collected the most—22 and 20 data elements, respectively—and BoR collected the least—3 data elements. Table 1 lists the 57 federal land and resources data elements we asked about and indicates which of the five agencies collected them. Of the 35 revenue data elements we asked each of the five federal agencies about, BLM collected the most and NPS collected the least, 22 and 6, respectively. Table 2 lists the 35 data elements that relate to revenues generated from activities on federal lands and which of the five agencies collected them. Some land use designation data elements that we asked the five federal land management agencies about applied to all five of them, and some were unique to a specific agency. As a result, the number of land use designations applicable to each agency varied. 
Specifically, 26 federal land use designation data elements applied to BLM, 21 to the Forest Service, 21 to FWS, 30 to NPS, and 17 to BoR. NPS collected the most information on federal land use designation data elements and BoR collected the least, 25 and 1, respectively. Table 3 lists the data elements collected for federal land use designations by those that apply to all agencies and those that apply to each of the five agencies. Agency officials cited various reasons why their agencies did not collect certain information: they believed another federal agency collected it, the information was inconsistent with the agency’s mission, or they lacked the authority or resources to collect it. For example, according to BLM officials, the agency does not collect information on the total acres of land designated as Globally Important Bird Areas. The American Bird Conservancy designates these areas and, along with the National Audubon Society, collects information about these sites. BLM is informed if any designations are on its land but does not track these areas. Similarly, according to FWS officials, the United Nations World Heritage program keeps records for acres designated as World Heritage Sites, and FWS relies on this entity for information about these sites. In addition, some agencies did not collect data because they believed collecting it would be inconsistent with the agency’s mission. For example, according to NPS officials, coal, oil and gas, and hardrock operations, if they are allowed at all, are quite limited on NPS land because they are inconsistent with the mission of the agency. For this reason, NPS does not collect information on the potential amounts of these resources on NPS land. These officials also told us that for any quantities of oil and gas extracted from NPS land, the Department of the Interior’s Office of Natural Resources Revenue would collect this information. 
In addition, BoR did not collect 54 of the 57 federal land and resource data elements we examined because, according to agency officials, these data did not relate to BoR’s mission. The officials noted that while BoR manages land associated with its mission, other activities do occur on its land that are incidental to its mission and are generally managed by another agency. For example, Lake Mead in Nevada and Arizona is a National Recreation Area located on BoR land and managed by NPS. Thus, NPS would collect data on the number of acres acquired for national recreation areas, such as Lake Mead, and not BoR. Further, some agencies cited a lack of authority or resources to collect certain data elements as their reason for not doing so. For example, according to agency officials, the Forest Service does not collect information on surface land disturbed by coal mining because it is not within Forest Service authority to require collection of this information. Forest Service officials said Interior’s Office of Surface Mining, Reclamation and Enforcement may collect this information. They added that it is not within the scope of the Forest Service’s authority to require the collection of information on surfaces disturbed by oil and gas activities, but they thought that BLM might collect this information. BLM officials stated that they would like to collect this information, but funding is not available to do so. Approximately Three- Quarters of the Data Elements Collected Are Stored in a Primary Agency Data System When information was collected by the five agencies, it was more often stored in a primary agency data system—a centralized electronic data system maintained at an agencywide level—than in other formats. Specifically, approximately three-quarters of the data elements that the agencies collected were stored in a primary agency data system. 
For example, we queried each agency about 57 federal land and resources data elements, and while the number of data elements each agency collected varied significantly, ranging from 3 to 22, the majority of the information that was collected was stored in a primary agency data system. BLM collected 22 federal land and resource data elements, and 15 of these elements were stored in a primary agency data system. These included data elements related to the total acres that have been leased for coal development, total acres that have been leased for oil and gas development, and total acres for livestock grazing. However, other data, such as acres of surface and subsurface land, acres managed within each state, and potential quantity of coal reserves on leased land that the agency manages, were kept in BLM state offices in other formats, such as electronic spreadsheets or hard copy. In contrast, all the data elements the Forest Service, FWS, and BoR collected were available in a primary agency data system. Similarly, we asked each agency about 35 specific revenue data elements, and again while the number of data elements each agency collected varied significantly, ranging from 6 to 22, the majority of the information that was collected was stored in a primary agency data system. For example, BLM stored all 22 of the revenue data elements it collected in primary agency data systems, including those related to revenues generated from right-of-way permits for transmission lines and water and wind projects and special use permits for camping, day use, filming, and concession activities. In contrast, of the 6 revenue data elements that NPS collected, 3 were stored in a primary agency data system—including those related to recreation fees, use fees, and concession receipts—and 3 were stored in other formats—including special use and right-of-way permits, which are kept at the park unit level. 
With regard to data elements on federal land use designations, the number stored in primary agency data systems or in other formats also varied significantly by agency. For example, only 1 of the 17 land use designation data elements applicable to BLM was stored in a primary agency data system. Other data elements, such as those related to the number of cultural and historic resource sites, National Monuments, and National Historic and National Scenic Trails, were documented in spreadsheets at BLM headquarters. Some data elements, such as the number of paleontological sites and total acres designated as critical habitat under the Endangered Species Act, were maintained at multiple field offices in other formats, such as electronic files or hard copy. In contrast, 13 of the 15 data elements that the Forest Service collects on land use designation are stored in primary agency systems. These include information on total acres designated as Wilderness Areas, National Forests, National Grasslands, National Monuments, National Tallgrass Prairie, Land Utilization Projects, administrative sites, and Research Natural Areas; and the total river miles designated as Wild and Scenic Rivers. At NPS, the format for data storage was more of a mix, with 18 of the 25 data elements stored in primary agency data systems and 7 stored in other formats, including electronic spreadsheets, Web sites, or paper files at agency headquarters or in park units. Less than Half of the Agency Data Stored in Primary Agency Data Systems Were Assessed to Be Potentially Reliable We assessed the potential reliability of the data elements that the five agencies collected and determined that less than half of the data elements stored in a primary agency data system were potentially reliable. Generally, we assessed data elements as potentially reliable when information about the completeness and accuracy of a specific data element provided high assurance of its reliability. 
It is important to note that we assessed the potential reliability of these data elements for a given period of time, and additional analysis would be needed to determine the reliability of specific data elements for specific purposes. With regard to federal land and resource data elements, we assessed as potentially reliable 24 data elements that the five federal agencies stored in a primary agency data system:
- At BLM, of the 15 data elements that were stored in a primary agency data system, 6 were assessed to be potentially reliable.
- At the Forest Service, of the 20 data elements that were stored in a primary agency data system, 4 were assessed to be potentially reliable.
- At FWS, of the 10 data elements that were stored in a primary agency data system, 6 were assessed to be potentially reliable.
- At NPS, of the 10 data elements that were stored in a primary agency data system, 8 were assessed to be potentially reliable.
- At BoR, of the 3 data elements that were stored in a primary agency data system, none was assessed to be potentially reliable.

Reasons why these data were found to be potentially unreliable included concerns about the accuracy and completeness of the data and internal controls for data quality. For example, all of the federal land and resource data elements in the Forest Service’s Land Area Report data system and the Automated Lands Program data system were assessed as potentially unreliable, in part because their associated data systems had weak internal controls for data quality.

With regard to the revenue data elements stored in primary agency data systems, we assessed 17 as potentially reliable:
- At BLM, of the 22 data elements that were stored in a primary agency data system, 13 were assessed to be potentially reliable.
- At the Forest Service, of the 9 data elements that were stored in a primary agency data system, 1 was assessed to be potentially reliable.
- At FWS, of the 10 data elements that were stored in a primary agency data system, none was assessed to be potentially reliable.
- At NPS, of the 3 data elements that were stored in a primary agency data system, all 3 were assessed to be potentially reliable.
- At BoR, of the 4 data elements that were stored in a primary agency data system, none was assessed to be potentially reliable.

Reasons why these data were found to be potentially unreliable varied. For example, at BLM, we assessed two data elements—revenues generated by coal bonus bids and coal rents in the Collection and Billing System—as potentially unreliable, in part because BLM did not provide sufficient information about the accuracy and completeness of these data elements in that system. In addition, at FWS, we assessed the revenues generated from the right-of-way permits data element as potentially unreliable, in part because the revenues cannot be broken down by type of permit, and even if the type of permit were known, the frequency of revenues generated from these permits is unknown (e.g., annual or one-time).

With regard to the data on land use designations, we assessed as potentially reliable 25 land use designation data elements stored in primary agency data systems:
- At BLM, the 1 data element that was stored in a primary agency data system was not assessed to be potentially reliable.
- At the Forest Service, of the 13 data elements that were stored in a primary agency data system, 2 were assessed to be potentially reliable.
- At FWS, of the 6 data elements that were stored in a primary agency data system, all 6 were assessed to be potentially reliable.
- At NPS, of the 18 data elements that were stored in a primary agency data system, 17 were assessed to be potentially reliable.
- At BoR, the 1 data element collected was not stored in a primary agency data system.

As with the other two types of data elements, the reasons why these data were found to be potentially unreliable varied.
For example, we found the land use designation data element at BLM potentially unreliable, in part because of limitations with the accuracy of historic data. In contrast, we found eight data elements stored in the Forest Service’s Land Area Report data system potentially unreliable, in part because the data system used few internal controls for data quality, and the data in the system had not been audited.

Chairman Lamborn, Ranking Member Holt, and Members of the Subcommittee, while we recognize that managing the vast federal estate is a daunting task, this task becomes even more challenging when federal land managers do not have access to complete, accurate, and comprehensive land inventory data. This concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.

GAO Contacts and Acknowledgements

For further information about this testimony, please contact Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Elizabeth Erdmann, Assistant Director; Antoinette Capaccio, Carol Kolarik, Rebecca Shea, and Lisa Turner also made key contributions to this testimony.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government manages about 650 million acres, or 29 percent, of the 2.27 billion acres of U.S. land. Four land management agencies—the Bureau of Land Management (BLM), the Fish and Wildlife Service (FWS), and the National Park Service (NPS) in the Department of the Interior (Interior), and the Forest Service in the Department of Agriculture—manage about 95 percent of these federal acres. Interior’s Bureau of Reclamation (BoR) manages another 1 percent of these acres. The five agencies collect certain data to help manage federal lands under their jurisdiction. This testimony summarizes GAO’s findings from GAO-11-337, a report issued in April 2011. In this report, GAO reviewed the extent to which the five agencies collect certain federal land and resource data (referred to as data elements), how these data elements are stored, and their potential reliability. GAO included over 100 data elements at each agency in its analysis. These elements can be categorized as information on (1) federal land and the resources the five agencies manage, (2) revenues generated from selected activities on these lands, and (3) federal land subject to selected land use designations, such as wilderness areas.

The five agencies varied in the extent to which they collected the over 100 land and resources, revenue, and federal land use designation data elements that GAO asked them about. Specifically, all five agencies collected data on four basic data elements, which related to total surface acres managed, total acres managed within each state, the number of special use permits generated for filming activities on federal land, and the number of cultural and historic sites listed on the National Register of Historic Places.
In contrast, none of them collected information for 33 other data elements, such as the percent of total acres under oil, gas, or coal leases that have surface disturbance or where the surface disturbance has been reclaimed, or information on the potential quantities of oil, gas, and coal resources on federal land. Agency officials cited various reasons why the agencies did not collect certain information: they believed another federal agency collected it, the information was inconsistent with the agency’s mission, or they lacked the authority or resources to collect it. When an agency collected information, it was usually stored in a primary agency data system—a centralized electronic data system maintained at an agencywide level. For example, GAO queried each agency about 57 federal land and resources data elements, and while the number of data elements each agency collected varied significantly, ranging from 3 to 22, the majority of the information that was collected was stored in a primary agency data system. Similarly, GAO asked each agency about 35 specific revenue data elements, and again while the number of data elements each agency collected varied significantly, ranging from 6 to 22, the majority of the information that was collected was stored in a primary agency data system. When the agencies collected information but did not store it in a primary agency data system, it was available in other formats such as paper files, land use plans, or other agency documents and files that may have been located in multiple field locations. GAO assessed the potential reliability of the data elements that the five agencies collected and determined that less than half of the data elements stored in a primary agency data system were potentially reliable. Generally, data elements were assessed as potentially reliable when information about the completeness and accuracy of a specific data element provided high assurance of its reliability.
It is important to note that GAO assessed the potential reliability of these data elements for a given period of time, and additional analysis would be needed to determine the reliability of specific data elements for specific purposes. Among the reasons some of these data were assessed to be potentially unreliable were insufficient information about the accuracy and completeness of data elements and lack of internal controls for data quality.
IRS’ Reports Do Not Provide a Complete Picture of Mission-Critical Systems’ Status

IRS’ Year 2000 status reports do not provide a complete picture of the status of IRS’ mission-critical systems because IRS does not monitor Year 2000 status for its mission-critical systems in their entirety. Instead, IRS monitors the Year 2000 status of the components of an information system, such as the application software, systems software, and hardware for each of its three types of computers—mainframes, minicomputers/file servers, and personal computers. IRS also monitors its telecommunications networks separately. As part of IRS’ Year 2000 risk mitigation efforts, IRS has hired a contractor to conduct periodic risk assessments. The contractor’s December 1998 report recommended exploring the feasibility of tracking status on a system-by-system basis to provide a clear view of IRS’ ability to achieve Year 2000 compliance. The report stated that such a system view would permit IRS to, among other things, help assess the need to target resources to achieve Year 2000 compliance. IRS officials said that IRS’ approach to monitoring Year 2000 compliance corresponds to how IRS’ Information Systems organization is structured to carry out its work. Specifically, IRS officials said that separate organizational units are responsible for application software, systems software and hardware, and telecommunications networks. Therefore, IRS monitors its Year 2000 status by these areas. They do not believe the benefits of monitoring status on a system-by-system basis outweigh the costs, given the amount of time remaining to complete IRS’ Year 2000 work.

Reports Indicate That IRS Met the January 1999 Completion Goal for Some Areas but Not for Others

IRS’ reports indicate that it met the January 1999 completion goal for some areas but not for others.
The reports indicate that IRS met the January 1999 goal for correcting the application software for its existing systems and upgrading telecommunications networks. Since May 1998, when we last testified on this topic, IRS has also made progress in an area that we said was lagging—upgrading systems software and hardware. Despite this progress, however, IRS did not achieve its January 1999 completion goal for any of its three types of computer hardware. IRS fully implemented the Year 2000 aspects for one of its major system replacement projects. For the other system replacement project, 6 of the 10 service centers were using the full suite of Year 2000 changes.

Reports Indicate That IRS Met its Goal for Application Software for Existing Systems and Telecommunications Networks

Since we testified in May 1998, IRS has continued to make progress in correcting the application software for its mission-critical systems. As of February 6, 1999, IRS reports indicate that IRS has corrected 88 percent of these applications, thereby exceeding its 85 percent goal. In addition to completing this work, IRS has hired a contractor to review all of the corrected application software to determine whether IRS made any errors. This effort began in August 1998 and is scheduled to continue through May 1999. In addition, IRS reports indicate that it met its goal for completing work on its telecommunications networks. In May 1998, we said that, according to IRS, telecommunications networks presented the most significant correction challenge and were likely the highest risk for not being completed by January 31, 1999. As of February 6, 1999, with the exception of three areas, IRS reported that it met its goal for these networks.
Reports Indicate That IRS Did Not Meet the Goal for Systems Software and Hardware

IRS’ reports indicate that IRS made significant progress in an area that in May 1998 we said was lagging—upgrading systems software and hardware for its three types of computers: mainframes, minicomputers/file servers, and personal computers. Despite this progress, IRS did not meet the January 31, 1999, completion goal for its three types of computers. For IRS’ mainframe computers, IRS officials said IRS fell short in meeting its goal because of delays in receiving the Year 2000 upgrades for one of its system replacement projects. IRS officials said those upgrades are to be received and implemented by March 1, 1999. For minicomputers/file servers, IRS reports indicate that as of February 6, 1999, IRS’ Information Systems organization had completed 60 percent of the work for upgrading systems software and hardware—a significant increase from the 13 percent that was done in May 1998, when we last testified on IRS’ Year 2000 status. According to IRS, systems software and hardware for 13 of the 27 mission-critical systems that use minicomputers/file servers were not upgraded by January 31, 1999. The systems software and hardware for 7 of the 13 systems are not scheduled to be Year 2000 compliant until after March 1999. As a result of the delay, some changes are not to be tested until October 1999, when the second part of the Year 2000 end-to-end test is to begin. This delay reduces the time available to make any needed corrections before January 1, 2000. For personal computers, IRS officials said they plan to replace about 35,000 personal computers and the associated systems software between February 1999 and July 1999 to achieve Year 2000 compliance. As a part of this replacement effort, IRS plans to reduce the number of commercial software and hardware products in its inventory from about 4,000 to 60 core standard products.
According to IRS officials, thus far, IRS has completed testing on 5 of the 60 core products. IRS plans to complete the testing for the remaining 55 products by April 1999. IRS’ goal is to eliminate all nonstandard products by July 1999.

Full Implementation of Year 2000 Changes Achieved for One of the Two Replacement Projects; Less Than Full Implementation Achieved for the Other

For one of IRS’ two major system replacement projects, IRS implemented the Year 2000 changes at all 10 service centers by January 31, 1999; for the other system replacement project, 6 of the 10 service centers were using the full suite of Year 2000 changes for the system by January 31, 1999. IRS’ two major system replacement projects are Service Center Mainframe Consolidation (SCMC) and the Integrated Submission and Remittance Processing (ISRP) System. SCMC is to consolidate the mainframe computer tax processing activities from the 10 service centers to 2 computing centers—thereby reducing the total number of tax processing mainframe computers from 67 to 12. Specifically, SCMC is to (1) replace and/or upgrade mainframe hardware, systems software, and telecommunications networks; (2) replace about 16,000 terminals that support frontline customer service and compliance activities; and (3) replace the system that provides security functions for on-line taxpayer account databases with a new system known as the Security and Communications System (SACS). Replacement of the terminals and the implementation of SACS are critical to IRS’ achieving Year 2000 compliance. The other replacement project is ISRP. ISRP is a single, integrated system that is to perform the functions of two systems that are not Year 2000 compliant—the Distributed Input System that IRS uses to process tax returns and the Remittance Processing System that IRS uses to process tax payments. IRS completed the Year 2000 critical portions of SCMC by January 31, 1999.
Specifically, in early October 1998, IRS completed its implementation of the 16,000 terminals that are needed for frontline customer service and compliance activities. Also, as of January 31, 1999, all 10 service centers were using SACS. Originally, IRS had planned to have the other aspects of SCMC besides SACS—that is, the tax processing activities of the 10 service centers—moved to the 2 computing centers by December 1998. As of January 31, 1999, the tax processing activities for three service centers had been moved to the computing centers. IRS is determining the number of additional service centers that are to be moved in 1999. SCMC officials have developed several different schedule options for moving the tax processing activities of the remaining seven service centers. At the time we prepared this statement, IRS officials had not yet selected a schedule option. According to IRS officials, the tax processing activities of all 10 service centers do not need to be moved before 2000 because the existing mainframes in each of the 10 service centers have been made Year 2000 compliant. Thus, in all likelihood, at the start of the 2000 filing season, some service centers will be processing their data locally, whereas others will have their data processed at the computing centers. IRS’ Year 2000 end-to-end test is designed to include both processing scenarios. Both functions of ISRP—tax return processing and remittance processing—were to be implemented in November 1998. However, as a result of problems that occurred during the pilot test of ISRP and the contingency option IRS implemented for the 1999 filing season to address those problems, 4 of the 10 service centers are not to begin using the remittance processing portion of ISRP until August 1999.
For the 1999 filing season, the contingency option for ISRP is to retain enough of the old tax processing and remittance processing equipment in the service centers so that IRS could revert to the old systems if ISRP experiences problems. However, four of the service centers did not have enough floor space to accommodate both the old tax processing and remittance processing systems and the ISRP equipment. As a result, these four service centers are to continue using the old remittance processing equipment during the 1999 filing season and convert to ISRP in August 1999. These four service centers were among the top five remittance processing centers during the peak of the 1998 filing season. We recognize that this contingency option may have been the only feasible one for IRS. However, as we reported in December 1998, these four service centers are to receive their equipment so late in 1999 that their staffs will have no experience using the new equipment to process the large volume of remittances that occurs during the peak of a filing season before the 2000 filing season.

Two Critical Year 2000 Activities Still Remain; One of Which Is Behind Schedule

In addition to fixing its existing systems, IRS still needs to complete two critical activities for its Year 2000 efforts, and one of these activities is behind schedule. The two critical activities are the completion of (1) an unprecedented Year 2000 end-to-end test of 97 of IRS’ 133 mission-critical systems and (2) 36 contingency plans for IRS’ core business processes.

Unprecedented End-to-End Test Is to Begin in April 1999

Using thousands of test cases, IRS’ Year 2000 end-to-end test is to assess the ability of IRS’ mission-critical systems to function collectively in a Year 2000 compliant environment.
These cases are intended to replicate the many different kinds of transactions that IRS’ information systems process on any given day to help assess whether IRS’ systems can perform all date computations using data and systems date clocks with January 1, 2000, or later. The test will involve 97 of IRS’ 133 mission-critical systems. Most of IRS’ mission-critical system application software has been tested individually; however, the ability of the application software to operate collectively, using Year 2000 compliant systems software and hardware, with all systems date clocks set forward to simulate the Year 2000, has not been fully tested. In July 1998, IRS began the preliminary activities associated with conducting the end-to-end test. These activities included, but were not limited to, establishing a dedicated test environment to replicate IRS’ tax processing environment, developing test plans and procedures, and doing some preliminary testing of some systems with the systems date clock set forward to 2000. Currently, IRS is developing baseline data from the 1999 filing season that will be ultimately used for the Year 2000 end-to-end test. The end-to-end test is to have two parts. The first part is scheduled to begin in April and end in July 1999. The second part is to begin in October and end in December 1999. The April test is to include the application software that is currently being used for the 1999 filing season. The October test is to include the application software changes that are needed for the tax law changes that are to be implemented for the 2000 filing season. The need to conduct this test has in turn created an additional challenge in completing the work necessary for the 2000 filing season. As shown in table 1, to accommodate the Year 2000 end-to-end test, IRS revised its traditional milestones for implementing tax law changes for the 2000 filing season, thereby compressing the amount of time available to develop and test these changes. 
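The failure mode that setting system date clocks to January 1, 2000, or later is designed to surface is straightforward to sketch. The following is an illustrative example only (not IRS code, and the function names are hypothetical): a date computation built on two-digit years behaves plausibly through 1999 and breaks once the clock rolls past it, while a four-digit-year computation does not.

```python
from datetime import date

def days_between_buggy(start_yy: int, end_yy: int) -> int:
    """Legacy-style span computed from two-digit years (illustrative only)."""
    # Year "00" is treated as numerically less than "99", so spans that
    # cross the century boundary come out negative.
    return (end_yy - start_yy) * 365

def days_between_fixed(start: date, end: date) -> int:
    """Span computed from full calendar dates with four-digit years."""
    return (end - start).days

# Through 1999, the two approaches roughly agree:
print(days_between_buggy(98, 99))                              # 365
print(days_between_fixed(date(1998, 7, 1), date(1999, 7, 1)))  # 365

# With the clock rolled past 1999, the two-digit version fails:
print(days_between_buggy(99, 0))                                    # -36135
print(days_between_fixed(date(1999, 12, 31), date(2000, 1, 1)))     # 1
```

An end-to-end test in the sense described above would exercise computations like these not in isolation but across every interconnected system, with all date clocks set forward, which is why it requires a dedicated test environment and thousands of replicated transactions.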
Under this compressed schedule, instead of having until January 2000, IRS must program and test all tax law changes that are to take effect in the 2000 filing season before September 30, 1999: business requirements are to be delivered to IRS’ Information Systems organization by February 28, 1999; the Information Systems organization is scheduled to complete the application software changes by June 15, 1999; and testing of these application software changes is to be completed by September 30, 1999.

Staggered Milestones Developed for Completing IRS’ Contingency Plans

In 1999, IRS is to complete the development of 36 contingency plans that IRS determined are needed to address various Year 2000 failure scenarios for its core business processes. IRS’ initial goal was to have these plans completed by December 1998; however, IRS’ revised goal is to complete 18 submissions processing contingency plans, 2 customer service contingency plans, and 3 key support services plans by no later than March 31, 1999. One key support services contingency plan and 12 compliance contingency plans are to be completed by May 31, 1999. In June 1998, we reported that IRS’ Year 2000 contingency planning efforts fell short of meeting the guidelines included in our Year 2000 Business Continuity and Contingency Planning guide. Accordingly, we recommended that the Commissioner of Internal Revenue take a series of steps to broaden IRS’ contingency planning effort to help ensure that IRS adequately assesses the vulnerabilities of its core business processes to potential Year 2000 induced system failures.
Specifically, we recommended that the Commissioner take the following steps: (1) solicit the input of business functional areas to identify core business processes and identify those processes that must continue in the event of a Year 2000 failure; (2) map IRS’ mission-critical systems to those core business processes; (3) determine the impact of information systems failures on each core business process; (4) assess existing contingency plans for their applicability to potential Year 2000 failures; and (5) develop and test contingency plans for core business processes if existing plans are not appropriate. Since we issued our report, IRS has been taking actions to address our recommendations. IRS has solicited the input of its business officials and established working groups to identify failure scenarios and to develop the contingency plans. The working groups determined IRS should develop 36 contingency plans that cover various aspects of its core business areas of submissions processing, customer service, compliance, and key support services. One factor influencing the staggered schedule for completing contingency plans was that the staff assigned to develop plans have competing responsibilities, such as the development of business requirements to implement tax law changes as well as other business improvement initiatives. Under the staggered schedule, with the exception of the key support services area, earlier completion milestones were established for those aspects of the three other core business areas that, according to IRS officials, were likely to experience a Year 2000 failure before the other areas. To the extent that the plans require additional actions, such as those associated with testing or preparatory activities, these delays reduce the time available to complete these activities. According to IRS officials, the completion milestones of March and May 1999 reflect when the technical work for the plans is to be completed.
Once that work is completed, the plans are to be approved by the official responsible for the core business process and tested. According to IRS officials, a contractor is still developing the testing approach. As a result, these officials could not provide us with the completion milestones and staff requirements for testing the contingency plans.

Other Business Initiatives Are Creating Competing Demands on Certain Staff Needed for Year 2000 Efforts

In addition to Year 2000 efforts, IRS has other ongoing business initiatives that are placing competing demands on its information systems and business staff. The Commissioner’s Executive Steering Committee (ESC) and IRS’ risk mitigation efforts have provided a forum for addressing these issues. Concurrent with its Year 2000 efforts, IRS is continuing to make changes to its information systems to accommodate changes resulting from various business initiatives. These initiatives include the SCMC project that we discussed previously, implementation of the IRS Restructuring and Reform Act provisions, and various taxpayer service initiatives. While we do not question the importance of these initiatives, as we have said before, the need to make a significant number of tax law changes for the 2000 filing season introduces an additional risk, albeit one that we could not quantify, to IRS’ Year 2000 effort. In November 1997, the Commissioner established the ESC to identify risks to the 1999 filing season and the entire Year 2000 effort and to take actions to mitigate those risks. In addition, IRS hired a contractor to conduct periodic risk assessments. The contractor’s most recent report was issued in December 1998.
Recent ESC documents, the contractor’s December 1998 risk assessment report, and our interviews with SCMC officials have identified the following examples of competing demands on staff in IRS’ Information Systems organization and business organizations:
- Documents prepared for the September 1998 ESC meeting stated that IRS’ Information Systems organization that is responsible for systems software issues was “overextended” because of Year 2000 demands, SCMC, and support for the Year 2000 end-to-end test.
- The contractor’s December 1998 risk assessment report indicated that some of IRS’ core business area staff face competing demands from the need to (1) identify business requirements for the 2000 filing season and (2) complete Year 2000 contingency plans. As we said previously, IRS’ goal is to have business requirements completed by the end of February.
- According to the minutes from the January 1999 ESC meeting, IRS’ Internal Audit has also raised a concern about the availability of sufficient staff to support the Year 2000 end-to-end test given the other Year 2000 demands. According to IRS officials, Internal Audit has not released a formal report on this matter.
- IRS’ draft paper on the SCMC schedule options states that one of the risks for each of the schedule options is the resource drain on IRS staff and contractors from the filing season, the Year 2000 end-to-end test, and critical staff being used to train any new SCMC staff. The draft option paper notes that the extent of the drain varies somewhat depending on how many service centers are to have their tax processing activities moved to the computing centers in 1999.

Over the last several months, IRS has taken various actions to address these competing demands. For example: To address the “overextension” of the Information Systems organization that is responsible for systems software, the Chief of that organization said that he obtained contractor support and transferred staff from other areas.
He said the additional staff, coupled with the delays in moving the tax processing activities of the service centers to the computing centers, helped alleviate this overextension. To address the competing demands on the business staff to develop Year 2000 contingency plans and finalize business requirements for the 2000 filing season, IRS officials decided to stagger the completion milestones for contingency plans. To help prioritize the work within the Information Systems organization, IRS officials told us they have established another executive steering committee. In addition, the minutes from the January 1999 ESC meeting said that the Commissioner has asked the cognizant staff to identify the source of each of the 2000 filing season requirements (i.e., IRS Restructuring and Reform Act, Taxpayer Service Improvement Initiative, etc.). This identification is the first step in providing the additional information that would be useful for establishing priorities for IRS’ Information Systems staff.

Concluding Observations

Since our testimony in May 1998, IRS has made considerable progress in completing its Year 2000 work. However, IRS did not complete all the work that it had planned to do by January 1999. This unfinished work and upcoming critical tasks are to be completed in the remainder of 1999. At the same time IRS is addressing its Year 2000 challenge, it is undertaking other important business initiatives, such as preparing for the 2000 filing season and implementing SCMC. These various initiatives place competing demands on IRS’ business and Information Systems staff. To date, IRS has taken actions to address these competing demands, including delaying the completion milestones for some Year 2000 activities. In the next 5 months, IRS will pass several key milestones. As IRS passes each one, it will have more information on the status of its Year 2000 effort and the amount of remaining work.
This information should help IRS and Congress assess the level of risk to IRS’ core business processes in 2000. For example:
- By the end of February 1999, the business organizations are to submit their requirements to IRS’ Information Systems organization for the 2000 filing season. In the event that business requirements for the 2000 filing season are not submitted on time, IRS increases the risk that some tax law changes may not be thoroughly tested before they are implemented.
- From April to July 1999, IRS is to conduct its Year 2000 end-to-end test. The results of this test will be an indicator of the extent to which, for the work completed thus far, IRS has been successful in making its systems Year 2000 compliant. The results of this test should also provide information on how many Information Systems staff will be needed for correcting any problems that are identified.
- By the end of May 1999, IRS is to complete its contingency plans. These plans should provide information on any additional steps needed to implement the plans.

We plan to continue to monitor IRS’ progress in meeting these key milestones. Mr. Chairman, this concludes my prepared statement. I welcome any questions that you may have.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO discussed the Internal Revenue Service's (IRS) year 2000 efforts and the remaining challenges IRS faces in making its information systems year 2000 compliant, focusing on: (1) the extent to which IRS monitors the year 2000 status of its mission-critical systems in their entirety; (2) whether IRS met the January 31, 1999, completion goal for the areas that it monitors--application software, systems software, hardware, and telecommunications networks; (3) the status of two remaining, critical year 2000 tasks--conducting year 2000 testing and completing 36 contingency plans; and (4) the fact that other business initiatives are creating competing demands on staff needed for year 2000 efforts. GAO noted that: (1) a complete picture cannot be provided of the year 2000 status of IRS' 133 mission-critical systems because IRS does not report year 2000 status for these systems in their entirety; (2) instead, IRS monitors the year 2000 status of the components of an information system, such as the application software, systems software, and hardware, for each of its three types of computers--mainframes, minicomputers/file servers, and personal computers; (3) IRS reports that it met the January 31, 1999, completion goal for some of the areas that it monitors but not for others; (4) IRS reports that it met the January 1999 completion goal for: (a) correcting application software; (b) upgrading telecommunications networks; and (c) fully implementing one of its two major system replacement projects; (5) despite significant progress since GAO's testimony in May, IRS did not meet the goal for: (a) upgrading systems software and hardware for its three types of computers; and (b) fully implementing the other major system replacement project; (6) as a result of not meeting the goal for upgrading systems software and hardware, some changes will not be tested until late in 1999, reducing the time available to make corrections before January 2000; 
(7) for the replacement project, some service center staffs will have no experience before 2000 using the new system to process peak filing season volumes of remittances; (8) IRS must conduct the year 2000 end-to-end testing of its mission-critical systems; (9) testing is to begin in April 1999; (10) the second critical task is to develop 36 contingency plans that IRS has determined are needed to address various failure scenarios for its core business processes; (11) IRS is developing these plans in response to GAO's June 1998 report; (12) IRS has delayed the completion dates so that the first set of plans is to be completed by March 31, 1999, and the second set by May 31, 1999; (13) as IRS continues its year 2000 efforts, it will face the challenge of how to address the competing demands on its staff; (14) these competing demands are created by IRS' other major business initiatives, such as implementing tax law changes and completing the non-year 2000 portions of one of IRS' major system replacement projects; and (15) to address these competing demands, in the past several months, IRS has: (a) transferred staff from other areas; (b) hired additional staff; and (c) delayed some activities.
Background

The Pollution Prevention Act of 1990 established the national policy that pollution prevention, as opposed to pollution control, is the preferred method of addressing the nation’s pollution problems. The act also specified that reduction of pollution at its source (source reduction) is the preferred method to prevent pollution and should be used whenever possible. Source reduction includes modifying equipment, technology, processes, or procedures; reformulating or redesigning products; substituting raw materials; and improving operations and maintenance. EPA generally delegates responsibility for the day-to-day implementation of environmental programs to state agencies that perform a variety of regulatory functions. Examples of regulatory functions include issuing permits to limit facilities’ emissions, conducting inspections, and taking enforcement actions against violators. States may also provide nonregulatory technical assistance, public education, and outreach activities to industry. Because many of the nation’s environmental statutes are medium-specific, state environmental agencies and EPA have traditionally been organized around separate medium-specific program offices. State program offices receive federal grants under environmental laws, such as the Clean Air Act, the Clean Water Act, and the Resource Conservation and Recovery Act. Each state program office has traditionally conducted its own regulatory activities and reported them to EPA. Program offices within a state may have had little contact with each other. For example, an air inspector may not know whether a facility is complying with hazardous waste or water regulations or what impact a required remedial action is likely to have on releases to other media. Most of EPA’s funding for state environmental programs has also traditionally been medium-specific, although in recent years EPA has provided some funding for the states’ multimedia activities.
Managing the states’ regulatory functions to cut across medium-specific program lines is a recent phenomenon. In 1991, the New Jersey legislature directed that state’s Department of Environmental Protection to conduct a pilot project. In 1992, the New York Department of Environmental Conservation began integrating its environmental programs by using a facility-management approach, under which the agency assigned a team and a “facility manager” employed by the state to coordinate environmental programs at targeted facilities. The Massachusetts Department of Environmental Protection began testing multimedia pollution prevention inspections in 1989 and in 1993 adopted the approach statewide. Other states, such as Oregon, Washington, and Wisconsin, have taken steps to integrate their regulatory activities, but their efforts have been either very recent or limited in scope. Massachusetts, New York, and New Jersey have moved toward integrating their regulatory activities to promote the use of pollution prevention strategies, particularly source reduction, rather than strategies that rely on pollution control. Pollution control methods include installing devices that treat waste after it has been produced. The three states have also sought to address problems arising from fragmented, medium-specific approaches to environmental management, such as pollution shifting, whereby equipment intended to control pollution in one medium merely transfers pollutants to another medium rather than reducing or eliminating them at the source.

Each State Has Chosen a Different Approach to Integration

Each of the three states has taken a different approach to integrating its regulatory activities (see table 1). Massachusetts conducts multimedia, facilitywide inspections instead of numerous medium-specific inspections. The state also coordinates its enforcement activities to address violations in all media.
New York coordinates the activities of its separate medium-specific environmental programs and targets its efforts at the firms generating most of the state’s toxic discharges. New Jersey is testing the use of facilitywide permits, which would replace a facility’s medium-specific permits with a single permit governing the facility’s releases to all media. Although the three states have taken different approaches to integrating regulatory activities, each state looks at whole facilities and their production processes to identify opportunities to prevent pollution. All three states plan to evaluate the environmental outcomes of integrating environmental management. Although they have just begun to develop evaluation plans, the data needed to fully evaluate their initiatives will not be available for some time. On the basis of their experiences thus far, officials in Massachusetts and New York generally consider the integrated approaches in their states to be successful, while New Jersey officials believe that it is too early to predict the success of that state’s permitting test.

Massachusetts Uses Facilitywide Inspection and Enforcement

In contrast to the medium-specific inspections most states use to assess whether a facility’s releases to a specific medium comply with state and EPA regulations, Massachusetts has developed a multimedia approach that incorporates inspections for all media into a single, facilitywide inspection that focuses on a facility’s production processes. Inspectors follow the flow of materials used in the production processes and the inputs and outputs of each process. At each step of a process, an inspector identifies areas of regulatory concern and opportunities to prevent pollution. EPA Region I helped Massachusetts develop the single, unified inspection procedure used by that state’s inspectors. Massachusetts began testing its facilitywide approach to inspection and enforcement in 1989 and then implemented it statewide in 1993.
The Massachusetts Department of Environmental Protection annually conducts about 1,000 inspections at the approximately 20,000 facilities in the state that are subject to facilitywide inspections. To support this approach, the Department reorganized its Bureau of Waste Prevention, which had been organized with separate air, waste, and water sections, each of which had performed its own compliance, enforcement, and permitting activities. In field offices, these sections were replaced by a combined section for compliance and enforcement and a separate section for permits. The Bureau did not eliminate medium-specific units in the central office because the medium-specific nature of federal environmental statutes necessitated some corresponding organization. Instead, the Department established an Office of Program Integration to coordinate these medium-specific units and foster pollution prevention. Massachusetts’s enforcement actions encompass violations in all media and encourage violators to use source reduction techniques to achieve compliance. When notifying a facility of any violation, the state encourages the facility to implement any specific opportunities for source reduction that the state inspector has identified and informs the facility that the state’s Office of Technical Assistance can assist in identifying and pursuing additional opportunities for source reduction. The state also forwards a copy of all enforcement documents to the Office of Technical Assistance, which in turn contacts the facility to offer free, confidential assistance. When serious violations and large penalties are involved, the state may negotiate agreements requiring facilities to undertake pollution prevention measures in exchange for reduced penalties. In addition to its multimedia inspections, Massachusetts recently tested facilitywide permits that incorporate pollution prevention by combining the various permits issued to a facility for each medium into a single permit. 
According to a state official, Massachusetts ended this test because of a lack of participation by the business community, which apparently believed that a permit process with a pollution prevention component would be more complicated than the existing permit process, which was focused exclusively on pollution control.

Although Program Is Not Yet Fully Evaluated, Massachusetts Believes It Is a Success

Under a fiscal year 1995 multimedia demonstration grant from EPA, Massachusetts is required to evaluate the results of its integrated management efforts. After fiscal year 1995, Massachusetts plans to develop and test a number of “environmental-yield” indicators, such as the number of unregistered waste streams discovered and waste streams eliminated as well as the amount of emissions reduced. During the next few years, the state plans to assess the effectiveness of its integrated program by measuring the extent to which pollution has been reduced at its source. Massachusetts officials believe that the implementation of the state’s facilitywide inspection approach has improved the state’s enforcement program. They reported that facilitywide inspections have successfully found sources of pollution that had not been registered or permitted, promoted pollution prevention, and encouraged companies to seek technical assistance from the state. According to state officials, facilitywide inspections have streamlined the regulatory process by replacing numerous single-medium inspections with one multimedia inspection at most facilities. However, the transition from medium-specific to facilitywide inspections has been challenging. It has required inspectors, previously knowledgeable about a single environmental statute, to become familiar with multiple statutes, techniques to prevent pollution, and industry’s manufacturing processes.
According to state officials, inspectors have found it difficult to keep abreast of regulations in numerous environmental programs, as well as the latest strategies to prevent pollution. As a result, some inspectors are concerned that they may overlook compliance problems outside their area of expertise. In a 1994 report on the state’s enforcement program, EPA praised the program’s emphasis on pollution prevention but questioned whether inspectors were focusing on pollution prevention to the detriment of taking enforcement actions. The report noted, however, that the state had begun an enforcement training course that stressed the importance of stronger enforcement actions. In addition, Massachusetts has found that facilitywide inspections are unworkable at the state’s largest, most complex facilities, which constitute 5 percent of the firms it inspects. According to a state official, the state uses a single-medium approach at these facilities because facilitywide inspections take too long, require too many inspectors, and demand too much expertise.

New York Uses a Facility-Management Approach

New York is pursuing integrated environmental management by coordinating its medium-specific activities. In 1992, New York started to target its regulatory activities at the approximately 400 facilities that produced about 95 percent of the state’s toxic discharges. The state still performs single-medium program activities, such as inspections and permitting, but state officials coordinate these activities to provide an integrated approach at targeted plants. To coordinate activities at each of these plants, New York has assigned employees of its Department of Environmental Conservation as facility managers at 94 plants. According to a state official, however, it will likely take more than the originally planned 10 years before New York will be able to assign a facility manager to each of its 400 targeted facilities.
The facility manager serves as the primary point of contact between the state and a plant. Working with a team of inspectors and other technical staff, the facility manager plans and oversees inspections, enforcement, and other regulatory activities at the facility. For example, the facility manager guides team members in developing a profile of the facility that includes permit data, compliance history, and other information chronicling the plant’s emission and waste-handling practices. In doing so, the facility manager can assess what is needed to enhance the facility’s efforts to prevent pollution. Because developing the expertise needed to perform multimedia inspections is difficult, New York requires its inspectors to perform only medium-specific inspections. However, the facility managers coordinate these inspections to provide an integrated inspection approach. As the liaison between the state and the facility, each facility manager must work closely with company officials. One facility manager pointed out that an advantage of this relationship is that the facility manager can sometimes convince the company to implement pollution prevention strategies without enforcement actions. New York uses enforcement actions as an opportunity to require a company to undertake projects to prevent multimedia pollution. For example, after identifying environmental violations by a chemical manufacturer, the state negotiated a multimedia consent order requiring the manufacturer to adopt air, water, and other compliance measures and to fund an employee from the Department of Environmental Conservation to assist the facility manager by serving as a full-time monitor at that facility. A consent order at another facility required the company to fund a monitor and develop a chemical-specific pollution prevention program with specified reduction goals. 
New York also allows companies to reduce their penalties for violating environmental laws by performing actions that provide environmental benefits, such as contributing to emergency preparedness programs for toxic spills. In addition to coordinating inspection and enforcement activities, New York plans to test the use of integrated permits at 3 or 4 of the 400 targeted facilities. Initial testing has begun at one facility.

Although Challenges Remain, New York Believes Program Is a Success

According to New York officials, the state’s facility-management approach has improved the efficiency of its regulatory activities while simplifying the facilities’ compliance activities. New York’s approach operates more efficiently because each facility manager coordinates all of the state’s regulatory activities and the various inspectors approach each facility as a team. One facility manager said New York’s approach has been effective in bringing problem facilities into compliance more rapidly because the facility manager is able to focus on problems in all media at one time. State officials report that industry has benefited from having a single point of contact with the state to coordinate the state’s inspection visits. Although the facility-management approach is labor-intensive and challenging for the facility managers—who must develop expertise in a wide range of federal and state laws, industry processes, and techniques to prevent pollution—the difficulty in obtaining detailed knowledge about each environmental program is mitigated by the presence of single-medium inspectors on each facility’s inspection team. As part of a departmentwide review, New York plans to develop performance measures to evaluate its program. These measures will assess the amount of pollution prevented and the impact of environmental programs on the state’s natural resources. New York officials have not yet established milestones for performing this evaluation.
New Jersey Is Testing Facilitywide Permits

New Jersey is testing the use of a single, integrated permit for industrial facilities, an approach that departs from the existing practice of issuing permits to industrial facilities on a medium-specific basis. Under the existing practice, a facility may have dozens of medium-specific permits that regulate environmental releases through “end-of-the-pipe” treatment. Depending on the medium-specific program, permits may state what pollutants may be discharged, prescribe technology-based discharge limits, or contain other requirements. In 1991, the New Jersey legislature passed a Pollution Prevention Act that directed the state’s Department of Environmental Protection to test the use of facilitywide permits at industrial facilities. The test is intended to identify ways to streamline and integrate medium-specific requirements, incorporate pollution prevention into the permitting process, and improve the overall administrative efficiency of permitting by consolidating all of a facility’s environmental permits for air, water, and solid and hazardous waste into a single, facilitywide permit. This permit incorporates a pollution prevention plan that examines all of a facility’s production processes and identifies those that use or generate hazardous substances regulated under New Jersey’s Pollution Prevention Act. Thus, the permit encourages the facility to consider those substances for elimination or reduction. In the past, industry criticized permits approving a facility’s production processes and equipment, particularly air permits, because they hampered the facility’s efforts to respond quickly to changing market conditions. Facilities that wished to make even minor changes to a process often had to go through lengthy preapproval procedures.
As an incentive to participate in its permitting pilot, New Jersey allows facilities with facilitywide permits to change processes without preapproval, as long as the changes will not increase releases of hazardous substances or increase the generation of waste. Companies that take advantage of this operating flexibility are required to expand the number of pollutants that come under their plans to prevent pollution. New Jersey’s facilitywide permit requires facilities to at least meet existing emission standards. State officials believe that requiring facilities to achieve the lower emission levels identified in their source reduction plans would deter them from identifying opportunities to reduce emissions. State officials expect that facilities will voluntarily undertake additional source reduction projects and reduce their emissions to obtain such benefits as reduced costs for raw materials and waste disposal. New Jersey officials selected 18 facilities from those that volunteered to participate in the test of facilitywide permits. According to state officials, issuing the first permit took 3 years because major changes were made in the state’s permitting process and some participants did not calculate the information on waste generation needed to identify opportunities to prevent pollution. New Jersey issued the first facilitywide permit in December 1994 to a pharmaceutical manufacturer that makes tablets, ointments, creams, and inhalation products for asthmatics. As of December 1995, two additional permits had been issued.

New Jersey Believes It Is Too Early to Evaluate the Program’s Success

Because New Jersey has issued only a few facilitywide permits, state officials believe that it is too early to evaluate the program’s success or predict whether this permitting approach should be used more extensively.
Nonetheless, New Jersey officials have already found that some facilities lack key technical data about the amount of waste generated, such as accurate data on baseline emissions for a whole facility. New Jersey’s legislature has directed the state’s Department of Environmental Protection to report by March 1, 1996, on the results of the test and include recommendations as to whether the state should expand the use of facilitywide permits.

Industry’s Views

To obtain industry’s views on integrated approaches, we interviewed officials representing six firms that had participated in the integrated initiatives in the three states. These officials generally believed that their state’s integrated approach was beneficial to the environment while increasing regulatory efficiencies and reducing costs to industry. Company representatives at two small facilities in Massachusetts reported that the facilitywide inspections, coupled with the state’s technical assistance, contributed to source reduction at their facilities. For example, according to an official from a Massachusetts electroplating company, the awareness of preventing pollution that was gained from the state’s facilitywide inspections and technical assistance has convinced the company of the value of reducing pollution at its source. The company anticipates that replacing a hazardous chemical with a nonhazardous one will allow it to pay lower annual compliance fees as a small- rather than large-quantity generator of hazardous waste. According to a representative of a New York manufacturer, its facility manager has been able to expedite changes in the company’s production processes. For example, in less than a month the facility received approval to substitute ethanol for methanol, a change that eliminated the need for at least 30 air permits. According to this representative, the approval process ordinarily would have taken 8 or more months.
Representatives of a New Jersey pharmaceutical manufacturer, the first company in that state to obtain a facilitywide permit, stated that this facility has eliminated one hazardous substance and substantially reduced the use of two others. The company eliminated 1,1,1-trichloroethane, an ozone-depleting substance, in its label-making process by changing to an aqueous-based process that uses no hazardous substances. The facility also developed a recycling program to recover Freon, an ozone-depleting substance, from its production of inhalers. Representatives of this firm also thought that the facilitywide permit had simplified their company’s compliance activities. For example, a new 5-year permit combines 70 air and water permits, as well as approvals of hazardous waste storage, into a single permit that eliminates the need for the company to frequently renew multiple permits. The company’s facilitywide permit consolidates a 3-drawer horizontal file cabinet filled with permits into one 4-inch binder (see fig. 1). The company also enjoys greater operating flexibility under New Jersey’s air regulations, which allow holders of facilitywide permits to change production processes without a lengthy preapproval process if the change does not increase hazardous emissions to air or discharges to water. According to representatives of the pharmaceutical manufacturer, the company spent $1.5 million in capital and labor resources to develop the permit but anticipates annual cost savings of $300,000 from reduced costs for waste disposal and raw materials. The company also anticipates substantial reductions in administrative costs because it will no longer have to frequently replace numerous individual permits. Officials at other facilities, however, were less positive about their state’s integrated approach.
For example, while supportive of New York’s integrated approach, an official of a company in that state thought that the competitive marketplace, rather than the government, prompted industrial involvement in preventing pollution. Similarly, an official from a Massachusetts company stated that an interest in economic efficiency drove the company’s interest in reducing waste.

EPA’s Funding and Reporting Systems Present Problems for States With Multimedia Initiatives

According to officials from Massachusetts, New York, and New Jersey, while EPA has provided funding for their multimedia pollution prevention activities, reaching agreements with EPA to fund such activities has required extensive negotiations. Obtaining funds for Massachusetts also required EPA’s approval as well as congressional authorization to reprogram funds from other activities. New Jersey and EPA officials have discussed ways to incorporate that state’s multimedia activities into EPA’s medium-specific grant system, but they have not fully resolved the issue. Officials in all three states concurred that even though EPA’s grant system has some flexibility, having to petition the agency to obtain funds may discourage some states from considering multimedia initiatives.

Medium-Specific Program Grants Do Not Readily Fit Multimedia Activities

EPA has provided grants to each of the three states to support their multimedia pollution prevention activities. Massachusetts received a $288,000 grant in fiscal year 1990 for its facilitywide inspection pilot; New York received a $222,276 grant in fiscal year 1993 to conduct outreach and technical assistance projects; and New Jersey received a $207,000 grant in fiscal year 1993 to assist the state with its permitting pilot. However, all three states subsequently found that continued funding for multimedia activities was not easily obtained under the current federal medium-specific grant programs.
For each medium-specific grant program, the states use EPA’s guidance to prepare annual plans detailing the activities they intend to perform in the coming fiscal year. Once EPA approves a state’s plan, it allocates funding on the basis of the planned activities. In fiscal years 1993 and 1994, Massachusetts and New Jersey requested that EPA provide additional credit for work performed under these medium-specific programs for their facilitywide inspection and permit programs. The two states asked that EPA, in calculating their allocation, give them extra credit for multimedia activities because these activities encompass all media programs, require additional staff training and guidance, and contain an additional component to prevent pollution. After extensive negotiations, Massachusetts and EPA signed agreements attached to medium-specific grants for fiscal years 1993 and 1994. These agreements allowed the state to conduct facilitywide inspections and to support its multimedia activities by using the funds allocated for compliance and enforcement activities under its existing medium-specific grants. Because of the potential benefits from multimedia activities and the difficulty of funding them through medium-specific grants, EPA awarded Massachusetts a $1 million grant in fiscal year 1995 to demonstrate multimedia activities. This grant was made with funds that would have otherwise been awarded through medium-specific grants, and no new funds were granted. According to EPA and state officials, although the grant was intended to alleviate their concerns about using medium-specific funding for multimedia activities, it does not permanently resolve the problem of funding for multimedia activities because it can be renewed for only 2 years. EPA and New Jersey officials have extensively discussed ways to fund that state’s facilitywide permit activities through medium-specific grants. As of September 1995, New Jersey and EPA have not fully resolved this issue. 
New York asked EPA for a special allocation from its medium-specific grants to support the state’s pollution prevention unit because if the unit’s duties were part of a medium-specific program they would be eligible for EPA’s support. New York also noted that its multimedia program represents a new way of doing business because its focus is on preventing pollution at the state’s largest dischargers. After extensive negotiations, EPA agreed to allow New York to fund the multimedia activities of the pollution prevention unit with funding for medium-specific activities. New York’s pollution prevention unit incurred costs of $838,000 in fiscal year 1994 and operated under a comparable agreement in fiscal year 1995. Officials in all three states noted that having to extensively negotiate with EPA to obtain funds for an integrated approach may discourage other states from adopting multimedia initiatives.

Reporting Results From Multimedia Inspections to EPA’s Medium-Specific Reporting Systems Is Difficult

In addition to the problems with obtaining funds for multimedia activities, Massachusetts has encountered problems in reporting its multimedia activities to EPA, as required under various federal environmental statutes. For example, while Massachusetts conducts facilitywide inspections and prepares comprehensive reports detailing the results from multimedia inspections, EPA requires the state to report the results to multiple medium-specific reporting systems, each of which has different formats, definitions, and reporting cycles. According to a Massachusetts official, preparing these duplicative reports is both wasteful and demoralizing to staff.

Recent EPA Initiatives Address State Multimedia Activities

A grant program EPA recently proposed may provide states with easier access to multimedia funding and promote the reporting of their integrated facilities management activities.
As part of EPA’s fiscal year 1996 budget request, the President proposed that the Congress give EPA’s Administrator the authority to allow states to consolidate numerous medium-specific grants into a new “Performance Partnership” grant program. These grants would allow states to allocate funds to reflect local priorities while continuing to pursue national policy objectives and fulfilling all federal statutory requirements. The grant program would include new performance measures to simplify reporting requirements while ensuring continued environmental protection. EPA plans to work with state officials to develop performance measures that assess the programs’ environmental impact, instead of using measures that focus only on the number of medium-specific program activities performed. According to officials in Massachusetts, New York, and New Jersey, each state plans to participate in this grant program. EPA is also studying the effectiveness of initiatives to prevent pollution in eight northeastern states, including the multimedia efforts in Massachusetts, New York, and New Jersey. The study, which EPA planned to complete by December 1995, will compile data on the experiences of industrial facilities with government activities on how to prevent pollution. In addition, EPA plans to conduct a national study of pollution prevention effectiveness in 1996.

Conclusions

Although the three states have not yet fully assessed the effectiveness of integrating environmental management, this approach shows potential for reducing pollution and increasing regulatory efficiency. Officials representing Massachusetts and New York, the states having the most experience with integrated approaches, generally report improvements in promoting pollution prevention and achieving regulatory efficiencies. Industry representatives also reported positive results from using this approach. Nonetheless, drawbacks exist.
For example, performing integrated inspections and promoting pollution prevention requires inspectors to have additional expertise. Each of the three states has found it difficult to fund its multimedia activities through EPA’s grants for medium-specific programs. While EPA has worked with these states to resolve the funding problems, the extensive negotiations that were required could discourage other states from adopting multimedia initiatives. In addition, Massachusetts had problems reporting multimedia activities under medium-specific reporting systems. A new grant program recently proposed by EPA has the potential to facilitate the multimedia funding and reporting process for the three states. If successful, this grant program may resolve funding and reporting issues for those other states that are interested in using an integrated environmental management approach in their regulatory activities.

Agency Comments

We provided copies of a draft of this report for review and comment to EPA, the Massachusetts Department of Environmental Protection, the New York Department of Environmental Conservation, and the New Jersey Department of Environmental Protection. On December 8, 1995, we met with EPA officials, including the Director of the Pollution Prevention Policy Staff, who generally agreed with the report’s findings. The officials stated that the funding and reporting problems noted in the report are, at least in part, the result of (1) medium-specific statutes and appropriations and (2) the medium-specific accountability processes associated with them. On December 5, 1995, we met with Massachusetts and New York state officials, including the Director of the Office of Program Integration of the Massachusetts Department of Environmental Protection and the Chief of the Bureau of Pollution Prevention of the New York State Department of Environmental Conservation.
On December 8, 1995, we met with New Jersey officials, including the Director of the Office of Pollution Prevention of the New Jersey Department of Environmental Protection. These state officials agreed with the report’s facts and findings and suggested some technical corrections, which we have incorporated into the report as appropriate.

Scope and Methodology

We performed our work at the Massachusetts Department of Environmental Protection, the New York Department of Environmental Conservation, and the New Jersey Department of Environmental Protection. According to EPA, these states are among the leaders in adopting integrated approaches to regulatory activities. We contacted six companies that had significant experience with their state’s integrated efforts—three in Massachusetts, two in New York, and one in New Jersey. We also performed work at EPA’s headquarters in Washington, D.C., and at the agency’s regional offices in Boston and New York City, the EPA offices that cover the states we visited. We performed our work in accordance with generally accepted government auditing standards from May 1995 through December 1995. As arranged with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution until 10 days after the date of this letter. At that time, we will send copies of the report to other appropriate congressional committees and the Administrator of EPA. We will also make copies available to others upon request. Please call me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix I.

Major Contributors to This Report

Resources, Community, and Economic Development Division, Washington, D.C.
Lawrence J. Dyckman, Associate Director
Ed Kratzer
James S. Jorritsma
Bruce Skud
Janet G. Boswell
Pursuant to a congressional request, GAO reviewed: (1) the environmental management approaches used in Massachusetts, New York, and New Jersey; (2) state and industry experiences with these integrated management approaches; and (3) the Environmental Protection Agency's (EPA) role in state efforts to reduce pollution. GAO found that: (1) Massachusetts has adopted a single, integrated inspection approach to assess facilities' compliance with environmental statutes; (2) New York is using a facility-management strategy to coordinate medium-specific environmental programs; (3) New Jersey is testing the use of single, integrated permits for industrial facilities, rather than issuing separate permits for pollution releases; (4) although Massachusetts and New York intend to implement their integrated approaches statewide, New Jersey believes that it is too early to evaluate the success of its pilot program; (5) state industry officials believe these integrated management approaches are beneficial to the environment, achieve regulatory efficiencies, and reduce costs; and (6) EPA has proposed a new grant program that will help states gain easier access to funding for multimedia programs, as well as ease the reporting of multimedia activities.
Scope and Methodology

We conducted our audit work from August 2000 through June 2001 in accordance with generally accepted U.S. government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. We briefed DOD managers, including officials in DOD’s Purchase Card Program Management Office and the Defense Finance and Accounting Service (DFAS), and Navy managers, including Navy Supply Command, Navy Comptroller, SPAWAR San Diego, and Navy Public Works Center San Diego officials on the details of our review, including our objectives, scope, and methodology and our findings and conclusions. We referred instances of potentially fraudulent transactions that we identified during our work to our Office of Special Investigations for further investigation. Our work was not designed to identify, and therefore we did not determine, the extent of fraudulent, illegal, or abusive transactions. Our control tests were based on stratified random probability samples of 135 SPAWAR San Diego purchase card transactions and 121 Navy Public Works Center San Diego transactions. Further details on our objectives, scope, and methodology are included in appendix III.

Weak Purchase Card Environment Contributed to Ineffective Controls

“Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards. It provides discipline and structure as well as the climate which influences the quality of internal control.”

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

A weak internal control environment at SPAWAR San Diego and the Navy Public Works Center San Diego contributed to internal control weaknesses, fraud, and abuse.
The importance of the “tone at the top” or the role of management in establishing a positive internal control environment cannot be overstated. GAO’s internal control standards go on to state that “management plays a key role in demonstrating and maintaining an organization’s integrity and ethical values, especially in setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate.” The specific factors that contributed to the lack of a positive control environment at these two units included a proliferation of purchase cardholders, ineffective training of cardholders and certifying officers, ineffective rebate management, and a lack of monitoring and oversight.

Proliferation of Cardholders Resulting in Unreasonable Span of Control

SPAWAR San Diego and the Navy Public Works Center San Diego did not have specific policies governing the number of cards issued or establishing criteria for identifying employees eligible for the privilege of cardholder status. At both units, cards were given out on the basis of a request from an individual employee’s supervisor. The request was then forwarded to the unit’s purchase card agency program coordinator, who approved the request and began the process for obtaining a new card from Citibank. According to SPAWAR and Navy Public Works Center officials, specific criteria did not exist for either the supervisors or the program coordinators to use in requesting and approving purchase cards for employees. The absence of such criteria has resulted in a proliferation of purchase cards at the two units. For example, as of September 30, 2000, about one in three, or 36 percent, of SPAWAR San Diego employees and about one in six, or 16 percent, of Navy Public Works Center San Diego employees were cardholders.
As a result, at the end of fiscal year 2000, about 1,526 SPAWAR San Diego employees and 254 Navy Public Works Center San Diego employees were authorized to procure goods and services. Within this weak control environment, these two Navy units had given purchase cards to over 1,700 employees, most of whom had credit limits of $20,000 or more and the authority to make their own purchase decisions. Table 1 shows the proliferation of cardholders and the percentage of employees at SPAWAR and the Navy Public Works Center in San Diego that were cardholders as of September 30, 2000. Most of SPAWAR’s 1,526 cardholders had a $25,000 credit limit and most of the Navy Public Works Center’s 254 cardholders had a $20,000 credit limit. Information we obtained from six large defense contractors on their purchase card programs showed that the percent of the contractors’ employees that were cardholders ranged from about 2 percent to nearly 4 percent—significantly less than at SPAWAR and the Navy Public Works Center in San Diego. The proliferation of cardholders, particularly at SPAWAR San Diego, created a situation where it was virtually impossible to maintain a positive control environment. For example, at SPAWAR, a significant span of control issue existed, with one approving official responsible for certifying monthly summary billing statements covering an average of over 700 monthly purchase card statements relating to 1,526 purchase cardholders. At the Navy Public Works Center San Diego, the span of control problem was not as serious, with six approving officials responsible for certifying monthly summary statements covering an average of 55 monthly statements for 254 cardholders. The span of control issue is particularly important for purchase cards because supervisors and, in some cases, cardholders themselves, are responsible for authorizing purchases, rather than an independent contracting officer as is the case under the standard procurement process. 
Thus, the approving official serves as a key control in certifying cardholder purchases.

Lack of Documented Evidence of Training for Cardholders and Approving Officials

The lack of documented evidence of purchase card training also contributed to a weak internal control environment at SPAWAR and the Navy Public Works Center in San Diego. GAO’s internal control standards emphasize that effective management of an organization’s workforce—its human capital—is essential to achieving results and is an important part of internal control. Training is key to ensuring that the workforce has the skills necessary to achieve organizational goals. In accordance with Navy Supply Command (NAVSUP) Instruction 4200.94, all cardholders and approving officials must receive purchase card training. Specifically, NAVSUP Instruction 4200.94 requires that prior to the issuance of a purchase card, all prospective cardholders and approving officials must receive training regarding both Navy policies and procedures as well as local internal operating procedures. Once initial training is received, the Instruction requires all cardholders to receive refresher training every 2 years. Although we found the training policies and procedures to be generally adequate, we determined that SPAWAR and the Navy Public Works Center lacked documentation to demonstrate that all cardholders and approving officials had received the required training. Based on our tests of fiscal year 2000 purchase card transactions, we estimate that about 40 percent of the SPAWAR transactions, totaling at least $6.8 million, and 56 percent of the Navy Public Works Center transactions, totaling at least $10.9 million, were made by cardholders for whom there was no documented evidence that they had received either the required initial training or refresher training on purchase card policies and procedures.
SPAWAR San Diego management contended that we should accept training provided under the Navy’s previous purchase card program as meeting the training requirements under the new program. Although we determined that the policies and procedures related to cardholder responsibilities were essentially the same under the previous Navy purchase card program, we found that several cardholders had received the prior training as many as 2 years to 6 years before the current program began. Therefore, these cardholders had not received the required biennial refresher training. The Navy Public Works Center San Diego had no documented evidence that its cardholders had received any purchase card training prior to March 2000. We also found no documented evidence that two of six Navy Public Works Center approving officials had received training on purchase card policies and procedures prior to assuming certifying officer responsibilities. SPAWAR’s one approving official had received all required training.

Purchase Card Rebates Not Effectively Managed

We found ineffective management of purchase card rebates by the Navy, SPAWAR San Diego, and the Navy Public Works Center San Diego. The Navy requested that Citibank defer payment of all of the purchase card rebates it earned since the current purchase card program began in November 1998 because, according to DOD and Navy officials, it had not yet determined how to record and allocate the rebates to Navy programs. According to Citibank officials, Citibank plans to pay cumulative purchase card rebates and accrued interest to the Navy on July 31, 2001, the payment date required in the Navy’s latest purchase card contract task order modification. Citibank estimates that the total payment will be about $8.8 million, including an estimated $8.3 million in cumulative rebates and an estimated $530,000 in accrued interest on these rebates.
In addition, the Navy had not established policies and procedures for managing rebates and had not monitored its rebate earnings. As a result, the Navy, SPAWAR San Diego, and the Navy Public Works Center San Diego were not aware that Citibank had miscalculated the rebates that SPAWAR and the Navy Public Works Center should have earned during fiscal year 2000 by about $150,000. Specifically, the rebates due SPAWAR were understated by $136,760, while the Navy Public Works Center’s rebates were overstated by $12,039. Further, SPAWAR and Navy Public Works Center managers were not effectively managing purchase card payments to maximize the amount of rebates earned. We determined that delays in the receipt of monthly purchase card statements had precluded the opportunity for these two units to earn another $242,000 in fiscal year 2000 rebates. We do not know the extent to which these factors have adversely impacted the Navy’s total fiscal year 2000 purchase card rebates.

Program Monitoring and Audit Function Not Effective

SPAWAR and the Navy Public Works Center in San Diego had not established an effective monitoring and internal audit function for the purchase card program. Further, the Navy’s purchase card policies and procedures did not require that the results of internal reviews be documented or that corrective actions be monitored to help ensure that they are effectively implemented. NAVSUP Instruction 4200.94 calls for agency program coordinators to perform semiannual reviews of their units’ purchase card program, including adherence to internal operating procedures, applicable training requirements, micro-purchase procedures, receipt and acceptance procedures, and statement certification and prompt payment procedures. Further, these reviews are to serve as a basis for agency program coordinators to initiate appropriate action to improve the local program or correct problem areas.
However, the Instruction does not require written reports on the results of internal reviews to be submitted to either local management or a central Navy office for monitoring and oversight. As a result, the Navy did not have a consistent process for documenting the results of purchase card reviews, identifying systemic problems, and monitoring corrective actions to help ensure that they are effectively implemented. This weakness also impaired the Navy’s ability to assess purchase card controls for possible inclusion in its Annual Statements of Assurance pursuant to 31 U.S.C. 3512(d) (commonly referred to as the Federal Managers’ Financial Integrity Act of 1982), which requires agency heads to make annual disclosures regarding the adequacy of their internal controls. The Secretary of the Navy’s fiscal year 2000 Annual Statement of Assurance did not disclose any control weaknesses related to the purchase card program. Our analysis of SPAWAR San Diego Agency Program Coordinator fiscal year 2000 reviews showed that these reviews identified problems with about 42 percent of the monthly cardholder statements that were reviewed. The problems identified were consistent with the control weaknesses discussed later in this testimony, including lack of independent documentation that the Navy received items ordered by purchase card, accountable items that were not recorded in the property records, inadequate documentation for transactions, split purchases, and transactions that did not appear to be related to government business purposes. During our review, we saw correspondence and other documentation showing that SPAWAR San Diego management had considered the findings identified in its agency program coordinator evaluations, but directed that corrective actions should not be implemented due to complaints from cardholders and their supervisors regarding the administrative burden associated with procedural changes that would be needed to address the review findings. 
As a result, the agency program coordinator had not used these reviews to make systematic improvements in the program. Rather, these reviews generally resulted in the reviewer counseling the cardholders or, in some instances, recommending that cardholders attend purchase card training. During fiscal year 2000, the SPAWAR San Diego Office of Command Evaluation internal review group had not conducted any reviews or audits of the purchase card program. Further, although the SPAWAR San Diego Command Inspector General reviewed the SPAWAR purchase card program during fiscal year 2000 and prepared a draft report summarizing the results of this review, the final report has not yet been issued. Our review of the draft report determined that the Command IG identified a number of internal control problems that are consistent with our findings, including issues related to receipt and acceptance, training, and split purchases. The Navy Public Works Center San Diego purchase card agency program coordinator did not perform any systematic reviews of the program during fiscal year 2000. He told us that his monitoring efforts consisted of scanning some monthly invoices for duplicate payments, split purchases, and other suspicious payments. However, he did not document these actions. Further, the Public Works Center internal review group in the Office of Command Evaluation did not perform any reviews during fiscal year 2000. However, Navy Public Works Center managers told us that they asked the Naval Audit Service to review the Center’s purchase card program during fiscal year 2000 because of concerns about the growth of the program, the adequacy of internal controls, and recent instances of fraud. Although the Naval Audit Service completed its fieldwork in November 2000 and briefed Navy Public Works Center San Diego management on its findings, the results of that effort have yet to be externally reported. 
According to the Navy’s Deputy Assistant Auditor General, the Naval Audit Service plans to finalize its work and issue a report in the fall of 2001.

Breakdown of Critical Internal Controls

Basic internal controls over the purchase card program were ineffective at the two units we reviewed. Based on our tests of statistical samples of purchase card transactions, we determined that the three transaction-level controls that we tested were ineffective, rendering SPAWAR San Diego and Navy Public Works Center San Diego purchase card transactions vulnerable to fraudulent and abusive purchases and theft and misuse of government property. As shown in table 2, the specific controls that we tested were (1) independent, documented receipt and acceptance of goods and services, (2) independent, documented certification of monthly purchase card statements, and (3) proper accounting for purchase card transactions. In addition, we tested whether the accountable items—easily pilferable or sensitive items—included in some of the transactions in our samples were recorded in the units’ property records to help prevent theft, loss, and misuse of government assets. Our tests of SPAWAR and Navy Public Works Center fiscal year 2000 purchase card transactions that included accountable property items showed that the two units failed to record one or more accountable items in their property records for nearly all of these transactions. Further, when we analyzed the property items included in our sampled transactions, we found that SPAWAR and the Navy Public Works Center did not record 46 of the 65 accountable items included in our sampled transactions in their property records. Moreover, when we asked to inspect these items, the two units could not provide conclusive evidence that 31 of them, including laptop computers, Palm Pilots, and digital cameras, were in the possession of the government.
Lack of Independent Documented Receipt and Acceptance

“Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. Simply put, no one individual should control all the key aspects of a transaction or event.”

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

SPAWAR San Diego and the Navy Public Works Center San Diego generally did not have independent, documented evidence that they received items ordered by purchase card. That is, they generally did not have a receipt for the acquired goods and services that was signed by someone other than the cardholder. As a result, there is no documented evidence that the government received the items purchased or that those items were not lost, stolen, or misused. NAVSUP Instruction 4200.94 generally requires segregation of duties between the individual making the purchase and the individual responsible for documenting receipt and acceptance of goods and services acquired by purchase card. However, employees at the two units were not following these procedures. In some instances employees were following an alternative procedure permitted by the NAVSUP Instruction whereby independent authorization of a purchase order can be substituted for independent confirmation of receipt of the items purchased. However, the alternative procedure does not provide any assurance that the items ordered and paid for were received. Based on our test work, we estimate that SPAWAR San Diego did not have independent, documented evidence to confirm the receipt and acceptance of goods and services acquired with the purchase card for about 65 percent of its fiscal year 2000 transactions totaling at least $10.1 million.
For the Navy Public Works Center San Diego, we estimated that 47 percent of its fiscal year 2000 purchase card transactions totaling at least $6.6 million did not include independent, documented receipt of goods and services. The types of items in our sampled transactions that lacked independent evidence of receipt and acceptance included computers, monitors, and compact disk writers that were purchased at stores such as Byte and Floppy Computer, Dell Computer, and CompUSA. Further, during fiscal year 2000, SPAWAR and the Navy Public Works Center in San Diego made over 2,000 transactions totaling over $468,000 for items from The Home Depot, Best Buy, Circuit City, and Wal-Mart. Our review of the five purchase card fraud cases related to Navy activities based in San Diego, discussed in appendix II, showed that fraudulent purchases had been made to acquire items for personal use from these same stores. Because the Navy purchases items for valid, government purposes from stores that are widely used by consumers to acquire items for personal use, verification of receipt of goods and services by an individual other than the cardholder is necessary to reduce the risk of fraudulent transactions.

Lack of Proper Certification of Monthly Purchase Card Statements

“Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into.”

GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

We assessed a 100-percent failure rate at both units for this critical control.
NAVSUP Instruction 4200.94 and, to a greater extent, a policy memorandum issued by the Navy Comptroller’s office on June 3, 1999, do not provide adequate internal controls and are inconsistent with the responsibilities of certifying officers reflected in statutes and DOD’s fiscal policy guidance. Approving officials at the two units told us that they were not following the existing procedures due to time constraints and the Navy Comptroller’s policy memorandum. Under 31 U.S.C. 3325 and DOD’s Financial Management Regulation, disbursements are required to be made on the basis of a voucher certified by an authorized agency official. The certifying official is responsible for ensuring (1) the adequacy of supporting documentation, (2) the accuracy of payment calculations, and (3) the legality of the proposed payment under the appropriation or fund charged. Proper certification of bills for payment is a preventive control that requires and provides the incentive for certifying officers to maintain proper controls over public funds. It also helps detect fraud and improper payments, including invalid (unsupported or prohibited) transactions, split purchases, and duplicate payments. Further, section 933 of the National Defense Authorization Act for Fiscal Year 2000 requires the Secretary of Defense to prescribe regulations that ensure, among other things, that each purchase cardholder and approving official is responsible for reconciling charges on a billing statement with receipts and other supporting documentation. According to NAVSUP Instruction 4200.94, upon receipt of the individual cardholder statement, the cardholder has 5 days to reconcile the transactions appearing on the statement by verifying their accuracy to the supporting documents and notify the approving official in writing of any discrepancies in the statement or sign and forward it to the approving official. 
The approving official is responsible for ensuring that all purchases made by the cardholders within his or her cognizance were appropriate and that the charges are accurate. However, the Instruction further states that within 5 days of receipt of the cardholders’ statements, the approving official must review and certify the monthly summary statement for payment, whether or not the cardholder has reviewed the statement and notified the official of any discrepancies or agreement with the statement. That is, the approving official is to presume that all transactions on the monthly statements are proper unless notified in writing by the purchase cardholder. Under this process, the certifying officer relies upon the silence of a cardholder who may have failed to timely forward corrections or exceptions to the account statement or, even worse, may not have even reviewed the statement. A certifying officer in these circumstances is not taking steps to assure that a payment is proper and an agency therefore cannot rely on the certification for assurance that a payment is for the proper amount and a legal purpose. This NAVSUP policy is inconsistent with the purpose of certifying vouchers prior to payment, which is to maintain proper control over public funds and assure that payments are made for proper amounts and purposes. Certifying officers are responsible for the correctness of facts and computations in the voucher, and the legality of the proposed payment under the appropriation involved. A certifying officer is liable for losses resulting from improper certifications, but may be relieved from liability if the certification was based upon official records and the officer did not know, and could not have reasonably discovered, the correct information. While certifying officials may rely on systems, controls, and personnel that process transactions rather than personally reviewing the supporting documentation, they must show that their reliance was reasonable. 
Regardless of what system is used, there is no authority to make payments that are known to be improper. At SPAWAR and the Navy Public Works Center, the certifying officers relied on a process without assurances that even a minimal review of the facts and computations underlying the proposed payment or the legality of such payment was carried out before certification was made. Thus, the certifying officers may not be able to demonstrate that their reliance on such a system is reasonable. In addition to the problems with NAVSUP Instruction 4200.94, the Navy Comptroller’s June 3, 1999, policy memorandum further weakens the certification process. The policy memorandum does not explicitly state that the cardholder must review the statement of account and notify the approving official of any improper or incorrect items within 5 days of receipt. Nonetheless, the approving official must certify the invoices based on the presumption that all cardholder accounts are proper unless notified in writing within 5 days of receipt of the invoice. Navy officials told us that it is assumed that cardholders would review the statements and notify the approving officials of any problems. While the cardholder’s review is not explicitly required, the memorandum states that the change in policy “will ensure that the cardholder will inform the AO in a prompt manner of any duplicate payments or fraudulent or improper charges to his account.” Again, by requiring certification within 5 days, whether or not a cardholder has reviewed a statement, the June 3, 1999, policy memorandum requires a certifying officer to rely upon a process that does not require review of a proposed payment or otherwise assure that a payment is properly payable before certification occurs.
All seven approving officials at the two activities (one at SPAWAR and six at the Navy Public Works Center) told us that they never reviewed the cardholders’ supporting documentation before signing and submitting purchase card statements for payment. Accordingly, we assessed the failure rate for this control as 100 percent for both SPAWAR San Diego and the Navy Public Works Center San Diego. Approving officials explained that they certify purchase card statements for payment without reviewing cardholders’ supporting documentation because (1) they do not have time to review the documentation and (2) the Navy Comptroller’s June 3, 1999, guidance relieves them of this responsibility. With regard to the first issue, both activities are faced with a significant span of control issue that makes the overall purchase card environment difficult, if not impossible, to control. With an average of over 700 monthly cardholder statements at SPAWAR San Diego and only one approving official—who is also the Agency Program Coordinator—proper certification of monthly summary statements within 5 days of receipt is not physically possible. Thus, the SPAWAR San Diego approving official told us that the certification process is largely a “rubber stamp” with no real verification of the underlying cardholder support for the monthly summary statements. The environment is somewhat more manageable at the Navy Public Works Center San Diego, with six approving officials charged with certifying summary statements that cover an average of 55 cardholder statements each month. However, Public Works Center approving officials also told us that they did not review all cardholder supporting documentation before certifying purchase card statements for payment. As a result, these two Navy units paid their monthly purchase card bills without knowing whether the charges were valid. As discussed later in this statement, this has contributed to payments being made for unauthorized and improper transactions. 
With regard to the second issue, the June 3, 1999, policy memorandum appears to improperly assign certifying officer accountability to cardholders. The policy memorandum stated that because it is not possible for the approving official to personally review and verify individual cardholder transactions and statements of account, the approving official is to certify the purchase card statements for payment based on the presumption that all cardholder accounts are proper unless the approving official has been notified to the contrary by the cardholder. The memorandum goes on to say that, “[t]his new policy recognizes that the ultimate responsibility for purchases being proper is with the cardholder.” However, under 31 U.S.C. 3528 and DOD’s Financial Management Regulation, certifying officers are liable for an illegal, improper, or incorrect payment as a result of an inaccurate or misleading certification. An agency may not shift certifying officer liability to other employees. The policy memorandum is also inconsistent with GAO’s internal control standard for ensuring that only valid transactions are entered into. According to DOD and Navy officials, this policy memorandum has created confusion about whether the cardholder or the approving official is responsible for proper certification of purchase card statements for payment. Problems in Proper Accounting for Purchase Card Transactions Transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire process or life cycle of a transaction or event from the initiation and authorization through its final classification in summary records.
GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) The two units we reviewed did not have controls in place to ensure that purchase card transactions were recorded to customer accounts in a timely manner and that local accounting records reflected the proper classification of expense. The timely and accurate recording of purchase card transactions is important to ensure the reliability of data and information used in day-to-day management and decision-making, particularly for working capital fund activities such as SPAWAR and the Navy Public Works Center. We have previously reported that DOD has long-standing problems accumulating and reporting the full costs associated with its working capital fund operations. Recording Purchase Card Costs to Customer Accounts The two units did not always record purchase card costs to customer accounts within required time frames. Consistent with GAO’s internal control standards and Statement of Federal Financial Accounting Standards (SFFAS), No. 4, “Managerial Cost Accounting Standards,” SPAWAR San Diego and Navy Public Works Center San Diego operating procedures require timely recording of purchase card costs to projects that received the goods and services acquired by purchase card. This is an important control because as working capital fund operations, SPAWAR and the Navy Public Works Center are to provide their customers with information on the full cost of goods and services provided—either through billing or other information. Further, working capital fund activities are to operate on a break-even basis over time—that is, not make a profit or incur a loss. Accurate and timely recording of customer transactions is key to ensuring that these working capital fund objectives are met.
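The 5-day recording requirement lends itself to a straightforward compliance test over a sample of transactions. The sketch below is illustrative only: the field names are assumptions, not the units’ actual record layouts, and counting a never-recorded transaction as late is an assumed convention.

```python
from datetime import date

RECORDING_WINDOW_DAYS = 5  # recording window set by the units' operating procedures

def late_recording_rate(transactions, window=RECORDING_WINDOW_DAYS):
    """Estimate the share of sampled transactions not recorded to customer
    or overhead accounts within the window after statement receipt.
    A transaction never recorded (recorded_on is None) counts as late."""
    late = 0
    for t in transactions:
        if t["recorded_on"] is None:
            late += 1
        elif (t["recorded_on"] - t["statement_received"]).days > window:
            late += 1
    return late / len(transactions)

# Illustrative sample: two of four transactions miss the 5-day window.
sample = [
    {"statement_received": date(2000, 1, 10), "recorded_on": date(2000, 1, 12)},
    {"statement_received": date(2000, 1, 10), "recorded_on": date(2000, 1, 20)},
    {"statement_received": date(2000, 1, 10), "recorded_on": None},
    {"statement_received": date(2000, 1, 10), "recorded_on": date(2000, 1, 15)},
]
print(late_recording_rate(sample))  # 0.5
```

Applied to a statistical sample, a rate computed this way supports estimates like those reported for the two units’ fiscal year 2000 transactions.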
However, based on our tests of fiscal year 2000 purchase card transactions, we estimate that 83 percent of the SPAWAR transactions totaling at least $15.3 million and an estimated 35 percent of the Navy Public Works Center transactions totaling at least $5 million had not been recorded to customer or overhead accounts within 5 days of receipt of the purchase card statements. As time passes, the likelihood that documentation will be available to properly record transactions decreases. For example, because SPAWAR did not have the documentation to support timely and accurate recording of purchase card transactions, it wrote off as a loss $657,642 in fiscal year 2000 transactions that could not be identified to a specific job order. Further, according to the SPAWAR Accounting Officer, as of the end of fiscal year 2000, SPAWAR had a backlog of about $5.6 million in purchase card transactions that had not been recorded to customer accounts or its own overhead account. As a result of unrecorded transactions, year-end data on actual overhead costs used to estimate future overhead rates for billing purposes were unreliable. Navy Public Works Center San Diego officials were unable to provide reliable information on the amount of unrecorded purchase card transactions at the end of fiscal year 2000 because systems weaknesses rendered their fiscal year-end data incomplete and unreliable. Classifying Purchase Card Costs by Object Class In addition to problems with timely and accurate recording of transactions to customer accounts, SPAWAR San Diego and the Navy Public Works Center San Diego did not properly classify purchase card transactions in their detail accounting records to show the nature and type of expenditures made using purchase cards.
Office of Management and Budget (OMB) Circular A-11, Preparation and Submission of Budget Estimates, requires federal agencies to report obligations and expenditures by object class, such as salaries, benefits, travel, supplies, services, and equipment, to indicate the nature of the expenditures of federal funds. Object classification data are reported by appropriation in the President’s Annual Budget Submissions to the Congress. OMB prepares summary reports of object class data to support budget projections and other analyses. Accurate object classification data are critical to the reliability of information reported in the President’s budget submission and budget projections and other analyses that are based on these data. In addition, because the Congress has asked for and is using object class information for its oversight activities, it is important that these data be properly recorded. We previously reported that inaccurate reporting by object class hampers congressional oversight. DOD Purchase Card Program Office guidance requires payments of monthly purchase card statements to be recorded as summary records in the Navy’s accounting systems and has directed that these summary records be recorded to the object class for supplies and materials, regardless of the nature of the expenses incurred. After purchase card statements have been paid, SPAWAR San Diego and the Navy Public Works Center San Diego are to record the individual transactions included in the summary payment record in their local accounting records. However, we determined that SPAWAR did not classify summary records related to payment of monthly purchase card statements to any expense category in the Navy’s accounting system and recorded all of the purchase card transactions in our sample to object class 25, as services, in its local accounting records. 
Consistent with DOD Purchase Card Joint Program Management Office guidance, the Navy Public Works Center San Diego recorded both the summary records related to payment of monthly purchase card statements and the detailed transactions to object class 26, as supplies and materials, in its local accounting system. Because SPAWAR San Diego and the Navy Public Works Center San Diego did not ensure that their detail transaction records reflected the proper classification of expense, 100 percent of the SPAWAR and Navy Public Works Center transactions in our samples were recorded to the wrong object class. For example, although the majority of the SPAWAR purchase card transactions in our sample—76 transactions totaling over $73,000— were for equipment purchases, none of these transactions were properly classified and recorded. Further, SPAWAR did not maintain sufficient documentation to determine the correct object class for 15 of the transactions in our sample totaling about $12,000. In addition, although the Navy Public Works Center recorded all of the purchase card transactions in our sample as supplies and materials, many of these transactions should have been recorded as contractual services or equipment. Also, the Navy Public Works Center did not maintain sufficient documentation to determine the proper object class for nine of the transactions in our sample totaling about $6,000. Failure to Record Accountable Items in Property Records An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records. 
GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) Most of the accountable items—easily pilferable or sensitive items—in our samples were not recorded in property records. Recording these items in the property records is an important step to ensure accountability and financial control over these assets and, along with periodic inventory, to prevent theft or improper use of government property. Consistent with GAO’s internal control standards, DOD’s Property, Plant, and Equipment Accountability Directive and Manual, which was issued in draft for implementation on January 19, 2000, and the Appropriation, Cost and Property Accounting procedures (referred to as the NAVSOP 1000-3M) issued by DFAS Cleveland, require accountable property to be recorded in property records as it is acquired. Accountable property includes easily pilferable or sensitive items, such as computers and related equipment, cameras, cell phones, and power tools. The NAVSOP property procedures require such property to be recorded in property records along with a description of the item, property identification number, model and serial number, manufacturer, acquisition cost, and the location or custodian of the property. Based on our tests of fiscal year 2000 purchase card transactions that included accountable property items, we estimate that SPAWAR did not record all accountable items in its property records for about 84 percent of its purchase card transactions, covering at least $5.4 million in accountable property. Based on our tests of Navy Public Works Center fiscal year 2000 purchase card transactions that included accountable property items, we estimate that the Center did not record all accountable items in its property records for about 95 percent of its purchase card transactions, covering at least $317,000 in accountable property.
Our analysis of the individual property items included in our sampled SPAWAR and Navy Public Works Center fiscal year 2000 purchase card transactions showed that the two units did not record a total of 46 of the 65 accountable items that were included in our sampled transactions in their property records, including 36 SPAWAR items and 10 Public Works Center items. SPAWAR officials told us that they were not aware of the requirement to record items such as computer monitors, cameras, and palm pilots in the property records. Moreover, when we asked to inspect these items, the two units could not provide conclusive evidence that 31 of them were in the possession of the government, including 19 SPAWAR items and 12 Public Works Center items. The unverified accountable items included laptop computers, Palm Pilots, and digital cameras. Of the 31 items that could not be verified, 5 items had been transferred to other locations throughout the world and SPAWAR and the Public Works Center officials were unable to conclusively demonstrate their existence and location, serial numbers for 4 items did not match those on the purchase documentation, 3 items were declared to be lost or stolen by employees who had custody of these items, and the existence of the remaining 19 items could not be verified because a serial number was not included in the purchase documentation or a receipt for the item including a serial number could not be located. The three items that were declared lost or stolen included two Palm Pilots that could not be located—one at SPAWAR San Diego and the other at the Navy Public Works Center San Diego—and a video conferencing camera that was reported stolen from a Public Works Center employee’s car. SPAWAR and Navy Public Works Center officials directed the employees to prepare lost property reports on the two Palm Pilots. 
Subsequently, in early June 2001, Navy Public Works Center officials advised us that their lost Palm Pilot had been located and showed us what they believed was the item in question. However, because the serial number had been rubbed off, we could not confirm that it was the accountable item acquired by purchase card. In late June 2001, SPAWAR officials advised us that their lost Palm Pilot had been located. Because we had already completed our review of SPAWAR property controls, we did not attempt to view and confirm the existence of this Palm Pilot. The third item was a video conferencing camera that was reported stolen on January 27, 2001, by a Navy Public Works Center San Diego employee. According to the employee, the video camera was stolen from his car along with a laptop computer that also belonged to the government. The employee submitted a claim to his insurance company, and the claim was paid on February 27, 2001. Although the stolen items cost about $3,876, the employee’s insurance policy limited payment of claims for business property to $2,500. However, about two months later, in May 2001, our investigators determined that the insurance check in the amount of $2,500 was deposited in the employee’s personal bank account instead of being endorsed to the government. The employee admitted to our investigators that he had been reimbursed for the stolen items from his insurance company. On May 3, 2001, the employee issued a check to the government in the amount of $2,500. Navy Public Works Center officials told us that they are considering assessing the employee for the remaining loss of $1,376 and taking possible disciplinary action for the failure to reimburse the government for the equipment loss in a timely manner. Potentially Fraudulent, Improper, and Abusive Transactions We identified several cases of potentially fraudulent, improper, and abusive transactions at both SPAWAR San Diego and the Navy Public Works Center San Diego. 
Given the breakdown of controls described in this testimony, the two units would have difficulty detecting and preventing these three types of transactions. We considered potentially fraudulent purchases to be those which were unauthorized and intended for personal use. Some of these instances may involve the use of compromised accounts, in which an account number was stolen and used to make fraudulent purchases. Other cases involve the cardholder making unauthorized purchases for personal use. The transactions we determined to be improper are those purchases intended for government use but not for a purpose that is permitted by law or regulation. We also identified as improper a number of purchases made on the same day from the same vendor, which appeared to circumvent cardholder single transaction limits. Federal Acquisition Regulation and NAVSUP Instruction 4200.94 guidelines prohibit splitting purchase card transactions into more than one segment to avoid the requirement to obtain competitive bids on purchases over the $2,500 micro-purchase threshold or to circumvent higher single transaction limits for payments on deliverables under requirements contracts. We defined abusive transactions as those that were authorized, but the items purchased were at an excessive cost or for a questionable government need, or both. In these instances, it appears that cardholders were permitted to purchase items for which there was not a reasonable, documented justification. As discussed in our Objectives, Scope, and Methodology, our work was not designed to identify, and we cannot determine, the extent of fraudulent, improper, or abusive transactions. Potentially Fraudulent Transactions Although both SPAWAR and the Navy Public Works Center had policies and procedures that were designed to prevent fraudulent purchases, our tests showed that the controls were not implemented as intended.
For example, as discussed previously, controls for independent verification of receipt and acceptance and proper certification of monthly statements prior to payment were ineffective. Fraudulent activities must then be detected after the fact during supervisor or internal reviews, and disputed charge procedures must be initiated to obtain a credit from Citibank. Table 3 shows examples of potentially fraudulent transactions that we identified from the universe of fiscal year 2000 purchases by SPAWAR San Diego and Navy Public Works Center San Diego cardholders. Navy Public Works Center San Diego officials told us that they were aware of the potentially fraudulent transactions that we identified. The officials told us that the apparently fraudulent Public Works Center transactions were all made using a limited number of cardholder accounts and that they had referred these transactions to the Naval Criminal Investigative Service. SPAWAR officials also told us that they were aware of their potentially fraudulent transactions. Both Public Works Center and SPAWAR officials said that they had submitted disputed charge forms to Citibank and had received credits for these transactions. However, given the extensive breakdowns in purchase card controls that we identified, SPAWAR and the Navy Public Works Center have no assurance that all fraudulent charges were detected. Our Office of Special Investigations is conducting a further investigation of the potentially fraudulent purchases we identified. Improper Transactions We identified several SPAWAR San Diego transactions that involved the improper use of federal funds. For example, one case involved flowers costing $97 purchased for Secretary’s Day. We also identified several transactions for food for employee-related activities, including food costing $75 for an office outing. The Federal Acquisition Regulation, 48 C.F.R.
13.301(a), provides that the Governmentwide Commercial Purchase Card “may be used only for purchases that are otherwise authorized by law or regulations.” Therefore, a procurement using the purchase card is lawful only if it would be lawful using conventional procurement methods. Pursuant to 31 U.S.C. 1301(a), “[a]ppropriations shall only be applied to the objects for which the appropriations were made . . . .” In the absence of specific statutory authority, appropriated funds may only be used to purchase items for official purposes, and may not be used to acquire items for the personal benefit of a government employee. For example, without statutory authority, appropriated funds may not be used to furnish meals or refreshments to employees within their normal duty stations. Free food and other refreshments normally cannot be justified as a necessary expense of an agency’s appropriation because these items are considered personal expenses that federal employees should pay for from their own salaries. Likewise, appropriated funds may not be used to purchase gifts for employees or others unless an agency can demonstrate that the items further the purposes for which the appropriation was enacted. The purchases of the flowers and food were personal rather than official in nature and, therefore, may not be paid for with appropriated funds. Another transaction involved the purchase of a file cabinet from Macy’s at a cost of $1,462. Purchases of file cabinets are subject to rules prescribed in Title 41 of the Code of Federal Regulations, Subtitle C, “Federal Property Management Regulations System,” which cover specific procedures that must be followed to limit the purchases of new filing cabinets, including disposing of all records that have been authorized for disposition in accordance with authorized disposal schedules and transferring inactive records not needed for daily business to approved agency records centers.
After taking appropriate steps to maximize the use of existing filing cabinets, if the agency determines that additional filing cabinets are required, these regulations require the agency to submit a requisition to the General Services Administration (GSA). We found no documented evidence that the required procedures were followed. Further, we found no documented justification for purchasing the file cabinet from Macy’s instead of through GSA, as required. Potentially Abusive Transactions We also identified a number of potentially abusive transactions. These were purchases of items supposedly for official use but without any documented agency determination that these items were necessary for government business rather than merely to satisfy the personal preference of individual employees. When a contracting official—in this case, a purchase cardholder—purchases an item based on his or her own preferences (or the desires of another agency official or employee) without a management decision that the item is necessary, he or she is abusing the procurement process. Some of these items fall into categories described in GAO’s Guide for Evaluating and Testing Controls Over Sensitive Payments (GAO/AFMD-8.1.2, May 1993). The guide states that “Abuse is distinct from illegal acts (noncompliance). When abuse occurs, no law or regulation is violated. Rather, abuse occurs when the conduct of a government organization, program, activity, or function falls short of societal expectations of prudent behavior.” Our review of the transactions in our samples as well as our analytical review of the universe of SPAWAR and Navy Public Works Center fiscal year 2000 transactions identified a number of purchases that appear to be abusive, as shown in table 4. For example, SPAWAR San Diego cardholders purchased 9 flat-panel computer monitors at a total cost of $13,192. The cost of each monitor ranged from $800 to $2,500.
In contrast, the current GSA schedule cost of a standard 17-inch computer monitor is about $300. We were unable to find any pre-purchase agency determination that the nature of the work performed by SPAWAR officials or employees was such that standard monitors would not satisfy their needs. SPAWAR’s commanding officer later told us that flat-panel monitors save space and energy and are easier on the eyes. However, his opinion did not constitute an official agency determination that these monitors were needed. It appears more likely that cardholders purchased these more costly monitors to satisfy the personal preferences of individual SPAWAR officials or employees. Our sample transactions also included four SPAWAR purchases of PDAs for a total cost of $1,150. We performed a similar review of the Navy Public Works Center’s fiscal year 2000 transactions and generally did not identify the same types of potentially abusive transactions. We did, however, identify nine PDA purchases for a total cost of $3,642. Again, we were unable to find any pre-purchase agency determination that these officials or employees needed PDAs to perform their work. Therefore, it again appears likely that the PDAs were acquired to satisfy the personal preferences of the individuals for whom they were purchased. Split Purchases Our analysis of the universe of fiscal year 2000 Navy purchase card payments made by DFAS San Diego identified nearly $100 million in purchases made on the same day from the same vendor, which appeared to circumvent cardholder single transaction limits—including about $2.5 million in potential SPAWAR split purchases and nearly $4.7 million in potential Navy Public Works Center split purchases.
The Federal Acquisition Regulation and Navy purchase card policies and procedures prohibit splitting a transaction into more than one segment to avoid the requirement to obtain competitive bids for purchases over the $2,500 micro-purchase threshold or to avoid other established credit limits. DOD and Navy purchase card policies and procedures prohibit such actions as improper use of the purchase card. Once items exceed the $2,500 micro-purchase threshold, they are to be purchased in accordance with simplified acquisition procedures, which are more stringent than those for micro-purchases. Our analysis of the universe of fiscal year 2000 SPAWAR San Diego and Navy Public Works Center San Diego transactions identified a number of potential split purchases. To determine whether these were, in fact, split purchases, we obtained and analyzed the supporting documentation for 20 purchases each at SPAWAR and the Navy Public Works Center. We found that in many instances, cardholders made multiple purchases from the same vendor within a few minutes or a few hours for items such as computers, computer-related equipment, and software, that involved the same, sequential, or nearly sequential purchase order and vendor invoice numbers. Based on our analyses, we concluded that 18 of the 20 SPAWAR purchases and 14 of the 20 Navy Public Works Center purchases that we examined were split into two or more transactions to avoid micro-purchase thresholds. Tables 5 and 6 provide examples of cardholder purchases that we believe represent split purchases intended to circumvent the $2,500 micro-purchase limit or cardholder transaction limits. In addition to the items in table 6, we identified three Navy Public Works Center purchases totaling $147,000 that were made to the same vendor on the same day by a cardholder with a $100,000 transaction limit. The Navy Public Works Center did not have receipts to document the items acquired.
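The same-day, same-vendor screen described above amounts to grouping a cardholder’s transactions and flagging groups whose members each sit at or under the applicable limit while their total exceeds it. A minimal sketch follows; the field names are illustrative, and the $2,500 micro-purchase threshold is used as the default limit.

```python
from collections import defaultdict

MICRO_PURCHASE_LIMIT = 2500  # FAR micro-purchase threshold, in dollars

def flag_potential_splits(transactions, limit=MICRO_PURCHASE_LIMIT):
    """Flag same-day, same-vendor purchases by a single cardholder that
    individually stay at or under the limit but together exceed it."""
    groups = defaultdict(list)
    for t in transactions:
        key = (t["cardholder"], t["vendor"], t["date"])
        groups[key].append(t["amount"])
    return [
        (key, sum(amounts))
        for key, amounts in groups.items()
        if len(amounts) > 1
        and all(a <= limit for a in amounts)
        and sum(amounts) > limit
    ]

# Illustrative example: two purchases under $2,500 each, $4,200 combined.
sample = [
    {"cardholder": "A", "vendor": "V", "date": "2000-03-01", "amount": 2400},
    {"cardholder": "A", "vendor": "V", "date": "2000-03-01", "amount": 1800},
    {"cardholder": "B", "vendor": "W", "date": "2000-03-01", "amount": 500},
]
print(flag_potential_splits(sample))  # [(('A', 'V', '2000-03-01'), 4200)]
```

A flagged group is only a candidate: as the testimony notes, the supporting documentation must still be examined to confirm that a purchase was in fact split.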
Planned Actions to Mitigate Control Weaknesses When we brought the control failures and other issues we identified to the attention of the Executive Officer at the Navy Public Works Center San Diego, he took a proactive approach to identifying and correcting the weaknesses. According to the Executive Officer, because of concerns about recent instances of purchase card fraud, the Navy Public Works Center requested a Naval Audit Service review of purchase card activity and undertook a number of corrective actions as a result of auditor findings. For example, in February 2000, the Navy Public Works Center San Diego started to reduce the number of purchase cardholders, which totaled 359 at that time, to about 250, and in September 2000, the Public Works Center revised its purchase card policies and procedures to comply with NAVSUP Instruction 4200.94. In addition, due to the time required for review and proper certification of purchase card statements before payment, the Executive Officer told us that he would consider further reducing the number of cardholders to help ensure adequate review of documentation prior to certifying statements for payment. During fiscal year 2001, according to a SPAWAR acquisition official, SPAWAR reduced the number of its cardholders from over 1,500 to 1,070; however, it continued to have only one approving official who was responsible for certifying monthly summary purchase card statements, and the average number of individual monthly purchase card statements remained about the same. In addition, the SPAWAR San Diego Commanding Officer told us that SPAWAR planned to implement an Enterprise Resources Planning (ERP) system. The ERP system is expected to help improve overall controls for the purchase card program, including a central electronic file of imaged documents supporting purchase card transactions and an audit trail of actions by individuals executing various purchase card processing functions.
A SPAWAR official advised us that SPAWAR implemented its ERP system in mid-July 2001. However, unless substantial improvements are made in the overall control environment and employees actually follow purchase card policies and procedures, the ERP system will simply automate the same weaknesses as the current manual process. Conclusions The serious breakdown in internal controls at SPAWAR San Diego and the Navy Public Works Center San Diego is the result of a weak overall internal control environment, flawed or nonexistent policies and procedures, and employees who do not adhere to valid policies. The proliferation of cardholders at these two activities resulted in over 1,700 cardholders with essentially the authority to make their own purchase decisions in an environment that lacked basic controls over receipt of government property, certification of monthly statements, and accountability over sensitive property items. Our work found that these weak internal controls resulted in lost, stolen, missing, or misused government property, potentially abusive use of purchase cards, and payment of unauthorized and potentially fraudulent charges. The combination of these factors also contributed to the five known fraud cases and leaves the government highly vulnerable to significant additional fraud, waste, and abuse from the purchase card program at these two Navy units. Following this testimony, we plan to issue a report that will include recommendations to DOD and the Navy for improving internal controls over purchase card activity. Mr. Chairman, Members of the Subcommittee, and Senator Grassley, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. Contacts and Acknowledgements For future contacts regarding this testimony, please contact Gregory D. Kutz at (202) 512-9095.
Individuals making key contributions to this testimony included Wendy Ahmed, Christie Arends, Bertram Berlin, Sharon Byrd, Francine DelVecchio, Stephen Donahue, Michael Chambless, Douglas Ferry, Gayle Fischer, Kenneth Hill, Wilfred Holloway, Jeffrey Jacobson, John Kelly, Richard Larsen, John Ryan, and Sidney Schwartz. Background The Navy’s purchase card program is part of the Governmentwide Commercial Purchase Card Program, which was established to streamline federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. DOD reported that it used purchase cards for 95 percent of its eligible transactions—more than 10 million transactions, valued at $5.5 billion—in fiscal year 2000. The Navy’s reported purchase card activity represented nearly one third of the reported DOD total during fiscal year 2000—2.7 million transactions, valued at $1.7 billion. According to unaudited DOD data, SPAWAR and Navy Public Works Center San Diego-based activities accounted for $68 million (about 15 percent) of the $451 million in fiscal year 2000 Navy purchase card payments processed by DFAS San Diego. Although SPAWAR San Diego and the Navy Public Works Center San Diego are both working capital fund activities, their missions are very different. SPAWAR San Diego is a highly technical systems operation staffed by scientists and engineers who provide research, technology, and engineering support to other Navy programs worldwide. The Navy Public Works Center San Diego provides maintenance, construction, and operations support to other Navy programs in the San Diego area. 
Governmentwide Purchase Card Program Guidelines Under the Federal Acquisition Streamlining Act of 1994 and the Defense Federal Acquisition Regulation Supplement guidelines, eligible purchases include (1) micro-purchases (transactions up to $2,500 for which competitive bids are not needed); (2) purchases of training services up to $25,000; and (3) payments for items costing over $2,500 that are on the General Services Administration’s (GSA) pre-approved schedule, including items on requirements contracts. The simplified acquisition threshold for such contract payments is $100,000. Accordingly, cardholders may have single transaction purchase limits of $2,500 or $25,000, and a few cardholders may have transaction limits of up to $100,000 or more. Under the GSA blanket contract, the Navy has contracted with Citibank for its purchase card services, while the Army and the Air Force have contracted with U.S. Bank. The Federal Acquisition Regulation, Part 13, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. U.S. Treasury regulations issued pursuant to provisions of law in 31 U.S.C. 3321, 3322, 3325, 3327, and 3335 govern purchase card payment certification, processing, and disbursement. DOD’s Purchase Card Joint Program Management Office, which is in the office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, has established departmentwide policies and procedures governing the use of purchase cards. The Under Secretary of Defense (Comptroller) has established related financial management policies and procedures in various sections of DOD’s Financial Management Regulation.
Navy Purchase Card Acquisition And Payment Processes The Navy Supply Systems Command is responsible for the overall management of the Navy’s purchase card program, and has published the Navy Supply Command (NAVSUP) Instruction 4200.94, Department of the Navy Policies and Procedures for Implementing the Governmentwide Purchase Card Program. Under the NAVSUP Instruction, each Navy Command’s head contracting officer authorizes purchase card program coordinators in local Navy units to obtain purchase cards and establish credit limits. The program coordinators are responsible for administering the purchase card program within their designated span of control and serve as the communication link between Navy units and the purchase card issuing bank. Designation of Cardholders When a supervisor requests that a staff member receive a purchase card, the agency program coordinator is to first provide training on purchase card policies and procedures and then establish a credit limit and issue a purchase card to the staff member. The Navy had a total of about 1,700 purchase card program coordinators during fiscal year 2000, including one program coordinator at SPAWAR San Diego and one at the Navy Public Works Center San Diego. Ordering Goods and Services Purchase cardholders are delegated contracting officer ordering responsibilities, but they do not negotiate or manage contracts. SPAWAR San Diego and Navy Public Works Center San Diego cardholders use purchase cards to order goods and services for their units as well as their customers. Cardholders may pick up items ordered directly from the vendor or request that items be shipped directly to end users (requestors). Upon receipt of items acquired by purchase cards, cardholders are to record the transaction in their purchase log and obtain independent confirmation from the end user, their supervisor, or another individual that the items have been received and accepted by the government. 
They are also to notify the property book officer of accountable items received so that these items can be recorded in the accountable property records. Payment Processing The purchase card payment process begins with receipt of the monthly purchase card billing statements. Section 933 of the National Defense Authorization Act for Fiscal Year 2000, Public Law 106-65, requires DOD to issue regulations that ensure that purchase cardholders and each official with authority to authorize expenditures charged to the purchase card reconcile charges with receipts and other supporting documentation. NAVSUP Instruction 4200.94 states that upon receipt of the individual cardholder statement, the cardholder has 5 days to reconcile the transactions appearing on the statement by verifying their accuracy against the cardholder’s purchase log and to notify the approving official in writing of any discrepancies. In addition, under the NAVSUP Instruction, the approving official is responsible for (1) ensuring that all purchases made by the cardholders within his or her cognizance are appropriate and that the charges are accurate and (2) the timely certification of the monthly summary statement for payment by DFAS. The Instruction further states that within 5 days of receipt, the approving official must review and certify for payment the monthly billing statement, which is a summary invoice of all transactions of the cardholders under the approving official’s purview. The approving official is to presume that all transactions on the monthly statements are proper unless notified in writing by the purchase cardholder. However, this presumption does not relieve the approving official from reviewing for blatantly improper purchase card transactions and taking the appropriate action prior to certifying the invoice for payment.
In addition, the approving official is to forward disputed charge forms to the unit’s comptroller’s office for submission to Citibank for credit. Under the Navy’s contract, Citibank allows the Navy up to 60 days after the statement date to dispute invalid transactions and request a credit. Upon receipt of the certified monthly purchase card summary statement, a DFAS vendor payment clerk is to (1) review the statement and supporting documents to confirm that the prompt payment certification form has been properly completed and (2) subject it to automated and manual validations. The purpose of the automated validation is to confirm that a SPAWAR or a Navy Public Works Center obligation for a purchase card invoice has been recorded in the respective cost accounting system in an amount sufficient to cover the payment. Quality control clerks manually verify that purchase card statement and payment data were correctly entered in the Navy’s vendor payment (disbursing) system—STARS 1-Pay. Once the payment has passed these validation tests, the quality control supervisor authorizes the statement for payment. The DFAS vendor payment system then batches all of the certified purchase card payments for that day, generates a tape for payment by electronic funds transfer to the purchase card bank, and sends the file to the accounting station for recording the payment as a summary record in the Navy’s accounting system. Figure 1 illustrates the current purchase card payment process used by SPAWAR and the Navy Public Works Center in San Diego. The Navy earns purchase card rebate revenue from Citibank of up to 0.8 percent, based on sales volume (purchases) and payment timeliness. According to the Deputy Director of DOD’s Purchase Card Joint Program Management Office, rebate revenue is generally to be credited to the purchase card statements and used to offset monthly charges.
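The rebate arithmetic described above can be sketched briefly. Only the 0.8 percent ceiling and the dependence on sales volume and payment timeliness come from the testimony; the tier thresholds and timeliness factors below are hypothetical, since the contract's actual rebate schedule is not given here.

```python
# Illustrative sketch of a volume/timeliness rebate calculation.
# The timeliness tiers are hypothetical; the source states only that
# the rebate can reach 0.8 percent of sales volume and depends on
# how promptly statements are paid.

def rebate(sales_volume: float, avg_days_to_pay: float) -> float:
    """Return rebate revenue earned on purchase card sales volume."""
    base_rate = 0.008  # 0.8 percent maximum cited in the testimony
    # Hypothetical timeliness schedule: slower payment earns less rebate.
    if avg_days_to_pay <= 10:
        factor = 1.0
    elif avg_days_to_pay <= 30:
        factor = 0.5
    else:
        factor = 0.0
    return sales_volume * base_rate * factor

# Under these assumptions, $1 million of monthly volume paid within
# 10 days would earn an $8,000 rebate.
print(rebate(1_000_000, 8))
```

The point of the sketch is that late certification and payment, like the delays discussed in this testimony, directly reduce the rebate revenue the Navy can earn.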
San Diego Related Fraud Cases Investigated by NCIS Pursuant to Senator Grassley’s request, we identified five fraud cases related to Navy programs based in the San Diego, California, area and investigated by the Naval Criminal Investigative Service (NCIS). All of these cases can be linked to the types of internal control weaknesses discussed in this testimony. Of these five cases, two involved Navy Public Works Center San Diego employees and one involved 2,600 compromised purchase card accounts, including 22 currently active SPAWAR San Diego accounts. One of the remaining cases, which has been concluded, was related to a fraud that occurred at the Navy’s Millington (Tennessee) Flying Club—an activity of the Navy Morale, Welfare, and Recreation entity, which is based in San Diego. The other case involved a military officer and other service members who were assigned to the Marine Corps Station in Miramar, near San Diego. Case #1 The first San Diego-related purchase card fraud case is an example of the lack of segregation of duties. This case involved the cardholder at the Navy’s Millington (Tennessee) Flying Club, an entity of the U.S. Navy’s Morale, Welfare, and Recreation activity, which is based in San Diego, California. The cardholder, who was having financial problems, was hired by her stepfather, who was the club’s treasurer. The stepfather delegated nearly all purchase card duties to the cardholder, as well as the authority for writing checks to pay the Flying Club’s monthly purchase card statements. The cardholder made over $17,000 in fraudulent transactions to acquire personal items from Wal-Mart, The Home Depot, shoe stores, pet stores, boutiques, an eye care center, and restaurants over an 8-month period from December 1998 through July 1999. The fraud was identified when the club’s checking account was overdrawn due to excessively high purchase card payments and a bank official contacted the president of the Flying Club. 
The cardholder pleaded guilty and was sentenced to 15 months in jail and ordered to pay about $28,486 in restitution for the purchase card fraud and bounced checks. The defendant commented that illegal use of the card was “too easy” and that she was the sole authorizer of the card purchases. Case #2 The second case involved a military officer and other service members who were assigned to the Marine Corps Station in Miramar, near San Diego, California. This alleged fraud occurred through collusion, which internal controls alone cannot prevent. However, adequate monitoring of purchase card transactions, along with enforcing controls such as documented independent confirmation of receipt and acceptance and recording of accountable items in property records, would have made detection easier. In this instance, the military officer allegedly conspired with cardholders under his supervision to make nearly $400,000 in fraudulent purchases from five companies—two that he owned, one owned by his sister, and the other two owned by friends or acquaintances. They charged thousands of dollars for items such as DVD players, Palm Pilots, and desktop and laptop computers. The officer also allegedly made cash payments to employees to keep silent about the fraud and provided auditors with falsified purchase authorizations and invoices to cover the fraud. The fraud occurred from June 1999 through September 2000. The total amount of the alleged fraud is unknown. The alleged fraud was identified based on a tip from a service member. The U.S. Attorney’s Office in San Diego has accepted the case for prosecution, and four other active service members are under investigation. Case #3 The third case involved a Navy Public Works Center San Diego maintenance/construction supervisor who allegedly made at least $52,000 in fraudulent transactions to a suspect contractor on work orders for which the work was not performed by that contractor.
Adequate monitoring of purchase card transactions along with enforcing controls such as independent, documented receipt and acceptance and recording accountable items in property books would have made detection easier. Navy investigators believe that the employee also may have used his government purchase card to make unauthorized purchases for personal use, including jewelry, an air conditioner, and other personal items from The Home Depot from April 1997 through October 1998. The total amount of this alleged purchase card fraud is unknown. The alleged fraud was identified when the employee’s supervisor reviewed Navy Public Works Center work orders and noticed that four work orders totaling approximately $7,000 were completed by the employee and paid for with the suspect’s government purchase card. Further inquiry by the supervisor revealed that Navy Public Works Center employees, not the contractor, had completed the work. NCIS investigators and Naval Audit Service auditors identified approximately $52,000 in purchase card transactions made by the employee to a suspect contractor for work that was performed by either the Public Works Center or other legitimate contractors. The employee has resigned, and an investigation by the Federal Bureau of Investigation and NCIS is ongoing. The U.S. Attorney’s Office in San Diego has accepted the case for prosecution. Case #4 The fourth case involved a Navy Public Works Center San Diego purchasing agent who allegedly made at least $12,000 in fraudulent purchases and planned to submit approximately $103,000 in fraudulent disputed charge forms, including payments for hotels, airline tickets, computers, phone cards, and personal items from The Home Depot. The alleged fraud occurred from April 1997 through July 1999.
As with the other cases, adequate monitoring of purchase card transactions along with enforcing controls such as independent, documented receipt and acceptance and recording accountable items in property books would have made detection easier. The alleged fraud was identified during an investigation of a possible bribery/kickback scheme. The employee has resigned, and an NCIS investigation is ongoing. The U.S. Attorney’s Office in San Diego has accepted the case for prosecution. Case #5 The fifth Navy purchase card fraud case is ongoing and involves the compromise of up to 2,600 purchase card accounts assigned to Navy activities in the San Diego area. Investigators were able to obtain only a partial list consisting of 681 compromised accounts, so the exact number is not known. At least 45 of the compromised accounts were for SPAWAR San Diego, and one was for the Navy Public Works Center in San Diego. Of these 46 compromised accounts, 22 SPAWAR San Diego accounts were still active in May 2001. None of the active accounts on the partial listing found by investigators were for the Navy Public Works Center San Diego. Although the account numbers showed up on a computer printer in a community college library in San Diego in September 1999, the Navy has not canceled all of the compromised accounts. Instead, according to NCIS and Navy Supply Command officials, the Navy is canceling the compromised accounts as fraudulent transactions are identified. Naval Supply Systems Command, SPAWAR San Diego, and Navy Public Works Center San Diego officials told us that they were aware of this incident but did not have a listing of the account numbers affected. As a result, the Navy did not take any measures to flag the compromised accounts and implement special monitoring procedures to detect potential fraudulent use of these accounts.
According to Navy investigators, as of January 2001, at least 30 of the compromised account numbers had been used by 27 alleged suspects to make more than $27,000 in fraudulent transactions for pizza, jewelry, phone calls, tires, and flowers. As of May 21, 2001, 22 of the compromised SPAWAR accounts were still active. Our review of the monthly credit limits associated with the 22 compromised accounts showed that SPAWAR continued to have an aggregate monthly financial exposure of $900,000 associated with these accounts nearly 2 years after the compromised list was discovered in a San Diego community college library in September 1999. Further, with the lack of controls over receipt of goods and certification of purchase card statements that we identified at the two activities we reviewed, it is impossible for the Navy to identify fraudulent purchases as they occur or to determine the extent of the fraudulent use of the compromised accounts. As a result, when fraudulent use of one of the compromised accounts was identified, the Navy could not determine whether the incident was due to cardholder fraud or use of the compromised account by an outside party. A joint task force in San Diego, composed of NCIS, the U.S. Secret Service, local police, and the U.S. Attorney’s Office, investigated this fraud. The task force investigators recently traced the list of compromised accounts to a vendor used by the Navy, which acknowledged that the list came from its database. The vendor identified two former employees as possible suspects. Objectives, Scope, and Methodology Pursuant to Senator Grassley’s request, we obtained and reviewed information on five fraud cases related to Navy purchase card programs in the San Diego, California, area and reviewed purchase card controls and accounting for two Navy units based in San Diego—the Space and Naval Warfare Systems Command (SPAWAR) Systems Center and the Navy Public Works Center.
Our assessment of SPAWAR San Diego and the Navy Public Works Center San Diego purchase card controls covered
the overall management control environment, including (1) span of control issues related to the number of cardholders, (2) training for cardholders and accountable officers, (3) management of rebates, and (4) monitoring and audit of purchase card activity;
tests of statistical samples of key controls over purchase card transactions, including (1) documentation of independent confirmation that items ordered by purchase card were received, (2) proper certification of purchase card statements for payment, and (3) proper accounting for purchase card transactions;
substantive tests of accountable items in our sample transactions to verify whether they were recorded in property records and whether they could be found; and
analysis of the universe of transactions to identify (1) any potentially improper, fraudulent, and abusive transactions and (2) purchases that were split into two or more transactions to avoid micro-purchase thresholds or other credit limits.
We used as our primary criteria applicable laws and regulations; our Standards for Internal Control in the Federal Government; and our Guide for Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in the GAO internal control standards to the practices followed by management in the four areas reviewed. To test controls, we selected stratified random probability samples of 135 SPAWAR San Diego purchase card transactions from a population of 47,035 transactions totaling $38,357,656, and 121 Navy Public Works Center San Diego transactions from a population of 53,026 transactions totaling $29,824,160 that were recorded by the Navy during fiscal year 2000. We stratified the samples into two groups—transactions from computer vendors and other vendors.
With this statistically valid probability sample, each transaction in the population had a nonzero probability of being included, and that probability could be computed for any transaction. Each sample element was subsequently weighted in the analysis to account statistically for all the transactions in the population, including those that were not selected. Table 7 presents our test results on three key transaction-level controls and shows the confidence intervals for the estimates for the universes of fiscal year 2000 purchase card transactions made by SPAWAR and the Navy Public Works Center in San Diego. Our analytical reviews covered the universe of fiscal year 2000 purchase card transactions for the two units’ San Diego-based activities: about 47,000 transactions totaling about $38 million at SPAWAR San Diego and about 53,000 transactions totaling about $30 million at the Navy Public Works Center San Diego. For these reviews, we did not look for all potential abuses of purchase cards. For example, because a large number of store receipts (such as those from The Home Depot) were missing, we were unable to determine whether certain purchases were made for personal use. In addition, we did not physically examine purchases made to determine whether goods and services were received and used for government purposes. While we identified some improper and potentially fraudulent and abusive transactions, our work was not designed to identify, and we cannot determine, the extent of fraudulent, improper, or abusive transactions. We briefed DOD managers, including officials in DOD’s Purchase Card Joint Program Management Office and the Defense Finance and Accounting Service, and Navy managers, including Navy Supply Command, Navy Comptroller, SPAWAR San Diego, and Navy Public Works Center San Diego officials, on the details of our review, including our objectives, scope, and methodology and our findings and conclusions.
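As a rough illustration of the weighting just described, a stratified sample estimate of a control-failure rate can be computed as follows. Only the two-stratum design (computer vendors versus other vendors) and the roughly 47,000-transaction SPAWAR universe come from the text; the stratum sizes and failure counts below are hypothetical.

```python
# Minimal sketch: weighting a stratified probability sample to estimate
# a control-failure rate for the full universe of transactions.
import math

# (stratum name, population size, sample size, failures in sample)
# These counts are hypothetical, for illustration only.
strata = [
    ("computer vendors", 5_000, 35, 14),
    ("other vendors", 42_035, 100, 22),
]

N = sum(n_pop for _, n_pop, _, _ in strata)

# Weighted point estimate: each sampled transaction "stands in" for
# n_pop / n_samp transactions in its stratum.
p_hat = sum(n_pop * (fails / n_samp) for _, n_pop, n_samp, fails in strata) / N

# Stratified variance of the estimated proportion (finite-population
# correction omitted for simplicity).
var = sum(
    (n_pop / N) ** 2 * (fails / n_samp) * (1 - fails / n_samp) / n_samp
    for _, n_pop, n_samp, fails in strata
)
half_width = 1.96 * math.sqrt(var)  # half-width of a 95 percent interval

print(f"estimated failure rate: {p_hat:.1%} +/- {half_width:.1%}")
```

The confidence intervals reported in table 7 follow the same logic: a weighted point estimate bracketed by a sampling-error margin.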
We conducted our audit work from August 2000 through June 2001 in accordance with generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Following this testimony, we plan to issue a report, which will include recommendations to DOD and the Navy for improving internal controls over purchase card activity.
This testimony discusses internal control weaknesses that left two Navy units in San Diego, California, vulnerable to purchase card fraud and abuse. GAO found a proliferation of purchase cards at the two units in San Diego--the Space and Naval Warfare Systems Command and the Navy Public Works Center. In the end, more than 1,700 cardholders essentially had the authority to make their own purchase decisions. A serious breakdown in internal controls over the receipt of government property and the certification of monthly statements, coupled with flawed or nonexistent policies and procedures and the failure of Navy employees to adhere to valid policies and procedures, led to (1) the loss, theft, and misuse of government property; (2) the potential abuse of purchase cards; and (3) payments of potentially fraudulent charges. Five fraud cases have already been identified, and the government remains extremely vulnerable to fraud, waste, and abuse arising from the purchase card program at the two Navy units. This testimony summarized the November report, GAO-02-32.
Background Medicare is the nation’s health insurance program for those aged 65 and older and certain disabled individuals. All beneficiaries may receive health care through Medicare’s traditional FFS arrangement. Alternatively, a beneficiary may enroll in a Medicare managed care plan if one is available in the county in which he or she lives. The vast majority of the nation’s 39 million Medicare beneficiaries remain in the traditional FFS program, but enrollment in Medicare managed care plans has grown rapidly in recent years. Currently, about 17 percent of all Medicare beneficiaries are enrolled in a managed care plan. Medicare Managed Care Before BBA As of December 1, 1998, about 90 percent of Medicare’s managed care enrollees were in risk plans. Such plans assumed the financial risk of providing care for a fixed monthly per-beneficiary fee paid by Medicare. Payment rates were determined for each county on the basis of the average adjusted per capita FFS spending in that county. Because these plans were assumed to be able to provide services more efficiently than the FFS sector, Medicare law set payment rates at 95 percent of the FFS amount in each county. These county rates were adjusted up or down on the basis of enrollees’ demographic characteristics, such as age and gender. The adjustments, known as risk adjustments, were intended to account for differences in beneficiaries’ expected health care costs. That is, payment rates for enrollees who were expected to require more medical care were supposed to be higher than the rates for healthier enrollees. This payment methodology has been criticized for a number of weaknesses. Basing payments on per capita FFS spending resulted in significant variation in capitation rates across counties that did not necessarily reflect differences in costs faced by managed care plans. Rural areas, which generally had much lower payment rates than urban areas, often had few or no managed care plans.
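In formula terms, the pre-BBA rate-setting rule described above is straightforward. The 95-percent multiplier and the demographic risk adjustment come from the text; the dollar amount and adjustment factor in the example are hypothetical.

```python
# Sketch of the pre-BBA county capitation rate: 95 percent of the
# county's average adjusted per capita FFS cost, scaled by a
# demographic (age/gender) risk adjuster. Example values are
# hypothetical.

def monthly_capitation(county_ffs_per_capita: float, demographic_factor: float) -> float:
    """Monthly per-enrollee payment to a risk plan for one county."""
    return 0.95 * county_ffs_per_capita * demographic_factor

# A county with $500/month average FFS spending and an enrollee whose
# demographic cell carries a hypothetical 1.10 adjustment:
print(monthly_capitation(500.0, 1.10))
```

Because the demographic factor captures only age and gender, two enrollees in the same cell generate the same payment regardless of actual health status, which is the source of the favorable-selection problem discussed next.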
In addition, years of research indicated that Medicare’s payment methodology and demographic risk adjusters resulted in excess payments to plans because they generally attracted healthier beneficiaries with below-average health care costs. Consequently, many managed care enrollees would have cost Medicare less if they had stayed in the FFS sector. In 1997, the Physician Payment Review Commission estimated that Medicare paid as much as $2 billion annually in excess payments to managed care plans. Historical Trends in Plan Participation and Enrollment In recent years, plan participation in Medicare has grown steadily (see fig. 1). Between 1987 and 1991, however, the number of plans dropped dramatically, from 165 to 93. The number of enrollees affected by these withdrawals was fairly small because many of the terminating plans had few or no enrollees. In fact, HMO enrollment has steadily increased each year, even during the years when the number of plans decreased. In the last 3 years, enrollment in Medicare plans has more than doubled, from about 3 million in 1995 to over 6 million in 1998. Managed care enrollment is not evenly distributed nationwide. A comparison of counties with Medicare managed care plan enrollment greater than 5 percent in 1995 and 1998 shows that enrollment has increased in many counties but remains concentrated in the West, Northeast, and Florida. (See fig. 2.) BBA Changes to Medicare Managed Care The BBA substantially changed the method used to set the payment rates for Medicare managed care plans. As of January 1, 1998, plan payments for each county are based on the highest rate resulting from three alternative methodologies: a minimum payment amount, a minimum increase over the previous year’s payment, or a blend of national and local FFS spending (see app. II for a description of the new payment methodology).
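Schematically, the highest-of-three rule can be sketched as follows. The floor amount, the 2-percent minimum update, and an even national/local blend are used here only as illustrative assumptions; the actual parameters are described in appendix II of the report.

```python
# Hedged sketch of the BBA payment rule: the county rate is the
# highest of (1) a minimum ("floor") payment amount, (2) a minimum
# percentage increase over the prior year's rate, and (3) a blend of
# national and local FFS spending. The floor amount and the 50/50
# blend weights are hypothetical.

def bba_county_rate(prior_rate: float, local_ffs: float, national_ffs: float,
                    floor: float = 367.0, local_weight: float = 0.5) -> float:
    """Monthly county payment rate under the highest-of-three rule."""
    minimum_increase = prior_rate * 1.02  # 2 percent minimum update
    blend = local_weight * local_ffs + (1 - local_weight) * national_ffs
    return max(floor, minimum_increase, blend)

# A low-rate county: the blend (380) beats both the floor (367) and
# the 2 percent update on last year's 300 (306).
print(bba_county_rate(300.0, 310.0, 450.0))
```

The max structure explains the behavior described later in the text: in counties where local FFS spending is already high, the minimum update alternative tends to bind, which is why many plans received only the 2-percent increase in 1998 and 1999.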
The changes were intended to address criticisms of the original payment system by loosening the link between local FFS spending increases and managed care rate increases in each county. In addition, the establishment of a minimum payment rate was meant to encourage plans to offer services in rural areas, which have historically had low payment rates and few participating plans. The BBA also directed the Secretary of Health and Human Services to develop and implement a better risk-adjustment system based on beneficiaries’ health status by January 1, 2000. The BBA created the Medicare+Choice program, effective January 1, 1999, to broaden beneficiaries’ health plan options. In addition to HMOs, two new types of managed care organizations were allowed to participate in Medicare: provider-sponsored organizations (PSO) and preferred provider organizations (PPO). The BBA also permits private indemnity plans to serve Medicare beneficiaries and allows beneficiaries to participate in medical savings accounts. Traditional FFS Medicare remains available to all beneficiaries. Other BBA provisions changed the requirements for plans participating in Medicare+Choice. For example, plans are required to implement new and more comprehensive quality improvement programs. Compared with pre-BBA requirements, plans must also collect more information on such activities as appeals filed by enrollees and the number and type of the services provided by the plan; in addition, plans must report more information to HCFA and to beneficiaries. The BBA moved up the date for plans to submit their benefit package proposals from November 15 to May 1 of each year, allowing more lead time to coordinate the beneficiary information campaign. Additionally, the BBA eliminated the requirement that no more than 50 percent of a plan’s enrollment may consist of Medicare and Medicaid beneficiaries. 
The elimination of this restriction means that Medicare plans can now serve areas without first building a commercial base. Plans’ Concerns About BBA Changes While expressing support for many of the changes implemented under the Medicare+Choice program, officials from organizations representing managed care plans have also voiced a number of concerns about payment rates and the administrative burden created by some of the new requirements. They stated that the recent rate increases have not kept pace with plan costs or medical inflation. In both 1998 and 1999, many health plans received the minimum 2-percent payment increase. Managed care plans are also concerned about the impact that the new risk-adjustment methodology will have on payments. HCFA estimates that the new risk-adjustment methodology, which will be phased in over 5 years beginning in 2000, will reduce plan payments by $11.2 billion over the period from 2000 to 2004. This reduction is in addition to the Congressional Budget Office’s (CBO) estimates of $22.5 billion in savings between 1998 and 2002 from the BBA’s plan payment changes. In addition, officials from organizations representing managed care plans believed that many of the new BBA requirements, as implemented by HCFA, are overly prescriptive, too costly, and being phased in too quickly. HCFA has responded to some of these concerns, for example, by giving plans more flexibility in meeting the new quality improvement requirements. Plans would also prefer a later submission date for their benefit package proposals so they can base their proposals on more current data. They believed that the May 1 date—8 months before the start of the contract year—is too early. Plans have to meet a similar deadline in order to participate in the Federal Employees Health Benefits Plan (FEHBP): they must submit similar benefit and rate information by May 31 each year to allow FEHBP to coordinate an information campaign for federal employees. 
To respond to plan concerns, HCFA officials recently changed Medicare’s benefit proposal submission date to July 1, 1999, for the year 2000. Plans, however, continue to have concerns about these and other aspects of the new Medicare+Choice regulations and would like to see further revisions. Withdrawals Reduce Access to Plans for Some Beneficiaries, but New Plan Entries May Increase Access for Others In the fall of 1998, an unusually large number of plans decided not to renew their Medicare contracts for 1999 or to reduce the number of counties in which they offered services. As a result of these decisions, about 7 percent of all Medicare managed care enrollees had to switch to another plan or return to FFS. A small group of the affected beneficiaries was left with no choice but to return to FFS. While some plans were deciding to leave, however, a number of plans were applying to enter the program or expand their existing service areas. If HCFA approves all of these applications, the number of beneficiaries with access to a managed care plan could increase in 1999 compared with 1998. Withdrawals Reduced or Eliminated Managed Care Option for Some Plan Members As of December 1, 1998, there were 346 plans serving Medicare beneficiaries in specific locations. Each plan represents a contract to serve a particular geographic area. Many managed care organizations (MCO), such as Aetna/U.S. Healthcare and Kaiser, operate numerous plans across the country. MCOs terminated 45 (or 13 percent) of these plans as of January 1, 1999. The vast majority of organizations involved in these terminations, however, continue to offer services to Medicare beneficiaries in other areas. For example, Aetna/U.S. Healthcare dropped its plans in Delaware and Maryland but continues to offer plans in California and Florida. An additional 54 plans (16 percent) reduced the number of counties in their service areas.
Nonetheless, over 70 percent of the plans operating in December 1998 remain in Medicare with no reduction in their service areas. These withdrawal decisions affected about 407,000 enrollees who could not continue receiving services in their chosen plan. Instead, they had to either choose a new managed care plan (if one was available in their county) or switch to FFS. About 61,000 of these enrollees, or 1 percent of the total Medicare managed care population, lived in counties in which no other Medicare+Choice plan was offered. Even if another managed care plan was available, about 450 beneficiaries affected by the withdrawals had end-stage renal disease (ESRD) and thus had to return to FFS. Medicare prohibits beneficiaries with ESRD from joining a managed care plan, although they may stay in a plan if they develop the disease while enrolled. For all affected beneficiaries, plan withdrawals can be highly disruptive and costly. Those who return to FFS typically face higher out-of-pocket costs than they incurred as managed care enrollees. Beneficiaries who choose another plan may have to switch health care providers and may have different benefit coverage. Of the 957 counties that were covered by Medicare managed care plans as of September 1, 1998, 406 experienced at least one plan withdrawal; 94 of these counties were left with no Medicare plans. However, of all the instances of plans withdrawing from a county, about 37 percent were by plans withdrawing from a county with 100 or fewer managed care enrollees, including 43 instances in which a plan withdrew from a county with no enrollees. For example, Southeastern United Medigroup of Kentucky eliminated 11 counties from its service area, but had no enrollees in those counties. Consequently, while over 40 percent of counties with at least one plan experienced a plan withdrawal, only 7 percent of managed care enrollees were affected.
New Plan Applications May Mitigate Effects of Withdrawals

While some plans have chosen to curtail their participation in Medicare, new plans are entering the program and some existing plans are expanding the areas they serve. HCFA has approved applications from 10 new plans that were able to enroll beneficiaries as of January or February 1999. HCFA is also reviewing 30 additional new plan applications. In addition, 6 service area expansions had been approved and 14 other service area expansion applications were pending as of January 1999. The number of recently approved and pending applications suggests that there is still considerable plan interest in participating in Medicare. Furthermore, total managed care enrollment has increased following the drop that occurred in January 1999 and is now slightly higher than it was when the withdrawals took effect. The 10 new Medicare plans approved by HCFA as of January 20, 1999, offer services in Florida, Hawaii, Illinois, New Jersey, New Mexico, New York, Ohio, Oregon, Washington, West Virginia, and Wisconsin (fig. 3 shows the counties affected by the new plans and by plan withdrawals). Fourteen of the new or pending plans are applying to enter counties that previously had no Medicare managed care options. In 1998, for example, no plans were available in any of the counties in which the newly approved plans in Illinois and Oregon are offering services. One pending new plan has applied to offer services in 68 counties in Iowa, Minnesota, and South Dakota that did not have any plan as of September 1998. Figure 4 shows those counties that have pending new plan applications or pending service area expansions. Even with these newly approved plans, the number of counties with at least one Medicare managed care plan decreased from 957 in September 1998 to 883 in January 1999 (see table 1).
However, if all pending new applications and expansions are approved, 1,045 counties will have at least one managed care plan, including 181 counties that had no such plans in 1998. These counties are identified in figure 5 along with those counties that no longer have a plan as a result of the withdrawals and service area reductions. Although it is too early to estimate the impact of the recently approved and pending applications on managed care enrollment, it is possible to calculate the number of beneficiaries that have a plan available in their counties. In September 1998, 28.4 million beneficiaries lived in counties served by at least 1 managed care plan (see table 2). In January 1999, that number dropped by almost 800,000 beneficiaries because of plan withdrawals and service area reductions. However, if all pending new applications and service area expansions are approved, slightly more beneficiaries in 1999 will have the option to join a managed care plan than did in 1998. Nonetheless, fewer beneficiaries will have more than one plan to choose from even if all the new applications are approved. Most of the new plan applications are from traditional HMOs. Thus far, HCFA has approved one PSO and no PPOs, medical savings accounts, or private FFS plans. However, it may be too early to assess how many of these new types of health plans will be interested in participating in the program. Medicare+Choice is still very new, and interim final regulations governing the program were just published in June 1998. Plans had little time to prepare and submit applications for 1999. The number and diversity of applications may increase in future years as plans become more familiar with the new program. However, officials from organizations representing managed care plans believe that the reduced growth in payments and increased administrative burden under Medicare+Choice may discourage future plan participation. 
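The beneficiary-access count described above reduces to a simple aggregation: a beneficiary is counted as having a managed care option if at least one plan serves his or her county. The sketch below illustrates the idea with hypothetical county names and figures, not HCFA data.

```python
def beneficiaries_with_access(county_beneficiaries, county_plan_counts):
    """Sum beneficiaries living in counties served by at least one plan."""
    return sum(n for county, n in county_beneficiaries.items()
               if county_plan_counts.get(county, 0) >= 1)

# Hypothetical counties: only Adams and Clark have an available plan
beneficiaries = {"Adams": 12000, "Baker": 8000, "Clark": 5000}
plans = {"Adams": 3, "Baker": 0, "Clark": 1}
print(beneficiaries_with_access(beneficiaries, plans))  # → 17000
```

Running the same sum against the September 1998 and January 1999 plan availability files, and again under the assumption that all pending applications are approved, yields the before-and-after comparison reported in table 2.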
Several Factors, Such as Payment, Enrollment, and Level of Competition, Are Associated With Plan Participation

No one factor can explain why plans choose to participate in particular counties. Although plans obviously consider payment rates, many other factors also influence their business decisions. Our previous work showed that some areas, such as Boston, Massachusetts, had relatively high payment rates in 1993 but few managed care plans and enrollees. Other areas, such as a number of Oregon counties, had low payment rates but still had several managed care plans with high enrollment in 1995. The pattern of recent plan withdrawals suggests that several factors, including payment rates, may have influenced plans’ decisions. A plan was more likely to withdraw from a county where payment rates were low relative to other counties in the plan’s service area, the plan had entered the county in 1992 or later, the plan had low enrollment, or the plan was in a weak competitive position compared with other plans in the county. An unusually high number of plans also withdrew from FEHBP in 1998, suggesting that general market conditions may have played some role in the Medicare plan withdrawals. In some respects, the current Medicare withdrawals are similar to those that occurred in the late 1980s. At that time, many plans left Medicare because they were unable to attract members and were unprofitable. Other factors, such as plans’ inability to establish provider networks, also may have influenced the current withdrawals, but we were unable to quantify those effects.

Plans Withdrew From Both High- and Low-Payment Counties

Both before and after the recent withdrawals, managed care plans were much more likely to offer services in high-payment-rate counties than in low-payment-rate counties. In 1999, for example, 91 percent of counties with monthly payment rates over $694 are served by a managed care plan.
By contrast, only 11 percent of counties with the minimum payment rate of approximately $380 are served by a managed care plan. High-payment-rate counties, however, were disproportionately affected by the withdrawals (see table 3). Over 90 percent of the counties with the highest payment rates experienced a plan withdrawal, compared with 34 percent of counties with the lowest payment rate. It is possible that some plans withdrew from high-payment-rate counties because they anticipated that these counties will receive below-average payment increases in the coming years. In fact, for those counties with payments based on a blend of national and local FFS spending as specified in the BBA, this payment blending provision (expected to be implemented for the first time in 2000) will result in smaller payment increases for higher-payment-rate counties and larger payment increases for lower-payment-rate counties (see app. II for more information on the BBA’s payment provisions). In addition, over the next 5 years, Medicare payments for graduate medical education (GME) will be eliminated from the blended rates. Because GME spending is concentrated in high-payment-rate counties, its removal will disproportionately slow payment rate growth in high-payment-rate counties. Although a smaller percentage of low-payment counties were affected by withdrawals compared with high-payment counties, enrollees living in the low-payment counties were more likely to be affected by the withdrawals. For example, 16 percent of enrollees who lived in counties with the lowest payment rates were affected by a plan withdrawal compared with 1 percent of enrollees in the highest-payment-rate counties (see table 4). These findings indicate that the plans that withdrew from high-payment counties had relatively few members. For plans that dropped selected counties from their service areas, payment rates appear to be one factor that influenced their decisions. 
In 1999, for example, PacifiCare of Arizona withdrew from four of the eight counties in its service area, withdrawing primarily from counties with the lowest payment rates. It continued to provide services in Pinal County, which had the highest payment of all the counties in its service area, but dropped Cochise County, where the payment rate was about 25 percent lower. To assess the impact of relative county payment rates on plans’ service area decisions, we compared the payment rate for each county in a plan’s service area with the highest county payment rate in that plan’s service area. We repeated our calculation for every plan. The results (shown in table 5) suggest that counties with payment rates that were low relative to the maximum county payment rate in a given service area were disproportionately affected by service area reductions. For example, while plans reduced their service areas in 5 percent of counties with payments that were between 90 and 100 percent of a plan’s maximum-payment-rate county, they reduced their service areas in 28 percent of counties that had payment rates between 50 and 60 percent of the plan’s maximum-payment-rate county.

Enrollment, Competition Level, and Other Factors Also Influence Participation Decisions

Several factors, in addition to payment rates, appear to be associated with a plan’s decision to withdraw from a specific county: short length of time operating in the county, low enrollment, and a weak competitive position compared with other Medicare plans in a county. The Medicare managed care program expanded rapidly in recent years; many new plans entered the program, and existing plans expanded the areas they served. The recent withdrawals may represent a market correction—some plans with low Medicare enrollment and in counties dominated by large plans may have concluded that they could not compete effectively and so withdrew. A number of plans left the Medicare program between 1988 and 1991 for similar reasons.
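Returning to the service-area payment comparison above: the county-to-maximum ratio underlying table 5 can be sketched as follows, with the same calculation repeated for every plan. The county names and rates below are hypothetical (loosely echoing the PacifiCare of Arizona example), not actual HCFA payment rates.

```python
def relative_rate_buckets(service_area_rates):
    """For one plan, express each county's monthly payment rate as a
    percentage of the highest rate in that plan's service area, and
    assign it to a 10-point bucket (as in table 5)."""
    max_rate = max(service_area_rates.values())
    buckets = {}
    for county, rate in service_area_rates.items():
        pct_of_max = 100 * rate / max_rate
        low = min(int(pct_of_max // 10) * 10, 90)  # 100 percent falls in "90-100"
        buckets[county] = (round(pct_of_max, 1), f"{low}-{low + 10}")
    return buckets

# Hypothetical service area: Cochise's rate is about 25 percent below Pinal's
print(relative_rate_buckets({"Pinal": 746.0, "Maricopa": 700.0, "Cochise": 560.0}))
```

Tabulating service-area reductions by bucket across all plans then gives the pattern described above: the lower a county's rate relative to the plan's maximum-rate county, the likelier the plan was to drop it.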
Moreover, the market conditions that led to the recent withdrawals may not be unique to the Medicare program. The experience of FEHBP, which also sustained an unusually high number of plan withdrawals this year, suggests that plans may be reacting to general market conditions as well as program-specific ones. Plans were more likely to withdraw from counties in which they had less Medicare experience. We looked at all instances in which a Medicare plan provided services in a county as of February 1998 and determined how long the plan had participated in Medicare in that county. In less than 1 percent of the instances in which a plan entered a county for the first time between 1980 and 1986—that is, plans with more than 12 years of Medicare experience in a county—did the plan withdraw from that county in 1998 (see fig. 6). In contrast, plans were much more likely to withdraw from areas in which they had less than 7 years experience. For example, about one-third of the plans with 5 years of Medicare experience in a county withdrew from that county in 1998. The withdrawal pattern suggests a retrenchment from the rapid growth of Medicare managed care that began in 1994. Plans that had difficulty attracting or retaining enrollees in a county were also more likely to withdraw from that county (see table 6). In almost a third of the instances in which a plan had no enrollees in a county, the plan withdrew from that county. In contrast, in only 12 percent of the instances in which a plan had more than 1,000 enrollees in a county did the plan withdraw. A plan was also more likely to withdraw from a county if it faced larger competitors. Specifically, a plan was more likely to withdraw from a county if its Medicare market share in that county was small relative to the market share of the plan with the highest Medicare enrollment in the county. The bigger the difference in market shares, the more likely the smaller plan was to withdraw from the county. 
Moreover, the smaller plan was more likely to withdraw if the rest of the market was dominated by a few firms rather than divided up among many firms. Some plans may have withdrawn from counties where they found it difficult to build or maintain provider networks. For example, a Medicare HMO in a rural area of North Dakota withdrew from the program when its hospital provider discontinued its contract with the plan. A HCFA official also told us that the two plans that withdrew from Utah made their decisions early in 1998, before the publication of the interim final regulations implementing Medicare+Choice. According to the official, the plans withdrew because they could not contract with enough physicians to maintain adequate provider networks. Physicians wanted higher reimbursements than the plan was willing or able to pay. Officials from organizations that represent managed care plans have also cited the administrative burden of the new Medicare+Choice requirements as a significant reason for plan withdrawal decisions. For the most part, however, this burden was not so great as to induce MCOs to leave the Medicare program entirely. Many national MCOs, such as Aetna/U.S. Healthcare or Kaiser, offer numerous plans across the country. Nearly all of the MCOs that terminated a Medicare plan in one area continued to operate Medicare plans in other areas. Nonetheless, it may be that the increased administrative requirements, coupled with the expected slow growth in payments and the uncertainties associated with a new risk-adjustment methodology, affected some plans’ participation decisions. Finally, an anomaly related to the transition from the previous Medicare managed care program to Medicare+Choice may have played a role in the unusually high number of withdrawals witnessed this year. Under the previous managed care program, if a plan withdrew from a county, it could not reenter that area for 5 years. 
The BBA included a similar provision for Medicare+Choice plans, but did not make it retroactive to include plans with contracts under the earlier program. Plans that withdrew before January 1, 1999, have by definition never been Medicare+Choice plans. Consequently, these plans do not face the exclusion period and can reenter any county without waiting 5 years. The effect of this provision may have been to concentrate some of the plan withdrawals in 1998. Some plans may have viewed this year as a one-time opportunity to pull back from the program while they waited to see what future changes might bring.

Small Reductions Seen in Availability of Some Benefits

Medicare managed care plans have typically offered more generous benefits—such as coverage for prescription drugs, dental care, and hearing exams—than those available in the FFS program. Although the extent of extra benefits varies by plan, they are more commonly offered in high-payment counties. Since the BBA payment changes were implemented, overall beneficiary access to plans that offer certain additional benefits declined slightly. However, beneficiaries who live in low-payment-rate counties experienced greater decreases in access between 1997 and 1999 than the average beneficiary. While the current benefit changes are having a greater impact on beneficiaries in low-payment counties, the BBA constraints on plan payment increases may lead plans to offer less generous benefits in the future to all beneficiaries than they have in the past. Because of limitations in the available data sources, we can only report on whether a plan offered a particular benefit. The scope of the actual benefit may vary significantly among plans and over time. For example, while two plans may offer coverage of prescription drugs, one plan may have a dollar cap on the benefit and offer coverage only for plan-approved drugs, while the second plan may cover drugs without any limitations.
Our study did not distinguish between these two different benefit levels. Plans in counties with lower payments have generally offered fewer additional benefits; as a result, fewer beneficiaries living in lower-payment counties have had the opportunity to join plans that offer these benefits compared with beneficiaries living in higher-payment counties. In 1997, for example, only 61 percent of beneficiaries living in counties with payments under $330 (and with at least one plan) had access to a Medicare plan that offered prescription drug coverage, while 100 percent of beneficiaries living in counties with payments over $658 had such access (see fig. 7).

Benefit Changes Had Larger Impact on Beneficiaries in Low-Payment Counties

In comparing 1997 and 1999 plan benefit packages for beneficiaries living in counties with at least one managed care plan, we found that access to plans offering different additional benefits decreased slightly after the BBA payment changes (see fig. 8). For example, 71 percent of beneficiaries had access to foot care in 1997, but only 64 percent had access to such coverage in 1999. Access to physical examinations and immunizations did not change. The only benefit for which beneficiary access increased was prescription drug coverage—a benefit valued highly by beneficiaries who enroll in plans. Plans may be choosing to offer a different mix of benefits—substituting prescription drug coverage for other services. It is also possible that the drug benefit plans are offering is more limited; for example, it may have a lower maximum dollar amount that the plan will pay. Most beneficiaries with access to a managed care plan can enroll without paying a separate monthly premium. The percentage of beneficiaries living in counties where plans require enrollees to pay a monthly premium increased slightly from 12 percent in 1997 to 15 percent in 1999 (see fig. 9).
In addition, the percentage of beneficiaries living in counties where the minimum plan premium was over $40 increased slightly. Although the changes in beneficiary access to plans offering additional benefits were relatively small, these benefit reductions were concentrated in low-payment-rate counties (see fig. 10). Access to plans offering additional benefits remained nearly constant for beneficiaries in high-payment-rate counties, although we do not know whether plans changed the scope of these benefits. For example, the percentage of beneficiaries in the lowest-payment-rate category with access to Medicare plans offering eye exams decreased from 98 percent in 1997 to 72 percent in 1999. In contrast, all beneficiaries living in the highest-payment-rate counties could obtain covered eye exams from a managed care plan in both years. Access to a plan offering prescription drug coverage, the only benefit for which overall beneficiary access increased between 1997 and 1999, decreased slightly for beneficiaries living in the lowest-payment-rate counties. The decrease in access to plans offering additional benefits in the lowest-payment counties is interesting because these counties experienced an average payment increase of 23 percent between 1997 and 1999 compared with a 4-percent increase for all other counties. It is unclear why coverage of additional benefits would decrease in the lowest-payment counties, given their relatively large payment increase in the past 2 years and higher-than-average payment increases expected in the future. Without data on the level of benefits being offered, the picture is incomplete. Plans in higher-payment-rate counties typically have more competitors than plans in lower-payment counties. Faced with more competition, plans in high-payment-rate counties may prefer to reduce benefit levels rather than eliminate benefit categories altogether. 
For example, a plan may lower the dollar limit on a prescription drug benefit or impose certain restrictions on the benefit. Plans facing less competition in lower-payment-rate counties may be more willing to eliminate benefits in the face of rising costs.

Plans Signal Desire to Revise 1999 Benefit Offerings

BBA constraints on plan payment increases may lead to more global reductions in future plan benefits. One indication of this potential effect is the effort by plans to revise their 1999 benefit packages. In 1998, plans were required to submit their proposed 1999 benefit packages to HCFA much earlier than in previous years and before HCFA had published the regulations implementing the new Medicare+Choice requirements. After HCFA published the new regulations in June 1998, some plans asked to revise their 1999 benefit packages. They argued that their initial submissions did not include the estimated costs of complying with the new regulations. In addition, plans noted that health care costs, especially prescription drug costs, had grown much faster than they had anticipated earlier. HCFA did not allow plans to revise their 1999 benefit packages because doing so might undermine the benefit submissions process. Plans normally establish benefit packages before they know what their competitors will offer. HCFA officials believe this uncertainty may motivate plans to offer more generous benefits. If plans were allowed to revise their benefit packages after they knew what other plans were offering, HCFA was concerned that plans whose original benefit packages were more generous than their competitors’ might reduce enrollee benefits or raise premiums. In addition, it would have been difficult for HCFA to review and approve benefit changes for all plans and still meet the statutory deadline for providing beneficiaries with comparative plan information.
As a result of HCFA’s decision, some plans may have withdrawn from the program because they could not afford to provide the benefit packages they initially proposed. Other plans remained in the program but may revise their benefit packages in the future.

Conclusions

The Medicare provisions of the BBA were intended to control the growth in Medicare expenditures and offer beneficiaries more health plan options. Toward those ends, the BBA slowed the rate of growth in FFS payments to certain health care providers, such as hospitals and physicians, and mandated new payment methodologies for other FFS providers, such as home health agencies. At the same time, the BBA addressed a number of known problems with the Medicare managed care program. It revised plan payments to address significant overpayment problems and to encourage managed care plans to offer services in areas with few plans. It also allowed new types of plans to participate in Medicare and imposed new requirements to ensure the quality of care provided by plans. When plans announced they would be withdrawing from Medicare or reducing the areas in which they offered services, however, some observers expressed concern about the future of Medicare managed care and debated whether certain provisions established by the BBA should be revised. While future plan participation should be monitored, it is premature to conclude that Medicare+Choice must be radically revised to ensure the success of Medicare managed care. Enrollees affected by the withdrawals had to choose another plan or return to FFS, but only 1 percent of previously covered managed care enrollees were left without any Medicare+Choice plans. At the same time, HCFA has approved a small number of new plans and is reviewing 30 new plan applications, indicating continued plan interest in participating in Medicare. Some of these new plans, if approved, would offer services in counties that previously had few or no managed care plans.
The current movement of plans in and out of Medicare may be primarily the normal reaction of plans to market competition and conditions. While the new payment rates and regulations were undoubtedly considered by plans in making their participation decisions, other factors associated with plan withdrawals—recent entry in the county, low enrollment, and higher levels of competition—suggest that a number of Medicare plans withdrew from markets in which they had difficulty competing. During the early years of the Medicare managed care program, a number of plans with low enrollment that were not operating profitably also withdrew from the program. The BBA transformed the Medicare risk program into Medicare+Choice with the goal of taking advantage of the efficiencies and choices that exist in the private managed care market. Medicare may not be able to harness these benefits without also experiencing some of the adjustments that occur in the health care market.

Agency Comments and Our Evaluation

In commenting on our report, HCFA found our analysis of plan participation in the Medicare+Choice program to be sound and agreed with our findings and conclusions. HCFA emphasized that recent trends in the overall managed care market, such as low profit margins, increased competition, and plan consolidations, played a major role in plans’ Medicare+Choice participation decisions. HCFA also noted that the withdrawal of many plans from FEHBP suggests the significance of overall market trends in plans’ decision-making. In its comments, HCFA listed the Medicare+Choice program changes it has proposed to (1) protect beneficiaries affected by plan withdrawals and (2) promote program stability by alleviating plans’ concerns regarding certain administrative requirements. (HCFA’s comments appear in app. III.) HCFA also provided us with technical comments, which we incorporated in the report where appropriate.
We also provided a copy of the draft to representatives of the American Association of Health Plans (AAHP) and the Health Insurance Association of America (HIAA). Both groups expressed concern that our report understates the role that reduced payment increases and the heavier administrative burden created by the Medicare+Choice regulations played in the recent plan withdrawals. Similarly, they disagreed with our conclusion that plans may be responding to current market conditions and competition. Instead, they believe that significant changes in program payments and regulations are needed to ensure future plan and beneficiary participation in Medicare+Choice. (AAHP’s and HIAA’s comments appear in apps. IV and V.) Both groups also provided technical comments, which we incorporated where appropriate. We recognize that the payment rates and administrative requirements of Medicare+Choice may have played a role in the decisions of some plans to withdraw from a county, particularly plans with low enrollment. However, we also believe that plan participation decisions are based on a number of factors. The relative importance of any single factor can be difficult to determine, in part because the significance of its role may vary among plans. We agree with AAHP and HIAA that plan participation in Medicare+Choice should be monitored, but we continue to believe that it is premature to conclude that the program needs to be radically revised. We are sending copies of this report to the Honorable Donna E. Shalala, Secretary of Health and Human Services, and other interested parties. We will make copies available to others on request. If you or your staffs have any questions about this report, please call me at (202) 512-7114 or James Cosgrove at (202) 512-7029. Other major contributors to this report include Kathryn Linehan, Susanne Seagrave, Patricia Spellman, and Michelle St. Pierre.
Scope, Methodology, and Data Sources

We reviewed pertinent laws, regulations, HCFA policies, and research by others to obtain information on the Medicare+Choice program, including its new payment methodology and new requirements for plans. To obtain different perspectives on why plans withdrew or reduced their service areas, we interviewed officials at HCFA’s Center for Health Plans and Providers and representatives from the American Association of Health Plans and the Health Insurance Association of America. We conducted our study from December 1998 to March 1999 in accordance with generally accepted government auditing standards; however, we did not independently verify data obtained from HCFA. To identify counties with a risk plan in 1998, we used HCFA’s September 1998 Medicare Managed Care Geographic Service Area Report (GSAR). We excluded cost, demonstration, and health care prepayment plans from our analyses. In cases in which the contract type was not identified with the plan name and contract number, this information was verified using the September 1998 Medicare Managed Care Market Penetration for All Medicare Plan Contractors—Quarterly State/County/Plan Data Files, September 1998 Medicare Managed Care Contract and Segment Service Area File, or HCFA’s Plan Information Control System. The GSAR provides a list of the service areas for all risk and cost managed care contracts. The count of enrollees by plan by county in a plan’s service area as of September 1998 was obtained from the State/County/Plan Penetration Files. To determine the effects of competitive market forces on plans’ decisions to withdraw from particular counties, we used this enrollee information to construct market shares for each plan in each county. We then used a linear probability regression model to analyze the market share information. To analyze the changes in plan participation in the Medicare+Choice program, we used HCFA data on the 1999 Medicare+Choice plan contracts.
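The market-share construction described above can be sketched as follows. The plan names and enrollment counts are hypothetical, and the sketch covers only the share construction; it stops short of the linear probability regression that was run on the resulting data.

```python
def county_market_shares(enrollment):
    """enrollment: {plan: enrollees in one county}.
    Returns {plan: (market share, gap to the county's largest plan)}."""
    total = sum(enrollment.values())
    shares = {plan: n / total for plan, n in enrollment.items()}
    leader = max(shares.values())
    # The gap to the leading plan serves as a competitive-position measure:
    # the larger the gap, the weaker the plan's position in the county.
    return {plan: (round(s, 3), round(leader - s, 3))
            for plan, s in shares.items()}

# Hypothetical county with one dominant plan and two smaller competitors
print(county_market_shares({"Plan A": 6000, "Plan B": 3000, "Plan C": 1000}))
```

In the analysis, each plan/county observation (withdrew or not) was then regressed on variables such as this market-share gap, enrollment, and relative payment rate.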
HCFA provided us with a list of plans that withdrew from the program or reduced their service areas as of January 1, 1999, and the counties and number of enrollees affected. We used this source to determine enrollees affected by withdrawing plans, because data from the State/County/Plan Penetration Files would have overstated the number of enrollees affected by service area reductions in those cases where a plan withdrew from only part of a county. HCFA also provided a list of new Medicare+Choice plans and service area expansion applications approved and under review as of January 1999 and the counties affected. We noted some inconsistencies between the service areas listed on the GSAR and the list of contract nonrenewals and service area reductions. To improve the accuracy of the GSAR, counties were added if they appeared on the contract nonrenewal/service area reduction file and were listed on the Plan Information Control System as part of a plan’s contracted service area but excluded from the original GSAR. We excluded Guam, Puerto Rico, and the Virgin Islands from the county-level analyses. In some of the analyses, the same counties are defined as separate entities if plans can contract with them separately. For example, Los Angeles County, California, is divided into Los Angeles - 1 and Los Angeles - 2; they are counted separately because plans may contract with them separately. The independent cities of Virginia are also counted as separate counties because their payment rates differ from those of their counties, and plans contract to serve these areas as if they were independent counties. County-level payment rate information for 1990 to 1999 for Medicare risk plans and Medicare+Choice plans, including payment reductions resulting from the removal of graduate medical education (GME) spending, was obtained from the HCFA Web site.
In addition, we obtained a February 1998 file from HCFA’s Office of Information Systems containing historical county-level information on the year that plans first entered individual counties. There are 29 cases in which plans that withdrew from a particular county in January 1999 are not listed in the historical county-level information as ever having served that county. Some plans may have started serving these counties after February 1998, or the information might have been inadvertently omitted from the historical county-level information. As a result, the total number of plan/county combinations affected by the recent withdrawals contained in this file is incomplete. We obtained information on the number of Medicare beneficiaries by county and Medicare managed care enrollees by county and plan from HCFA. The September 1998 Medicare Managed Care Market Penetration for All Medicare Plan Contractors—Quarterly State/County Data Files showed the number of Medicare beneficiaries by county. This file was used to determine the net effect of the plan withdrawals, new plans, and service area expansions on Medicare beneficiary access to 1999 Medicare+Choice plans and benefits. Similarly, 1997 beneficiaries by county were counted from the December 1997 State/County Penetration Files. Medicare managed care plan enrollment for 1999 was obtained from the September 1998 State/County/Plan Penetration Files. To obtain information on benefits offered by plans in 1999, we used the 1999 Medicare Compare database and the January 1999 Medicare Managed Care Monthly Report. Merging these two sources provided us with plan benefits at a county level. We compared the 1999 benefits with 1997 benefits to identify any changes. We chose 1997 because it was the year before the implementation of the BBA changes. 
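The county-level benefit merge described above can be sketched with plain dictionaries. This is a hypothetical illustration only: the contract numbers, county names, and benefit fields are invented, and the real merge of the Medicare Compare database, Monthly Report, and GSAR files involved far more fields and edge cases.

```python
# Combine a plan-level benefits table (as from the Medicare Compare
# database and Monthly Report) with the plan-to-county service area
# listing (as from the GSAR) to view benefits at the county level.
# All identifiers and values below are invented for illustration.

plan_benefits = {                      # keyed by plan contract number
    "H0001": {"drugs": True,  "dental": False},
    "H0002": {"drugs": False, "dental": True},
}
service_area = [                       # (contract number, county) pairs
    ("H0001", "county1"), ("H0001", "county2"), ("H0002", "county1"),
]

# County-level view: which plans (and benefits) serve each county
county_benefits = {}
for contract, county in service_area:
    county_benefits.setdefault(county, {})[contract] = plan_benefits[contract]

# e.g., whether beneficiaries in county1 can join a drug-coverage plan
has_drug_plan = any(b["drugs"] for b in county_benefits["county1"].values())
```

Comparing such county-level views built from the 1997 and 1999 source files is what supports the benefit-change findings discussed later.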
To obtain information on benefits offered by plans in 1997, we used the December 1997 GSAR, the December 1997 Medicare Managed Care Monthly Report, and the 1997 adjusted community rate submissions. Merging these three sources gave us benefits provided by each plan at a county level. Where a plan provided flexible benefits to a county in 1997, those benefits were used in the analyses. Of the 307 risk plans that contracted with HCFA in December 1997, we did not have benefit information for 10 plans. These 10 plans were excluded from both the benefit and premium analyses. Of the 311 plans contracting with HCFA in January 1999, 5 plans were excluded from the benefit analysis and 8 plans were excluded from the premium analysis because of a lack of benefit or premium information. BBA Changes to Plan Payment Methodology The BBA changed how payments to Medicare managed care plans were calculated in response to criticisms that the rates (1) overcompensated many plans for the beneficiaries they served, (2) varied greatly among counties, and (3) were too low in certain rural areas. This appendix describes the pre-BBA and post-BBA payment methodologies. Plan Payments Before the BBA Before the BBA changed the rate-setting process in 1998, the monthly amount Medicare paid managed care plans for each plan enrollee was directly tied to local spending in the FFS program. Although the actual rate-setting formula was complex, the methodology, in effect, was as follows. Each year, HCFA estimated how much it would spend in each county to serve the “average” FFS Medicare beneficiary. Because managed care plans were assumed to be more efficient than FFS, Medicare set plan payments in each county at 95 percent of the FFS amount. Payments for individual beneficiaries were based on county of residence. 
Because some beneficiaries were expected to require more health care services than others, HCFA adjusted the payment for each beneficiary up or down from the county payment depending on the beneficiary’s age, sex, and eligibility for Medicaid and whether the beneficiary was a resident in an institution. In 1997, the average county payment was $395 per month. This average increases to $468 when weighted by the number of beneficiaries in each county. From county to county, however, the rates vary dramatically. For example, a plan that served an average beneficiary in Arthur County, Nebraska, would have received about $221 per month. A plan that served a similar beneficiary in Richmond County (Staten Island), New York, would have received approximately $767. The wide variation in capitation rates among counties reflected the underlying variation in Medicare per-beneficiary FFS spending, which in turn was the result of local differences in the price and use of medical services. New Rate-Setting Process Under the BBA The BBA loosened the link between the payment rate in each county and the average FFS spending in that county. This change was made to reduce the wide disparity in payment rates that existed under the previous system. Payment rates in each county are now set at the highest of three possible payment rates: a minimum or “floor” rate, a minimum increase rate, and a “blended” rate. The BBA established a floor rate of $367 in 1998. The floor rate will be increased each year to reflect overall growth in Medicare spending. The BBA also established a minimum rate increase of at least 2 percent each year in every county. Finally, the BBA specified a blended rate for each county that reflects a combination of local and national average FFS spending. The blended rate is designed to reduce payment rate variation among counties. 
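The two rate-setting rules can be summarized in a short sketch. Only the 95-percent multiplier, the $367 floor, and the 2-percent minimum increase come from the report; the demographic factor, blend weight, and any dollar inputs passed to these functions are illustrative assumptions.

```python
def pre_bba_payment(county_ffs_estimate, demographic_factor=1.0):
    """Pre-BBA: 95 percent of estimated county FFS spending, adjusted
    up or down for the beneficiary's age, sex, Medicaid eligibility,
    and institutional status (collapsed here into a single factor)."""
    return 0.95 * county_ffs_estimate * demographic_factor

def blended(local_rate, national_rate, national_weight):
    """Blend of local and national average FFS spending; the national
    weight rises over time to compress variation across counties."""
    return (1 - national_weight) * local_rate + national_weight * national_rate

def bba_payment(prior_rate, blended_rate, floor=367.0):
    """BBA: the highest of the floor rate, a 2-percent minimum
    increase over the prior year, and the blended rate."""
    return max(floor, 1.02 * prior_rate, blended_rate)
```

For example, a county with a hypothetical prior rate of $350 and a low blended rate would receive the $367 floor, since the 2-percent increase yields only $357.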
Blending will reduce payment increases in counties whose average FFS spending has been higher than the national average and will create larger payment increases in counties whose average FFS spending has been lower than the national average. Over time, the blended rate will rely more heavily on the national rate and less on the local rate. In 1998 and 1999, plans received either the floor rate or a 2-percent increase over their payment from the previous year. Because of a BBA requirement to keep overall county payments budget neutral to what they would have been without the legislation, no county received the blended rate in 1998 or 1999. For the year 2000, however, payment for 63 percent of counties will be based on the blended rate.
Comments From the Health Care Financing Administration
Comments From the Health Insurance Association of America
Comments From the American Association of Health Plans
Pursuant to a congressional request, GAO provided information on managed care plans' decisions to leave the Medicare program or to reduce the geographic areas that they serve, focusing on: (1) plans that receive capitated payments; (2) the patterns of plan and beneficiary participation in managed care; (3) factors associated with plans' decisions to enter or leave the Medicare+Choice program; and (4) changes in plans' benefit packages and premiums. GAO noted that: (1) although an unusually large number of managed care plans left the Medicare program, a number of new plans have demonstrated their interest in serving beneficiaries by applying to enter the program or expanding the areas in which they offer services; (2) last fall, shortly before Medicare+Choice was implemented, 45 plans announced they would not renew their Medicare contracts and 54 others announced they would reduce the geographic areas in which they provided services; (3) about 407,000 enrollees had to choose a new managed care plan or switch to fee-for-service; (4) at the same time, however, several new plans applied to enter the program; (5) thus far, the Health Care Financing Administration has approved 10 new plans for 1999 and is reviewing 30 additional plan applications; (6) some of the pending plan applications are for counties that previously had few or no managed care plans; (7) plan withdrawals cannot be traced to a single cause; a variety of factors appear to be associated with plans' participation decisions; (8) payment level is one factor that influences where plans offer services, but withdrawals were not limited to counties with low payments; (9) when a plan reduced its service area, however, GAO found that counties with low payment rates relative to payments in the rest of a plan's service area were more likely to experience a withdrawal than counties with higher payment rates; (10) a review of other factors suggests that a portion of the withdrawals may have been the result of plans
deciding that they were unable to compete effectively in certain areas; (11) for example, plans were more likely to withdraw from counties where they had begun operating since 1992, where they had attracted fewer enrollees, or where they faced larger competitors; (12) some plans have indicated that they withdrew from areas where they were unsuccessful in establishing sufficient provider networks; (13) a broad comparison of plan benefit packages from 1997 and 1999 indicates modest reductions in the inclusion of certain benefits; (14) in 1999, a slightly greater percentage of beneficiaries can join a plan that offers prescription drug coverage, while a slightly smaller percentage of beneficiaries have access to a plan offering dental care, hearing exams, and foot care; (15) beneficiaries living in the lowest-payment-rate areas experienced greater decreases in access than the average beneficiary; and (16) those living in the lowest payment areas experienced a decrease in access to plans offering prescription drug benefits, while beneficiaries in higher payment areas saw an increase in access to plans offering drug benefits.